
HADOOP-18708: Support S3 Client Side Encryption (CSE) With AWS SDK V2 #6884

Merged · 6 commits · Nov 14, 2024

Conversation

@shameersss1 (Contributor) commented Jun 12, 2024

Description of PR

This commit adds support for S3 client side encryption (CSE). CSE can be configured in two modes: CSE-KMS, where keys are provided by AWS KMS, and CSE-CUSTOM, where custom keys are provided by implementing a custom keyring.

CSE is implemented using the S3EncryptionClient (V3 client). Additional configurations (mentioned below) were added to make it compatible with the older encryption clients (V1 and V2); these are turned OFF by default.

In order to be compatible with the V1 client, the following operations are performed:

  1. The V1 client pads objects up to the next multiple of 16 bytes, i.e. if the file size is 12 bytes, 4 bytes of padding are added to make it a multiple of 16. To get the unencrypted size of such an S3 object, a ranged S3 GET call is made.
  2. Unlike the V1/V2 clients, the V3 client does not support reading unencrypted objects. An additional S3 client (the base client) is introduced to read a mix of encrypted and unencrypted S3 objects.

Default Behavior

The configurations that enable backward compatibility are turned OFF by default, given their performance implications. The default behavior is as follows:

  1. The unencrypted file size is computed by simply subtracting 16 bytes from the stored file size.
  2. When there is a mix of unencrypted and encrypted S3 objects, the client fails.
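The two size calculations above can be sketched as follows. This is an illustrative sketch of the arithmetic only; the class and method names are hypothetical, not the names used in the patch:

```java
// Illustrative sketch of the size arithmetic described above; class and
// method names are hypothetical, not those used in the patch.
public class CseSizeSketch {

    static final int AES_BLOCK_SIZE = 16;

    // Default mode: V2/V3 (AES-GCM) append a fixed 16-byte auth tag,
    // so the unencrypted length is simply the stored length minus 16.
    static long unencryptedLengthDefault(long storedLength) {
        return storedLength - AES_BLOCK_SIZE;
    }

    // V1 (AES-CBC) pads plaintext up to the next multiple of 16, so a
    // 12-byte file is stored as 16 bytes (4 bytes of padding). Because the
    // padding may be anywhere from 1 to 16 bytes, the stored length alone
    // is ambiguous, which is why V1 compatibility mode needs a ranged GET
    // to determine the true plaintext length.
    static long storedLengthV1(long plaintextLength) {
        return (plaintextLength / AES_BLOCK_SIZE + 1) * AES_BLOCK_SIZE;
    }

    public static void main(String[] args) {
        System.out.println(storedLengthV1(12));           // 16
        System.out.println(unencryptedLengthDefault(28)); // 12
    }
}
```

Note that for a plaintext that is already a multiple of 16 (e.g. 16 bytes), CBC-style padding adds a full extra block (stored as 32 bytes), so the default subtract-16 heuristic would be wrong for V1-written objects; that is why the compatibility flag exists.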

This PR is based on the initial work done by @ahmarsuhail as part of #6164

How was this patch tested?

  1. Tested in us-east-1 with `mvn -Dparallel-tests -DtestsThreadCount=16 clean verify`.
  2. Added integration tests for CSE-KMS and CSE-CUSTOM.

@shameersss1 (Contributor Author)

@steveloughran @ahmarsuhail Could you please review the changes?

@steveloughran (Contributor) left a comment

Sorry, no time to review right now: vector IO problems.

I don't want another S3 client, as it will only hurt boot time. We are already hitting problems here where client init time is O(jar), and I'm thinking of lazy creation of the async one.

look at how we did this before: we altered file lengths in listings.

  • no to netty dependencies
  • no to any dependency on the new library except when CSE is enabled.

@steveloughran (Contributor)

FYI, Jira on create performance. I'm worrying about how to remove/delay client configuration, so adding a new client makes things way worse: HADOOP-19205

@shameersss1 (Contributor Author)

> FYI, Jira on create performance. I'm worrying about how to remove/delay client configuration, so adding a new client makes things way worse: HADOOP-19205

By default we are not initializing a new S3 client. It is done under a config, to provide backward compatibility only when required.

@steveloughran (Contributor)

It's still happening in FileSystem.initialize().

If I get time this week I'll do my PoC of lazy creation of the transfer manager and async client, as that has doubled startup time already. All of that will be moved behind S3AStore with only the copy() methods exposed.

Ultimately I want the S3Client itself hidden behind that API, so here

  • new unencryptedS3 operations getLengthOfEncryptedObject(path), ..., would trigger on-demand creation of the client
  • an accessor to this is passed down.

Actually, createFileStatusFromListingEntry() could be part of S3AStore too, somehow. All of the config information for the creation, especially the CSE flags, would be in the store config, so listing wouldn't need it.

@steveloughran (Contributor)

#6892 is the draft PR with lazy creation; a new S3 client would go in there, but ideally we'd actually push the listing/transformation code into S3AStore so the extra client would never be visible outside it.

@shameersss1 (Contributor Author)

> #6892 is the draft PR with lazy creation; a new S3 client would go in there, but ideally we'd actually push the listing/transformation code into S3AStore so the extra client would never be visible outside it.

@steveloughran - I really like the idea of lazily initializing the async client, and the new interface for creating the S3 client. I will wait for your changes to get in and refactor my code based on them. That way I could lazily initialize the unencrypted S3 client as well, when the config is enabled.

@steveloughran (Contributor)

Exactly. When it's not needed: no penalty.

@shameersss1 (Contributor Author)

@steveloughran @ahmarsuhail I have rebased the changes on top of #6892.
Please review the changes.

Thanks

@shameersss1 force-pushed the HADOOP-18708 branch 2 times, most recently from 2593281 to bc9cfbd on July 8, 2024 12:50
@steveloughran (Contributor) left a comment

(IntelliJ seems to be hiding or has lost some of the review comments... let me see once this review is submitted.)

I'm afraid one thing I now want changed across the entire code base is to stop passing S3 client instances around, so that we can move in future to using the AsyncClient everywhere. The S3AStoreImpl class would provide accessor methods for all operations needed; S3AFS would invoke them, while subsidiary components/classes would invoke S3AStore via callbacks which would never call into S3AFS. See BulkDeleteOperationCallbacksImpl for an example.

As well as delivering the alleged speed-ups of the async client, the isolation should help testing, maintenance etc. My ultimate goal would be for the S3AFS class to be a lot more minimal, with no Callback ever being implemented in it.

Anyway, that is a big undertaking. For now: no new accessors of the S3 client. And no network IO operations in S3AUtils, which should already be small utility methods (which is also why error handling is moving to )

Proposed:

  1. ListingOperationCallbacks to provide a createFileStatus() call to create a file status
  2. different implementations/branches of that, calling either the S3AUtils class or something which does the extra IO

This is going to have to be pulled out into listing callbacks, executed within the same auditspan and updating statistics. If a client is doing any IO we need to know who and why (server side) and how long (client).

@shameersss1 (Contributor Author)

@steveloughran I really appreciate your time and effort to review these changes. Indeed, it was a detailed review.
I have tried to address all your comments and raised a new revision (I hope I have covered them all).

Please take a look

Thanks

@steveloughran (Contributor)

@shameersss1 unless you have no choice, please try not to rebase and force-push PRs once reviews have started; it makes it a lot harder to see what has changed.

@@ -727,6 +736,28 @@ S3-CSE to work.
</property>
```

#### 2. CSE-CUSTOM
- Set `fs.s3a.encryption.algorithm=CSE-CUSTOM`.
- Set `fs.s3a.encryption.cse.custom.cryptographic.material.manager.class.name=<fully qualified class name>`.
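As a sketch, the two settings above would look like this in `core-site.xml`; the keyring class name value here is a placeholder for whatever custom implementation is supplied, not a real class:

```xml
<!-- Enable CSE with a custom keyring (class name value is a placeholder). -->
<property>
  <name>fs.s3a.encryption.algorithm</name>
  <value>CSE-CUSTOM</value>
</property>
<property>
  <name>fs.s3a.encryption.cse.custom.cryptographic.material.manager.class.name</name>
  <value>org.example.CustomKeyring</value>
</property>
```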
Contributor:

Should the new fs.s3a.encryption.cse properties be added to core-default.xml and index.md?

Contributor Author:

These configurations will be shown as part of the encryption page, similar to https://hadoop.apache.org/docs/r3.4.0/hadoop-aws/tools/hadoop-aws/encryption.html

@shameersss1 (Contributor Author)

> @shameersss1 unless you have no choice, please try not to rebase and force push prs once reviews have started. makes it a lot harder to see what has changed

Yeah, I had to rebase due to merge conflicts in the hadoop-common module. This commit (838dc24) addresses the review comments.

@raphaelazzolini (Contributor) left a comment

I think there is a little bit of complexity in S3AFileSystem, with if (isCSEEnabled) conditions in many places. I wonder if, instead of repeating the boolean checks all over the code, we could have a handler interface and extract the code into two implementations: one for CSE enabled and one for no CSE.

Other than that, the change looks good to me.

@shameersss1 (Contributor Author)

@steveloughran - Gentle reminder for the review
Thanks

@steveloughran (Contributor)

@shameersss1 what do you think of @raphaelazzolini's comment?

@steveloughran (Contributor) left a comment

HEAD

I want to push all S3 client invocations behind the S3AStore interface, so we can move to the async client with ease. Then move things like ListingOperationCallbacks out of S3AFS, and only invoke S3AStore operations.

I'm going to suggest you add a headObject(path) there which implements the core operation. Invoke it in S3AFS.getObjectMetadata() as well as in the new classes.

There are way too many HEAD requests going on now.
The result of any previous HEAD MUST be passed down and its values used/reused.

The requirements before any merge are:

  • for a getFileStatus call, the # of HEAD requests SHALL be 1
  • for directory listings, the # of HEAD requests SHALL be the same as the number of files in the list
  • for delete and rename, there SHALL be no extra HEAD calls, and .instruction files are processed

Deletion/rename

The skipping of .instruction files means that rename and delete won't work if they exist, because OperationCallbacksImpl uses AcceptAllButS3nDirsAndCSEInstructionFile().

  1. Write a test to demonstrate this, or that I am wrong.
  2. Fix it.

I would've expected the rename or delete test to have thrown this up if they had been executed against any directories with some of these files.

Tests should create these files, even if it is just by PUTting 0-byte files of that name.
Then verify that delete and rename handle them.

Side issue: what about delete(key) and BulkDelete? They'll just leave the files around, won't they? That'll be something to document.

minor

Can you review the new failure conditions and make sure the troubleshooting docs are current.

Some of the javadocs and method argument lists are over 100 lines.
Cut them down, except in the special case that "the code actually looks worse" when you do. This is to make it easier to view code side-by-side, either in your IDE or the GitHub PR reviewer.

* @throws IOException If an error occurs while retrieving the object.
*/
@Override
public ResponseInputStream<GetObjectResponse> getObject(S3AStore store, GetObjectRequest request,
Contributor:

move send arg onto next line

Contributor Author:

ack

@Override
public long getS3ObjectSize(String key, long length, S3AStore store, String bucket,
RequestFactory factory, HeadObjectResponse response) throws IOException {
return CSEUtils.getUnPaddedObjectLength(store.getOrCreateS3Client(), bucket,
Contributor:

pass the S3AStore into CSEUtils, have it create the client.

Contributor Author:

ack

/**
* S3 client side encryption (CSE) utility class.
*/
@InterfaceAudience.Public
Contributor:

private

Contributor Author:

ack

* @throws IOException
*/
@Test
public void testSkippingCSEInstructionFileWithV1Compatibility() throws IOException {
Contributor:

This is where you should add the new delete and rename tests: create the file in subdir 1 under methodPath, rename it, verify the source dir is gone; delete the rename target dir, again assert not found.

Contributor Author (@shameersss1), Oct 30, 2024:

I agree with you: the instruction file is causing too much confusion/difficulty. I think let's ignore the instruction-file case for the time being and document that instruction files are not supported.

Class exceptionClass = AccessDeniedException.class;
if (CSEUtils.isCSEEnabled(getEncryptionAlgorithm(
getTestBucketName(getConfiguration()), getConfiguration()).getMethod())) {
exceptionClass = AWSClientIOException.class;
Contributor:

Why a different exception? It should be access denied in both cases, shouldn't it?

After all, that is what the test is meant to do: verify that delete-with-no-permissions errors are translated.

Contributor Author:

Unlike the normal S3 client, which throws AccessDeniedException, the S3 encryption client throws AWSClientIOException.

Contributor:

So the SDK is raising some exception of its own? Does it have the same semantics as Access Denied, that is: should it be remapped?

Contributor Author:

Yes, the semantics are the same: "Caused by: software.amazon.encryption.s3.S3EncryptionClientException: Access Denied (Service: S3, Status Code: 403,"

Contributor:

Can you show me the full stack? A 403 normally maps to AccessDeniedException, and it'd be good to keep that the same. AWSClientIOException is just our "something failed" exception when there's nothing else.

  1. Add maybeTranslateEncryptionClientException() in ErrorTranslation to look at the exception; if the classname string value matches "software.amazon.encryption.s3.S3EncryptionClientException" then map to AccessDeniedException
  2. call that from S3AUtils.translateException just before the // no custom handling. bit

It may seem bad, but look at maybeExtractChannelException() and other bits to see worse.
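A minimal sketch of the helper proposed above, matching the encryption client's exception by class-name string so there is no compile-time dependency on the CSE library. This is illustrative only and hypothetical; it is not the code in ErrorTranslation:

```java
import java.io.IOException;
import java.nio.file.AccessDeniedException;

// Hypothetical sketch of the proposed maybeTranslateEncryptionClientException()
// helper: compare the exception's class name as a string (avoiding a hard
// dependency on the encryption library) and remap a match to
// AccessDeniedException; return null otherwise, so the caller falls through
// to the generic "no custom handling" path.
public final class ErrorTranslationSketch {

    private static final String ENCRYPTION_CLIENT_EXCEPTION_CLASS =
        "software.amazon.encryption.s3.S3EncryptionClientException";

    static IOException maybeTranslateEncryptionClientException(
            String path, Exception e) {
        if (ENCRYPTION_CLIENT_EXCEPTION_CLASS.equals(e.getClass().getName())) {
            // Preserve the original failure as the cause for diagnostics.
            return (IOException) new AccessDeniedException(
                path, null, e.toString()).initCause(e);
        }
        return null; // not an encryption client failure
    }
}
```

The string comparison is deliberate: it lets the translation code live in a module that never loads the encryption client's classes.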

Contributor Author:

The following is the complete stack trace:

[ERROR] testSingleObjectDeleteNoPermissionsTranslated(org.apache.hadoop.fs.s3a.ITestS3AFailureHandling)  Time elapsed: 26.964 s  <<< ERROR!
org.apache.hadoop.fs.s3a.AWSClientIOException: delete on <>: software.amazon.encryption.s3.S3EncryptionClientException: Access Denied (Service: S3, Status Code: 403,: Access Denied (Service: S3, Status Code: 403, )
	at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:228)
	at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:124)
	at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:163)
	at org.apache.hadoop.fs.s3a.S3AFileSystem$OperationCallbacksImpl.deleteObjectAtPath(S3AFileSystem.java:2637)
	at org.apache.hadoop.fs.s3a.impl.DeleteOperation.deleteObjectAtPath(DeleteOperation.java:383)
	at org.apache.hadoop.fs.s3a.impl.DeleteOperation.execute(DeleteOperation.java:234)
	at org.apache.hadoop.fs.s3a.impl.DeleteOperation.execute(DeleteOperation.java:70)
	at org.apache.hadoop.fs.s3a.impl.ExecutingStoreOperation.apply(ExecutingStoreOperation.java:76)
	at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.invokeTrackingDuration(IOStatisticsBinding.java:547)
	at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:528)
	at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDuration(IOStatisticsBinding.java:449)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.deleteWithoutCloseCheck(S3AFileSystem.java:3595)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.delete(S3AFileSystem.java:3571)
	at org.apache.hadoop.fs.s3a.ITestS3AFailureHandling.lambda$testSingleObjectDeleteNoPermissionsTranslated$2(ITestS3AFailureHandling.java:207)
	at org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:500)
	at org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:386)
	at org.apache.hadoop.fs.s3a.ITestS3AFailureHandling.testSingleObjectDeleteNoPermissionsTranslated(ITestS3AFailureHandling.java:206)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.lang.Thread.run(Thread.java:750)
Caused by: software.amazon.encryption.s3.S3EncryptionClientException: Access Denied (Service: S3, Status Code: 403,
	at software.amazon.encryption.s3.S3EncryptionClient.deleteObject(S3EncryptionClient.java:377)
	at org.apache.hadoop.fs.s3a.impl.S3AStoreImpl.lambda$deleteObject$5(S3AStoreImpl.java:677)
	at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.invokeTrackingDuration(IOStatisticsBinding.java:547)
	at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:528)
	at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:468)
	at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:431)
	at org.apache.hadoop.fs.s3a.impl.S3AStoreImpl.deleteObject(S3AStoreImpl.java:668)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.deleteObject(S3AFileSystem.java:3223)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.deleteObjectAtPath(S3AFileSystem.java:3248)
	at org.apache.hadoop.fs.s3a.S3AFileSystem$OperationCallbacksImpl.lambda$deleteObjectAtPath$0(S3AFileSystem.java:2638)
	at org.apache.hadoop.fs.s3a.Invoker.lambda$once$0(Invoker.java:165)
	at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:122)
	... 30 more
Caused by: software.amazon.awssdk.services.s3.model.S3Exception: Access Denied (Service: S3, Status Code: 403, 
	at software.amazon.awssdk.protocols.xml.internal.unmarshall.AwsXmlPredicatedResponseHandler.handleErrorResponse(AwsXmlPredicatedResponseHandler.java:156)
	at software.amazon.awssdk.protocols.xml.internal.unmarshall.AwsXmlPredicatedResponseHandler.handleResponse(AwsXmlPredicatedResponseHandler.java:108)

Contributor Author:

Yes, error translation should do the job.


- The V1 and V2 clients support reading unencrypted S3 objects, whereas the V3 client does not; an additional base client is needed in order to read S3 objects in a directory containing a mix of encrypted and unencrypted objects.
- Unlike the V2 and V3 clients, which always pad 16 bytes, the V1 client pads extra bytes up to the next multiple of 16. For example, if the unencrypted object size is 12 bytes, the V1 client pads an extra 4 bytes to make it a multiple of 16.
- The V1 client supports storing encryption metadata in an instruction file.
Contributor:

Explain this in its own section, along with its consequences: file rename loses the instruction file, and file delete doesn't clean it up.

Contributor Author:

ack

Contributor Author:

We won't be considering instruction files, and have documented the same.

### Compatibility Issues

- The V1 and V2 clients support reading unencrypted S3 objects, whereas the V3 client does not; an additional base client is needed in order to read S3 objects in a directory containing a mix of encrypted and unencrypted objects.
- Unlike the V2 and V3 clients, which always pad 16 bytes, the V1 client pads extra bytes up to the next multiple of 16. For example, if the unencrypted object size is 12 bytes, the V1 client pads an extra 4 bytes to make it a multiple of 16.
Contributor:

  • Unlike the V2 and V3 clients, which always append 16 bytes to a file, the V1 client pads extra bytes to the next multiple of 16. For example, if the unencrypted object size is 28 bytes, the V1 client pads an extra 4 bytes to make it a multiple of 16.

Contributor Author:

ack.

*/
public static final String S3_ENCRYPTION_CSE_INSTRUCTION_FILE_SUFFIX = ".instruction";


Contributor:

nit: cut the surplus lines

Contributor Author:

ack

@steveloughran (Contributor)

Thinking about this some more: those .instruction files raise a lot of problems; I don't like the way they are handled.

There's already the notion that files beginning with . or _ are hidden from applications scanning directories.

So do we really need to hide them at all? Making them visible stops us playing yet more tricks with the listings.

What is actually in the files? Is it just some list of attributes encrypted with the same CSE settings?

@shameersss1 (Contributor Author)

shameersss1 commented Oct 30, 2024

> Thinking about this some more: those .instruction files raise a lot of problems; I don't like the way they are handled.
>
> There's already the notion that files beginning with . or _ are hidden from applications scanning directories.
>
> So do we really need to hide them at all? Making them visible stops us playing yet more tricks with the listings.
>
> What is actually in the files? Is it just some list of attributes encrypted with the same CSE settings?

  1. The instruction file contains metadata about the encryption and some headers.
  2. If a file named "abc" is written to S3 with CSE V1 in instruction mode, two files are stored: "abc" and "abc.instruction".
  3. So we are ultimately skipping all files ending in .instruction when that flag is enabled.
  4. I understand your concern: skipping .instruction files means they are not deleted when a delete or rename operation is called.

Just thinking if there is any better way to solve this, or maybe we can simply state that ".instruction" files are not supported and should not be present. @steveloughran any thoughts on this?

@shameersss1 (Contributor Author)

Force-pushed to rebase onto trunk.

@steveloughran - Thanks a lot for the detailed review. I know it is a pain to review changes this big (~50 files). I really appreciate the time you put into this.

I have addressed your comments in the new commit.

PS: Ignored instruction file handling for simplicity.

@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 18m 8s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 1s No case conflicting files found.
+0 🆗 codespell 0m 0s codespell was not available.
+0 🆗 detsecrets 0m 0s detect-secrets was not available.
+0 🆗 xmllint 0m 0s xmllint was not available.
+0 🆗 markdownlint 0m 0s markdownlint was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 17 new or modified test files.
_ trunk Compile Tests _
+0 🆗 mvndep 14m 41s Maven dependency ordering for branch
+1 💚 mvninstall 34m 38s trunk passed
+1 💚 compile 17m 26s trunk passed with JDK Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04
+1 💚 compile 15m 59s trunk passed with JDK Private Build-1.8.0_422-8u422-b05-1~20.04-b05
+1 💚 checkstyle 4m 19s trunk passed
+1 💚 mvnsite 3m 24s trunk passed
+1 💚 javadoc 2m 51s trunk passed with JDK Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04
+1 💚 javadoc 2m 24s trunk passed with JDK Private Build-1.8.0_422-8u422-b05-1~20.04-b05
+0 🆗 spotbugs 0m 45s branch/hadoop-project no spotbugs output file (spotbugsXml.xml)
+1 💚 shadedclient 35m 53s branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 🆗 mvndep 0m 39s Maven dependency ordering for patch
+1 💚 mvninstall 1m 44s the patch passed
+1 💚 compile 16m 39s the patch passed with JDK Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04
+1 💚 javac 16m 39s the patch passed
+1 💚 compile 16m 7s the patch passed with JDK Private Build-1.8.0_422-8u422-b05-1~20.04-b05
+1 💚 javac 16m 7s the patch passed
-1 ❌ blanks 0m 0s /blanks-eol.txt The patch has 18 line(s) that end in blanks. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply
-0 ⚠️ checkstyle 4m 15s /results-checkstyle-root.txt root: The patch generated 2 new + 34 unchanged - 0 fixed = 36 total (was 34)
-1 ❌ mvnsite 0m 55s /patch-mvnsite-hadoop-tools_hadoop-aws.txt hadoop-aws in the patch failed.
+1 💚 javadoc 2m 45s the patch passed with JDK Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04
+1 💚 javadoc 2m 26s the patch passed with JDK Private Build-1.8.0_422-8u422-b05-1~20.04-b05
+0 🆗 spotbugs 0m 40s hadoop-project has no data from spotbugs
+1 💚 shadedclient 35m 50s patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 💚 unit 0m 38s hadoop-project in the patch passed.
+1 💚 unit 19m 39s hadoop-common in the patch passed.
+1 💚 unit 3m 0s hadoop-aws in the patch passed.
+1 💚 asflicense 1m 5s The patch does not generate ASF License warnings.
271m 51s
Subsystem Report/Notes
Docker ClientAPI=1.47 ServerAPI=1.47 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6884/18/artifact/out/Dockerfile
GITHUB PR #6884
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint markdownlint
uname Linux 93ff913c98eb 5.15.0-124-generic #134-Ubuntu SMP Fri Sep 27 20:20:17 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / 1deb43a
Default Java Private Build-1.8.0_422-8u422-b05-1~20.04-b05
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_422-8u422-b05-1~20.04-b05
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6884/18/testReport/
Max. process+thread count 1252 (vs. ulimit of 5500)
modules C: hadoop-project hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: .
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6884/18/console
versions git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.

@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 0m 52s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 0s No case conflicting files found.
+0 🆗 codespell 0m 1s codespell was not available.
+0 🆗 detsecrets 0m 1s detect-secrets was not available.
+0 🆗 xmllint 0m 1s xmllint was not available.
+0 🆗 markdownlint 0m 1s markdownlint was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 17 new or modified test files.
_ trunk Compile Tests _
+0 🆗 mvndep 14m 58s Maven dependency ordering for branch
+1 💚 mvninstall 32m 26s trunk passed
+1 💚 compile 17m 19s trunk passed with JDK Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04
+1 💚 compile 16m 10s trunk passed with JDK Private Build-1.8.0_422-8u422-b05-1~20.04-b05
+1 💚 checkstyle 4m 20s trunk passed
+1 💚 mvnsite 3m 27s trunk passed
+1 💚 javadoc 2m 49s trunk passed with JDK Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04
+1 💚 javadoc 2m 18s trunk passed with JDK Private Build-1.8.0_422-8u422-b05-1~20.04-b05
+0 🆗 spotbugs 0m 45s branch/hadoop-project no spotbugs output file (spotbugsXml.xml)
+1 💚 shadedclient 35m 13s branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 🆗 mvndep 0m 39s Maven dependency ordering for patch
+1 💚 mvninstall 1m 43s the patch passed
+1 💚 compile 16m 40s the patch passed with JDK Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04
+1 💚 javac 16m 40s the patch passed
+1 💚 compile 16m 13s the patch passed with JDK Private Build-1.8.0_422-8u422-b05-1~20.04-b05
+1 💚 javac 16m 13s the patch passed
-1 ❌ blanks 0m 0s /blanks-eol.txt The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply
+1 💚 checkstyle 4m 19s the patch passed
+1 💚 mvnsite 3m 21s the patch passed
+1 💚 javadoc 2m 48s the patch passed with JDK Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04
+1 💚 javadoc 2m 25s the patch passed with JDK Private Build-1.8.0_422-8u422-b05-1~20.04-b05
+0 🆗 spotbugs 0m 40s hadoop-project has no data from spotbugs
+1 💚 shadedclient 35m 41s patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 💚 unit 0m 37s hadoop-project in the patch passed.
+1 💚 unit 19m 46s hadoop-common in the patch passed.
+1 💚 unit 3m 0s hadoop-aws in the patch passed.
+1 💚 asflicense 1m 6s The patch does not generate ASF License warnings.
252m 7s
Subsystem Report/Notes
Docker ClientAPI=1.47 ServerAPI=1.47 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6884/19/artifact/out/Dockerfile
GITHUB PR #6884
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint markdownlint
uname Linux a49bc6f9a380 5.15.0-124-generic #134-Ubuntu SMP Fri Sep 27 20:20:17 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / b196965
Default Java Private Build-1.8.0_422-8u422-b05-1~20.04-b05
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_422-8u422-b05-1~20.04-b05
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6884/19/testReport/
Max. process+thread count 1899 (vs. ulimit of 5500)
modules C: hadoop-project hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: .
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6884/19/console
versions git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.

@hadoop-yetus

🎊 +1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 0m 53s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 1s No case conflicting files found.
+0 🆗 codespell 0m 0s codespell was not available.
+0 🆗 detsecrets 0m 0s detect-secrets was not available.
+0 🆗 xmllint 0m 0s xmllint was not available.
+0 🆗 markdownlint 0m 0s markdownlint was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 17 new or modified test files.
_ trunk Compile Tests _
+0 🆗 mvndep 14m 22s Maven dependency ordering for branch
+1 💚 mvninstall 31m 23s trunk passed
+1 💚 compile 17m 15s trunk passed with JDK Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04
+1 💚 compile 16m 2s trunk passed with JDK Private Build-1.8.0_422-8u422-b05-1~20.04-b05
+1 💚 checkstyle 4m 20s trunk passed
+1 💚 mvnsite 3m 22s trunk passed
+1 💚 javadoc 2m 54s trunk passed with JDK Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04
+1 💚 javadoc 2m 25s trunk passed with JDK Private Build-1.8.0_422-8u422-b05-1~20.04-b05
+0 🆗 spotbugs 0m 46s branch/hadoop-project no spotbugs output file (spotbugsXml.xml)
+1 💚 shadedclient 35m 41s branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 🆗 mvndep 0m 38s Maven dependency ordering for patch
+1 💚 mvninstall 1m 43s the patch passed
+1 💚 compile 16m 39s the patch passed with JDK Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04
+1 💚 javac 16m 39s the patch passed
+1 💚 compile 16m 2s the patch passed with JDK Private Build-1.8.0_422-8u422-b05-1~20.04-b05
+1 💚 javac 16m 2s the patch passed
+1 💚 blanks 0m 0s The patch has no blanks issues.
+1 💚 checkstyle 4m 16s the patch passed
+1 💚 mvnsite 3m 18s the patch passed
+1 💚 javadoc 2m 46s the patch passed with JDK Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04
+1 💚 javadoc 2m 26s the patch passed with JDK Private Build-1.8.0_422-8u422-b05-1~20.04-b05
+0 🆗 spotbugs 0m 40s hadoop-project has no data from spotbugs
+1 💚 shadedclient 35m 17s patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 💚 unit 0m 40s hadoop-project in the patch passed.
+1 💚 unit 19m 43s hadoop-common in the patch passed.
+1 💚 unit 3m 0s hadoop-aws in the patch passed.
+1 💚 asflicense 1m 4s The patch does not generate ASF License warnings.
250m 12s
Subsystem Report/Notes
Docker ClientAPI=1.47 ServerAPI=1.47 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6884/20/artifact/out/Dockerfile
GITHUB PR #6884
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint markdownlint
uname Linux c533c8b57a58 5.15.0-124-generic #134-Ubuntu SMP Fri Sep 27 20:20:17 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / 59a7e05
Default Java Private Build-1.8.0_422-8u422-b05-1~20.04-b05
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_422-8u422-b05-1~20.04-b05
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6884/20/testReport/
Max. process+thread count 1252 (vs. ulimit of 5500)
modules C: hadoop-project hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: .
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6884/20/console
versions git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.

@shameersss1
Contributor Author

@steveloughran - raised a revision addressing the comments and fixing the checkstyle issue.

Contributor

@steveloughran steveloughran left a comment


OK. Last big bit is the exception translation.

Let's try getting this in this week!

configureClientBuilder(S3AsyncClient.builder(), parameters, conf, bucket)
.httpClientBuilder(httpClientBuilder);

// TODO: Enable multi part upload with cse once it is available.
Contributor

Create a followup JIRA and reference it here "multipart upload pending with HADOOP-xyz"

/**
* An interface that defines the contract for handling certain filesystem operations.
*/
public interface S3AFileSystemHandler {
Contributor

yes


import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
Contributor

nit, put block below the "other" package

Contributor Author

ack

public ResponseInputStream<GetObjectResponse> getObject(S3AStore store, GetObjectRequest request,
RequestFactory factory) throws IOException {
boolean isEncrypted = isObjectEncrypted(store.getOrCreateS3Client(), factory, request.key());
return isEncrypted ? store.getOrCreateS3Client().getObject(request)
Contributor

aah.

import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
Contributor

move this import block below the software.* one

Contributor Author

ack

String xAttrPrefix = "header.";

// Assert KeyWrap Algo
assertEquals("Key wrap algo isn't same as expected", KMS_KEY_WRAP_ALGO,
Contributor

assertJ assertThat

Contributor Author

ack

* tests.
* @param conf Configurations
*/
private static void unsetEncryption(Configuration conf) {
Contributor

would it make sense to move this to S3ATestUtils and use elsewhere, unsetting all encryption options? Then use in ITestS3AClientSideEncryptionCustom

Contributor Author

Yes, makes sense!

Class exceptionClass = AccessDeniedException.class;
if (CSEUtils.isCSEEnabled(getEncryptionAlgorithm(
getTestBucketName(getConfiguration()), getConfiguration()).getMethod())) {
exceptionClass = AWSClientIOException.class;
Contributor

can you show me the full stack? 403 normally maps to AccessDeniedException, and it'd be good to keep the same. AWSClientIOException is just our "something failed" if there's nothing else

  1. Add maybeTranslateEncryptionClientException() in ErrorTranslation to look at the exception, if the classname string value matches "software.amazon.encryption.s3.S3EncryptionClientException" then map to AccessDeniedException
  2. call that in S3AUtils.translateException just before the // no custom handling bit.

It may seem bad, but look at maybeExtractChannelException() and other bits to see worse.
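A minimal, self-contained sketch of the two steps suggested above. The method name follows the review suggestion; matching is on the class-name string so no compile-time dependency on the encryption client jar is needed. The fallback, helper structure, and caller are illustrative only, not the actual hadoop-aws code:

```java
import java.io.IOException;
import java.nio.file.AccessDeniedException;

public class EncryptionErrorTranslationSketch {

  // Matched by class-name string so hadoop-aws needs no compile-time
  // dependency on the amazon-s3-encryption-client jar.
  private static final String ENCRYPTION_CLIENT_EXCEPTION =
      "software.amazon.encryption.s3.S3EncryptionClientException";

  /**
   * Step 1: if the exception (or any cause in its chain) is the encryption
   * client's exception, map it to AccessDeniedException; otherwise return
   * null so the caller falls through to generic handling.
   */
  public static IOException maybeTranslateEncryptionClientException(
      String path, Throwable thrown) {
    Throwable cause = thrown;
    while (cause != null) {
      if (ENCRYPTION_CLIENT_EXCEPTION.equals(cause.getClass().getName())) {
        AccessDeniedException ade =
            new AccessDeniedException(path, null, cause.toString());
        ade.initCause(cause);
        return ade;
      }
      cause = cause.getCause();
    }
    return null;
  }

  // Step 2 (illustrative caller): invoked just before the generic
  // "no custom handling" branch.
  public static IOException translate(String path, Throwable thrown) {
    IOException mapped = maybeTranslateEncryptionClientException(path, thrown);
    if (mapped != null) {
      return mapped;
    }
    return new IOException("Unhandled: " + thrown, thrown); // generic fallback
  }

  public static void main(String[] args) {
    // No encryption exception in the chain: falls through to generic mapping.
    IOException io = translate("s3a://bucket/key",
        new IllegalStateException("unrelated failure"));
    System.out.println(io instanceof AccessDeniedException); // prints: false
  }
}
```

Walking the full cause chain matters because the encryption client may wrap the underlying SdkException before the S3A invoker sees it.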

@@ -156,7 +159,12 @@ public void testGeneratePoolTimeouts() throws Throwable {
ContractTestUtils.createFile(fs, path, true, DATASET);
final FileStatus st = fs.getFileStatus(path);
try (FileSystem brittleFS = FileSystem.newInstance(fs.getUri(), conf)) {
intercept(ConnectTimeoutException.class, () -> {
Class exceptionClass = ConnectTimeoutException.class;
Contributor

won't be needed once the translation is fixed.

Contributor Author

ack

@steveloughran
Contributor

let's make this the next big change, with everything else blocked before it goes in. That way, no more merge pain

@hadoop-yetus

🎊 +1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 17m 29s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 1s No case conflicting files found.
+0 🆗 codespell 0m 0s codespell was not available.
+0 🆗 detsecrets 0m 0s detect-secrets was not available.
+0 🆗 xmllint 0m 0s xmllint was not available.
+0 🆗 markdownlint 0m 0s markdownlint was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 17 new or modified test files.
_ trunk Compile Tests _
+0 🆗 mvndep 16m 5s Maven dependency ordering for branch
+1 💚 mvninstall 32m 32s trunk passed
+1 💚 compile 17m 23s trunk passed with JDK Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04
+1 💚 compile 16m 19s trunk passed with JDK Private Build-1.8.0_422-8u422-b05-1~20.04-b05
+1 💚 checkstyle 4m 22s trunk passed
+1 💚 mvnsite 3m 20s trunk passed
+1 💚 javadoc 2m 53s trunk passed with JDK Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04
+1 💚 javadoc 2m 26s trunk passed with JDK Private Build-1.8.0_422-8u422-b05-1~20.04-b05
+0 🆗 spotbugs 0m 46s branch/hadoop-project no spotbugs output file (spotbugsXml.xml)
+1 💚 shadedclient 36m 15s branch has no errors when building and testing our client artifacts.
-0 ⚠️ patch 36m 42s Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.
_ Patch Compile Tests _
+0 🆗 mvndep 0m 40s Maven dependency ordering for patch
+1 💚 mvninstall 1m 45s the patch passed
+1 💚 compile 16m 40s the patch passed with JDK Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04
+1 💚 javac 16m 40s the patch passed
+1 💚 compile 16m 10s the patch passed with JDK Private Build-1.8.0_422-8u422-b05-1~20.04-b05
+1 💚 javac 16m 10s the patch passed
+1 💚 blanks 0m 0s The patch has no blanks issues.
-0 ⚠️ checkstyle 4m 15s /results-checkstyle-root.txt root: The patch generated 1 new + 34 unchanged - 1 fixed = 35 total (was 35)
+1 💚 mvnsite 3m 19s the patch passed
+1 💚 javadoc 2m 48s the patch passed with JDK Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04
+1 💚 javadoc 2m 25s the patch passed with JDK Private Build-1.8.0_422-8u422-b05-1~20.04-b05
+0 🆗 spotbugs 0m 39s hadoop-project has no data from spotbugs
+1 💚 shadedclient 37m 18s patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 💚 unit 0m 38s hadoop-project in the patch passed.
+1 💚 unit 21m 13s hadoop-common in the patch passed.
+1 💚 unit 3m 7s hadoop-aws in the patch passed.
+1 💚 asflicense 1m 5s The patch does not generate ASF License warnings.
274m 20s
Subsystem Report/Notes
Docker ClientAPI=1.47 ServerAPI=1.47 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6884/21/artifact/out/Dockerfile
GITHUB PR #6884
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint markdownlint
uname Linux 6dca9c8ecbd5 5.15.0-124-generic #134-Ubuntu SMP Fri Sep 27 20:20:17 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / 2f2790f
Default Java Private Build-1.8.0_422-8u422-b05-1~20.04-b05
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_422-8u422-b05-1~20.04-b05
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6884/21/testReport/
Max. process+thread count 1252 (vs. ulimit of 5500)
modules C: hadoop-project hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: .
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6884/21/console
versions git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.

@shameersss1
Contributor Author

@steveloughran - Thanks a lot for the review. I have addressed your comment.

Contributor

@steveloughran steveloughran left a comment


Thanks; I like the translation now, and you can see from the tests how it makes invocation a lot more consistent. Yes, the error translation code is very complicated. But consider this: if we had not remapped all v1 SDK exceptions to IOEs before throwing them, it would have been nearly impossible to upgrade to the V2 SDK without breaking so much code downstream.

keyring =
getKeyringProvider(cseMaterials.getCustomKeyringClassName(), cseMaterials.getConf());
} catch (RuntimeException e) {
throw new IOException("Failed to instantiate a custom keyring provider", e);
Contributor

@steveloughran steveloughran Nov 7, 2024


can you include the error text of e in the new IOE; it's really easy to lose the base exception in log stack chains with deep wrapping/rewrapping.

IOException("Failed to instantiate a custom keyring provider: " + e, e)

Contributor Author

Makes sense!
ack.

* @see SdkException
* @see AwsServiceException
*/
public static SdkException maybeExtractSdkExceptionFromEncryptionClientException(
Contributor

this is a bit long. How about maybeProcessEncryptionClientException()

this says it is handled, without saying exactly what happens.

Contributor Author

ack.

return ReflectionUtils.newInstance(keyringProviderClass, conf,
new Class[] {Configuration.class}, conf);
} catch (Exception ex) {
throw new RuntimeException("Failed to create Keyring provider", ex);
Contributor

same comment about including the wrapped exception text

Contributor Author

ack
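The pattern both review comments ask for, carrying the wrapped exception's own text in the new exception's message as well as chaining it as the cause, can be sketched like this (the wrap helper and class are illustrative, not the patch's actual code):

```java
import java.io.IOException;

public class WrapWithMessageSketch {
  // Hypothetical stand-in for the keyring instantiation failure paths.
  // The point is only the message construction: include e's text in the
  // message in addition to passing it as the cause, so the root failure
  // is still visible even when deep wrapping loses the cause chain in logs.
  static IOException wrap(Exception e) {
    return new IOException(
        "Failed to instantiate a custom keyring provider: " + e, e);
  }

  public static void main(String[] args) {
    IOException io = wrap(new ClassNotFoundException("com.example.MyKeyring"));
    System.out.println(io.getMessage());
    // prints: Failed to instantiate a custom keyring provider:
    //   java.lang.ClassNotFoundException: com.example.MyKeyring
  }
}
```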

@shameersss1
Contributor Author

shameersss1 commented Nov 7, 2024

@steveloughran Yeah, the error translation feature makes sense now. I have addressed your comments.

@hadoop-yetus

🎊 +1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 0m 55s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 1s No case conflicting files found.
+0 🆗 codespell 0m 1s codespell was not available.
+0 🆗 detsecrets 0m 1s detect-secrets was not available.
+0 🆗 xmllint 0m 1s xmllint was not available.
+0 🆗 markdownlint 0m 1s markdownlint was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 17 new or modified test files.
_ trunk Compile Tests _
+0 🆗 mvndep 16m 19s Maven dependency ordering for branch
+1 💚 mvninstall 31m 43s trunk passed
+1 💚 compile 17m 22s trunk passed with JDK Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04
+1 💚 compile 16m 13s trunk passed with JDK Private Build-1.8.0_422-8u422-b05-1~20.04-b05
+1 💚 checkstyle 4m 28s trunk passed
+1 💚 mvnsite 3m 21s trunk passed
+1 💚 javadoc 2m 53s trunk passed with JDK Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04
+1 💚 javadoc 2m 25s trunk passed with JDK Private Build-1.8.0_422-8u422-b05-1~20.04-b05
+0 🆗 spotbugs 0m 44s branch/hadoop-project no spotbugs output file (spotbugsXml.xml)
+1 💚 shadedclient 35m 46s branch has no errors when building and testing our client artifacts.
-0 ⚠️ patch 36m 14s Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.
_ Patch Compile Tests _
+0 🆗 mvndep 0m 41s Maven dependency ordering for patch
+1 💚 mvninstall 1m 45s the patch passed
+1 💚 compile 16m 42s the patch passed with JDK Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04
+1 💚 javac 16m 42s the patch passed
+1 💚 compile 16m 16s the patch passed with JDK Private Build-1.8.0_422-8u422-b05-1~20.04-b05
+1 💚 javac 16m 16s the patch passed
+1 💚 blanks 0m 0s The patch has no blanks issues.
-0 ⚠️ checkstyle 4m 22s /results-checkstyle-root.txt root: The patch generated 1 new + 34 unchanged - 1 fixed = 35 total (was 35)
+1 💚 mvnsite 3m 20s the patch passed
+1 💚 javadoc 2m 44s the patch passed with JDK Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04
+1 💚 javadoc 2m 25s the patch passed with JDK Private Build-1.8.0_422-8u422-b05-1~20.04-b05
+0 🆗 spotbugs 0m 40s hadoop-project has no data from spotbugs
+1 💚 shadedclient 36m 4s patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 💚 unit 0m 38s hadoop-project in the patch passed.
+1 💚 unit 20m 43s hadoop-common in the patch passed.
+1 💚 unit 3m 7s hadoop-aws in the patch passed.
+1 💚 asflicense 1m 7s The patch does not generate ASF License warnings.
255m 11s
Subsystem Report/Notes
Docker ClientAPI=1.47 ServerAPI=1.47 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6884/22/artifact/out/Dockerfile
GITHUB PR #6884
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint markdownlint
uname Linux 13650f953a63 5.15.0-124-generic #134-Ubuntu SMP Fri Sep 27 20:20:17 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / d4397bf
Default Java Private Build-1.8.0_422-8u422-b05-1~20.04-b05
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_422-8u422-b05-1~20.04-b05
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6884/22/testReport/
Max. process+thread count 3154 (vs. ulimit of 5500)
modules C: hadoop-project hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: .
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6884/22/console
versions git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.

@shameersss1
Contributor Author

@steveloughran - Gentle reminder for review.

Contributor

@steveloughran steveloughran left a comment


+1

@steveloughran steveloughran merged commit 2273278 into apache:trunk Nov 14, 2024
4 checks passed
@steveloughran
Contributor

ok, merged. provide a 3.4 version and I'll pull that in too.

@shameersss1
Contributor Author

Sure, will work on branch-3.4 after HADOOP-19336
