diff --git a/docs/user_guides/fs/storage_connector/creation/s3.md b/docs/user_guides/fs/storage_connector/creation/s3.md
index f44b6fcb0..02392cd94 100644
--- a/docs/user_guides/fs/storage_connector/creation/s3.md
+++ b/docs/user_guides/fs/storage_connector/creation/s3.md
@@ -73,8 +73,29 @@ If you have SSE-KMS enabled for your bucket, you can find the key ARN in the "Pr
 ### Step 5: Add Spark Options (optional)
 Here you can specify any additional spark options that you wish to add to the spark context at runtime. Multiple options can be added as key - value pairs.
 
-!!! tip
-    To connect to a S3 compatible storage other than AWS S3, you can add the option with key as `fs.s3a.endpoint` and the endpoint you want to use as value. The storage connector will then be able to read from your specified S3 compatible storage.
+To connect to an S3-compatible storage other than AWS S3, you can add an option with `fs.s3a.endpoint` as the key and the endpoint you want to use as the value. The storage connector will then be able to read from your specified S3-compatible storage.
+
+!!! warning "Spark Configuration"
+    When using the storage connector within a Spark application, the credentials are set at the application level. This allows users to access multiple buckets with the same storage connector within the same application (assuming the credentials allow it).
+    You can disable this behaviour by setting the option `fs.s3a.global-conf` to `False`. If the `global-conf` option is disabled, the credentials are set on a per-bucket basis, and users will only be able to use the credentials to access data from the bucket specified in the storage connector configuration.
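+
+As an illustration of what the `fs.s3a.endpoint` option does under the hood, the sketch below sets the same property on a plain PySpark session via the `spark.hadoop.` prefix (the endpoint URL and bucket path are hypothetical placeholders; inside Hopsworks the storage connector applies these options for you):
+
+```python
+from pyspark.sql import SparkSession
+
+# The `spark.hadoop.` prefix forwards the option to the underlying
+# Hadoop/s3a configuration that the s3a filesystem client reads.
+spark = (
+    SparkSession.builder
+    .config("spark.hadoop.fs.s3a.endpoint", "https://minio.example.com:9000")
+    .getOrCreate()
+)
+
+# Reads via the s3a:// scheme now go to the configured endpoint.
+df = spark.read.parquet("s3a://my-bucket/path/to/data")
+```
+
 ## Next Steps
 
 Move on to the [usage guide for storage connectors](../usage.md) to see how you can use your newly created S3 connector.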