[#6229] docs: add fileset credential vending example (#6231)
### What changes were proposed in this pull request?

add credential vending document for fileset

### Why are the changes needed?

Fix: #6229 

### Does this PR introduce _any_ user-facing change?

no

### How was this patch tested?

just document
FANNG1 authored and web-flow committed Jan 14, 2025
1 parent dec7ea0 commit fd735f2
Showing 4 changed files with 88 additions and 12 deletions.
26 changes: 23 additions & 3 deletions docs/hadoop-catalog-with-adls.md
@@ -480,11 +480,31 @@ For other use cases, please refer to the [Gravitino Virtual File System](./how-t

Since 0.8.0-incubating, Gravitino supports credential vending for ADLS fileset. If the catalog has been [configured with credential](./security/credential-vending.md), you can access ADLS fileset without providing authentication information like `azure-storage-account-name` and `azure-storage-account-key` in the properties.

-### How to create an ADLS Hadoop catalog with credential enabled
+### How to create an ADLS Hadoop catalog with credential vending

-Apart from configuration method in [create-adls-hadoop-catalog](#configuration-for-a-adls-hadoop-catalog), properties needed by [adls-credential](./security/credential-vending.md#adls-credentials) should also be set to enable credential vending for ADLS fileset.
+Apart from the configuration method in [create-adls-hadoop-catalog](#configuration-for-a-adls-hadoop-catalog), the properties needed by [adls-credential](./security/credential-vending.md#adls-credentials) should also be set to enable credential vending for ADLS filesets. Take the `adls-token` credential provider as an example:

-### How to access ADLS fileset with credential
+```shell
+curl -X POST -H "Accept: application/vnd.gravitino.v1+json" \
+-H "Content-Type: application/json" -d '{
+  "name": "adls-catalog-with-token",
+  "type": "FILESET",
+  "comment": "This is an ADLS fileset catalog",
+  "provider": "hadoop",
+  "properties": {
+    "location": "abfss://container@account-name.dfs.core.windows.net/path",
+    "azure-storage-account-name": "The account name of the Azure Blob Storage",
+    "azure-storage-account-key": "The account key of the Azure Blob Storage",
+    "filesystem-providers": "abs",
+    "credential-providers": "adls-token",
+    "azure-tenant-id": "The Azure tenant id",
+    "azure-client-id": "The Azure client id",
+    "azure-client-secret": "The Azure client secret key"
+  }
+}' http://localhost:8090/api/metalakes/metalake/catalogs
+```
+
+### How to access ADLS fileset with credential vending

If the catalog has been configured with credential, you can access ADLS fileset without providing authentication information via GVFS Java/Python client and Spark. Let's see how to access ADLS fileset with credential:

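For scripting against the REST API, the same ADLS request body can be assembled and sanity-checked before it is sent; below is a minimal Python sketch. All values are placeholders mirroring the curl example above, not working credentials.

```python
import json

# Catalog-creation body for an ADLS fileset catalog using the adls-token
# credential provider, mirroring the curl example above. Every value here is
# a placeholder; "credential-providers" is the property that enables vending.
payload = {
    "name": "adls-catalog-with-token",
    "type": "FILESET",
    "comment": "This is an ADLS fileset catalog",
    "provider": "hadoop",
    "properties": {
        "location": "abfss://container@account-name.dfs.core.windows.net/path",
        "azure-storage-account-name": "account_name",
        "azure-storage-account-key": "account_key",
        "filesystem-providers": "abs",
        "credential-providers": "adls-token",
        "azure-tenant-id": "tenant_id",
        "azure-client-id": "client_id",
        "azure-client-secret": "client_secret",
    },
}

# Serialize for POSTing to /api/metalakes/{metalake}/catalogs.
body = json.dumps(payload)
assert "adls-token" in body
```

Posting `body` with the `Accept`/`Content-Type` headers from the curl example completes the call.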
22 changes: 19 additions & 3 deletions docs/hadoop-catalog-with-gcs.md
@@ -459,11 +459,27 @@ For other use cases, please refer to the [Gravitino Virtual File System](./how-t

Since 0.8.0-incubating, Gravitino supports credential vending for GCS fileset. If the catalog has been [configured with credential](./security/credential-vending.md), you can access GCS fileset without providing authentication information like `gcs-service-account-file` in the properties.

-### How to create a GCS Hadoop catalog with credential enabled
+### How to create a GCS Hadoop catalog with credential vending

-Apart from configuration method in [create-gcs-hadoop-catalog](#configurations-for-a-gcs-hadoop-catalog), properties needed by [gcs-credential](./security/credential-vending.md#gcs-credentials) should also be set to enable credential vending for GCS fileset.
+Apart from the configuration method in [create-gcs-hadoop-catalog](#configurations-for-a-gcs-hadoop-catalog), the properties needed by [gcs-credential](./security/credential-vending.md#gcs-credentials) should also be set to enable credential vending for GCS filesets. Take the `gcs-token` credential provider as an example:

-### How to access GCS fileset with credential
+```shell
+curl -X POST -H "Accept: application/vnd.gravitino.v1+json" \
+-H "Content-Type: application/json" -d '{
+  "name": "gcs-catalog-with-token",
+  "type": "FILESET",
+  "comment": "This is a GCS fileset catalog",
+  "provider": "hadoop",
+  "properties": {
+    "location": "gs://bucket/root",
+    "gcs-service-account-file": "path_of_gcs_service_account_file",
+    "filesystem-providers": "gcs",
+    "credential-providers": "gcs-token"
+  }
+}' http://localhost:8090/api/metalakes/metalake/catalogs
+```
+
+### How to access GCS fileset with credential vending

If the catalog has been configured with credential, you can access GCS fileset without providing authentication information via GVFS Java/Python client and Spark. Let's see how to access GCS fileset with credential:

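The GCS call can likewise be prepared with only the Python standard library. This sketch builds the request from the example above but does not send it; the server URL and metalake name are the example's placeholders.

```python
import json
import urllib.request

# Build (but do not send) the catalog-creation request from the GCS example.
# With the gcs-token provider, only the service-account file is configured on
# the catalog; clients receive vended tokens instead of the account file.
payload = {
    "name": "gcs-catalog-with-token",
    "type": "FILESET",
    "comment": "This is a GCS fileset catalog",
    "provider": "hadoop",
    "properties": {
        "location": "gs://bucket/root",
        "gcs-service-account-file": "path_of_gcs_service_account_file",
        "filesystem-providers": "gcs",
        "credential-providers": "gcs-token",
    },
}

req = urllib.request.Request(
    "http://localhost:8090/api/metalakes/metalake/catalogs",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Accept": "application/vnd.gravitino.v1+json",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req) would submit it against a running server.
```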
26 changes: 23 additions & 3 deletions docs/hadoop-catalog-with-oss.md
@@ -495,11 +495,31 @@ For other use cases, please refer to the [Gravitino Virtual File System](./how-t

Since 0.8.0-incubating, Gravitino supports credential vending for OSS fileset. If the catalog has been [configured with credential](./security/credential-vending.md), you can access OSS fileset without providing authentication information like `oss-access-key-id` and `oss-secret-access-key` in the properties.

-### How to create a OSS Hadoop catalog with credential enabled
+### How to create an OSS Hadoop catalog with credential vending

-Apart from configuration method in [create-oss-hadoop-catalog](#configuration-for-an-oss-hadoop-catalog), properties needed by [oss-credential](./security/credential-vending.md#oss-credentials) should also be set to enable credential vending for OSS fileset.
+Apart from the configuration method in [create-oss-hadoop-catalog](#configuration-for-an-oss-hadoop-catalog), the properties needed by [oss-credential](./security/credential-vending.md#oss-credentials) should also be set to enable credential vending for OSS filesets. Take the `oss-token` credential provider as an example:

-### How to access OSS fileset with credential
+```shell
+curl -X POST -H "Accept: application/vnd.gravitino.v1+json" \
+-H "Content-Type: application/json" -d '{
+  "name": "oss-catalog-with-token",
+  "type": "FILESET",
+  "comment": "This is an OSS fileset catalog",
+  "provider": "hadoop",
+  "properties": {
+    "location": "oss://bucket/root",
+    "oss-access-key-id": "access_key",
+    "oss-secret-access-key": "secret_key",
+    "oss-endpoint": "http://oss-cn-hangzhou.aliyuncs.com",
+    "filesystem-providers": "oss",
+    "credential-providers": "oss-token",
+    "oss-region": "oss-cn-hangzhou",
+    "oss-role-arn": "The ARN of the role to access the OSS data"
+  }
+}' http://localhost:8090/api/metalakes/metalake/catalogs
+```
+
+### How to access OSS fileset with credential vending

If the catalog has been configured with credential, you can access OSS fileset without providing authentication information via GVFS Java/Python client and Spark. Let's see how to access OSS fileset with credential:

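The OSS example pairs `oss-token` with `oss-region` and `oss-role-arn`. A small hypothetical pre-flight check for that pairing is sketched below; the property names come from the curl example above, and treating the two as required is an assumption based on the linked credential-vending doc.

```python
# Hypothetical sanity check: when the oss-token provider is enabled, the
# STS-related properties should accompany it. The property names come from
# the example above; treating them as required is an assumption here.
REQUIRED_FOR_OSS_TOKEN = ("oss-region", "oss-role-arn")

def missing_oss_token_props(properties: dict) -> list:
    """Return vending-related properties absent from a catalog definition."""
    if properties.get("credential-providers") != "oss-token":
        return []
    return [k for k in REQUIRED_FOR_OSS_TOKEN if k not in properties]

props = {
    "location": "oss://bucket/root",
    "oss-access-key-id": "access_key",
    "oss-secret-access-key": "secret_key",
    "oss-endpoint": "http://oss-cn-hangzhou.aliyuncs.com",
    "filesystem-providers": "oss",
    "credential-providers": "oss-token",
    "oss-region": "oss-cn-hangzhou",
    "oss-role-arn": "role_arn",
}
# A complete definition reports nothing missing.
print(missing_oss_token_props(props))  # []
```

Running the check before the POST catches a forgotten region or role ARN locally instead of at request time.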
26 changes: 23 additions & 3 deletions docs/hadoop-catalog-with-s3.md
@@ -498,11 +498,31 @@ For more use cases, please refer to the [Gravitino Virtual File System](./how-to

Since 0.8.0-incubating, Gravitino supports credential vending for S3 fileset. If the catalog has been [configured with credential](./security/credential-vending.md), you can access S3 fileset without providing authentication information like `s3-access-key-id` and `s3-secret-access-key` in the properties.

-### How to create a S3 Hadoop catalog with credential enabled
+### How to create an S3 Hadoop catalog with credential vending

-Apart from configuration method in [create-s3-hadoop-catalog](#configurations-for-s3-hadoop-catalog), properties needed by [s3-credential](./security/credential-vending.md#s3-credentials) should also be set to enable credential vending for S3 fileset.
+Apart from the configuration method in [create-s3-hadoop-catalog](#configurations-for-s3-hadoop-catalog), the properties needed by [s3-credential](./security/credential-vending.md#s3-credentials) should also be set to enable credential vending for S3 filesets. Take the `s3-token` credential provider as an example:

-### How to access S3 fileset with credential
+```shell
+curl -X POST -H "Accept: application/vnd.gravitino.v1+json" \
+-H "Content-Type: application/json" -d '{
+  "name": "s3-catalog-with-token",
+  "type": "FILESET",
+  "comment": "This is an S3 fileset catalog",
+  "provider": "hadoop",
+  "properties": {
+    "location": "s3a://bucket/root",
+    "s3-access-key-id": "access_key",
+    "s3-secret-access-key": "secret_key",
+    "s3-endpoint": "http://s3.ap-northeast-1.amazonaws.com",
+    "filesystem-providers": "s3",
+    "credential-providers": "s3-token",
+    "s3-region": "ap-northeast-1",
+    "s3-role-arn": "The ARN of the role to access the S3 data"
+  }
+}' http://localhost:8090/api/metalakes/metalake/catalogs
+```
+
+### How to access S3 fileset with credential vending

If the catalog has been configured with credential, you can access S3 fileset without providing authentication information via GVFS Java/Python client and Spark. Let's see how to access S3 fileset with credential:

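The truncated access sections mention the GVFS Java/Python client and Spark. On the Spark side this amounts to a handful of Hadoop configuration keys, sketched here as a plain dict; the key names follow Gravitino's GVFS documentation and should be treated as assumptions. The point of credential vending is that no `s3-access-key-id` or `s3-secret-access-key` entries appear here.

```python
# Sketch of the Spark/Hadoop configuration for reading a fileset through the
# Gravitino Virtual File System once the catalog vends credentials. Key names
# are taken from Gravitino's GVFS docs and are assumptions in this sketch;
# note the absence of any static s3-access-key-id / s3-secret-access-key.
gvfs_spark_conf = {
    "spark.hadoop.fs.AbstractFileSystem.gvfs.impl": "org.apache.gravitino.filesystem.hadoop.Gvfs",
    "spark.hadoop.fs.gvfs.impl": "org.apache.gravitino.filesystem.hadoop.GravitinoVirtualFileSystem",
    "spark.hadoop.fs.gravitino.server.uri": "http://localhost:8090",
    "spark.hadoop.fs.gravitino.client.metalake": "metalake",
}

# No static storage secrets are present in the configuration.
assert not any("secret" in k or "access-key" in k for k in gvfs_spark_conf)
```

Applying these entries via `SparkSession.builder.config(...)` would let jobs read `gvfs://fileset/...` paths with server-vended credentials.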
