
# Development instructions

## Running Tests

Tests can be run using `cargo`:

```shell
cargo test
```
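
Cargo's built-in test filtering can narrow a run; a couple of common invocations (the substring pattern `parse` below is purely illustrative):

```shell
# Run only the tests of the object_store crate
cargo test -p object_store

# Run only tests whose names contain a given substring
cargo test -p object_store parse
```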

## Running Integration Tests

By default, integration tests are not run. To run them, you will need to set `TEST_INTEGRATION=1` and then provide the necessary configuration for that object store.

### AWS

To test the S3 integration against localstack:

First start up a container running localstack:

```shell
$ LOCALSTACK_VERSION=sha256:a0b79cb2430f1818de2c66ce89d41bba40f5a1823410f5a7eaf3494b692eed97
$ podman run -d -p 4566:4566 localstack/localstack@$LOCALSTACK_VERSION
$ podman run -d -p 1338:1338 amazon/amazon-ec2-metadata-mock:v1.9.2 --imdsv2
```
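
Before configuring the environment, it can help to confirm localstack is accepting requests; a minimal smoke check, assuming the `/_localstack/health` endpoint exposed by recent localstack releases:

```shell
# Reports the status of each emulated service; s3 should be "available"
curl -s http://localhost:4566/_localstack/health | jq .
```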

Set up the environment:

```shell
export TEST_INTEGRATION=1
export AWS_DEFAULT_REGION=us-east-1
export AWS_ACCESS_KEY_ID=test
export AWS_SECRET_ACCESS_KEY=test
export AWS_ENDPOINT=http://localhost:4566
export AWS_ALLOW_HTTP=true
export AWS_BUCKET_NAME=test-bucket
```
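
With the environment in place, a quick sanity check that the AWS CLI can reach localstack with the dummy credentials (optional, not part of the test setup itself):

```shell
# Should print a mock caller identity rather than a connection error
aws sts get-caller-identity --endpoint-url=$AWS_ENDPOINT
```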

Create a bucket using the AWS CLI:

```shell
podman run --net=host --env-host amazon/aws-cli --endpoint-url=http://localhost:4566 s3 mb s3://test-bucket
```

Or directly with:

```shell
aws s3 mb s3://test-bucket --endpoint-url=http://localhost:4566
aws --endpoint-url=http://localhost:4566 dynamodb create-table --table-name test-table --key-schema AttributeName=path,KeyType=HASH AttributeName=etag,KeyType=RANGE --attribute-definitions AttributeName=path,AttributeType=S AttributeName=etag,AttributeType=S --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5
```
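
To confirm both resources exist before running the suite (again optional):

```shell
# The bucket should appear in the listing
aws --endpoint-url=http://localhost:4566 s3 ls

# The table should report its path/etag key schema
aws --endpoint-url=http://localhost:4566 dynamodb describe-table --table-name test-table
```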

Run tests:

```shell
$ cargo test --features aws
```

#### Encryption tests

To create an encryption key for the tests, you can run the following command:

```shell
export AWS_SSE_KMS_KEY_ID=$(aws --endpoint-url=http://localhost:4566 \
  kms create-key --description "test key" |
  jq -r '.KeyMetadata.KeyId')
```
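
To verify the key was created and the variable is populated:

```shell
# Should print metadata for the newly created key
aws --endpoint-url=http://localhost:4566 kms describe-key --key-id "$AWS_SSE_KMS_KEY_ID"
```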

To run integration tests with encryption, you can set the following environment variables:

```shell
export AWS_SERVER_SIDE_ENCRYPTION=aws:kms
export AWS_SSE_BUCKET_KEY=false
cargo test --features aws
```

As well as:

```shell
unset AWS_SSE_BUCKET_KEY
export AWS_SERVER_SIDE_ENCRYPTION=aws:kms:dsse
cargo test --features aws
```
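
Equivalently, both configurations can be exercised without mutating the shell environment by passing the variables inline:

```shell
# First pass: aws:kms with bucket keys disabled
AWS_SSE_BUCKET_KEY=false AWS_SERVER_SIDE_ENCRYPTION=aws:kms cargo test --features aws

# Second pass: dual-layer encryption; AWS_SSE_BUCKET_KEY is simply not set
AWS_SERVER_SIDE_ENCRYPTION=aws:kms:dsse cargo test --features aws
```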

#### SSE-C Encryption tests

Unfortunately, localstack does not support SSE-C encryption (localstack/localstack#11356).

We will use MinIO to test SSE-C encryption.

First, create a self-signed certificate to enable HTTPS for MinIO, as SSE-C requires HTTPS.

```shell
mkdir ~/certs
cd ~/certs
openssl genpkey -algorithm RSA -out private.key
openssl req -new -key private.key -out request.csr -subj "/C=US/ST=State/L=City/O=Organization/OU=Unit/CN=example.com/[email protected]"
openssl x509 -req -days 365 -in request.csr -signkey private.key -out public.crt
rm request.csr
```
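
You can inspect the resulting certificate to confirm its subject and validity window:

```shell
# Prints the subject and the notBefore/notAfter dates of the self-signed cert
openssl x509 -in public.crt -noout -subject -dates
```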

Second, start MinIO with the self-signed certificate.

```shell
docker run -d \
  -p 9000:9000 \
  --name minio \
  -v ${HOME}/certs:/root/.minio/certs \
  -e "MINIO_ROOT_USER=minio" \
  -e "MINIO_ROOT_PASSWORD=minio123" \
  minio/minio server /data
```
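
MinIO exposes a liveness endpoint that can confirm the server came up with TLS enabled; `-k` skips verification of the self-signed certificate:

```shell
# An HTTP 200 response means MinIO is serving over HTTPS
curl -k -I https://localhost:9000/minio/health/live
```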

Create a test bucket:

```shell
export AWS_BUCKET_NAME=test-bucket
export AWS_ACCESS_KEY_ID=minio
export AWS_SECRET_ACCESS_KEY=minio123
export AWS_ENDPOINT=https://localhost:9000
aws s3 mb s3://test-bucket --endpoint-url=https://localhost:9000 --no-verify-ssl
```

Run the tests. The relevant test is `test_s3_ssec_encryption_with_minio()`:

```shell
export TEST_S3_SSEC_ENCRYPTION=1
cargo test --features aws --package object_store --lib aws::tests::test_s3_ssec_encryption_with_minio -- --exact --nocapture
```

### Azure

To test the Azure integration against Azurite:

Start up Azurite:

```shell
$ podman run -p 10000:10000 -p 10001:10001 -p 10002:10002 mcr.microsoft.com/azure-storage/azurite
```
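
Before creating the container, a crude smoke check that the blob endpoint is listening (any HTTP response, even an error status, confirms Azurite is up):

```shell
# Any HTTP status line here confirms the blob endpoint is reachable
curl -sI http://127.0.0.1:10000/devstoreaccount1 | head -n 1
```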

Create a bucket:

```shell
$ podman run --net=host mcr.microsoft.com/azure-cli az storage container create -n test-bucket --connection-string 'DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1;QueueEndpoint=http://127.0.0.1:10001/devstoreaccount1;'
```

Run tests:

```shell
AZURE_USE_EMULATOR=1 \
TEST_INTEGRATION=1 \
OBJECT_STORE_BUCKET=test-bucket \
AZURE_STORAGE_ACCOUNT=devstoreaccount1 \
AZURE_STORAGE_ACCESS_KEY=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw== \
cargo test --features azure
```

### GCP

To test the GCS integration, we use [Fake GCS Server](https://github.com/fsouza/fake-gcs-server).

Start up the fake server:

```shell
docker run -p 4443:4443 tustvold/fake-gcs-server -scheme http
```

Configure the account:

```shell
curl -v -X POST --data-binary '{"name":"test-bucket"}' -H "Content-Type: application/json" "http://localhost:4443/storage/v1/b"
echo '{"gcs_base_url": "http://localhost:4443", "disable_oauth": true, "client_email": "", "private_key": ""}' > /tmp/gcs.json
```
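
To confirm the bucket was created, the fake server's bucket listing can be queried (assuming it mirrors the GCS JSON API, as fake-gcs-server does):

```shell
# The listing should include "test-bucket"
curl -s http://localhost:4443/storage/v1/b | jq .
```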

Now run the tests:

```shell
TEST_INTEGRATION=1 \
OBJECT_STORE_BUCKET=test-bucket \
GOOGLE_SERVICE_ACCOUNT=/tmp/gcs.json \
cargo test -p object_store --features=gcp
```