
Commit

resolved merge conflict?
Richard Whaling authored and Richard Whaling committed Sep 27, 2024
2 parents 10424f1 + 2498837 commit 0e37f42
Showing 4 changed files with 50 additions and 5 deletions.
3 changes: 2 additions & 1 deletion crates/core/src/operations/transaction/mod.rs
@@ -83,7 +83,7 @@ use object_store::path::Path;
use object_store::Error as ObjectStoreError;
use serde_json::Value;

-use self::conflict_checker::{CommitConflictError, TransactionInfo, WinningCommitSummary};
+use self::conflict_checker::{TransactionInfo, WinningCommitSummary};
use crate::checkpoints::{cleanup_expired_logs_for, create_checkpoint_for};
use crate::errors::DeltaTableError;
use crate::kernel::{
@@ -97,6 +97,7 @@ use crate::table::config::TableConfig;
use crate::table::state::DeltaTableState;
use crate::{crate_version, DeltaResult};

+pub use self::conflict_checker::CommitConflictError;
pub use self::protocol::INSTANCE as PROTOCOL;

#[cfg(test)]
44 changes: 44 additions & 0 deletions docs/integrations/object-storage/gcs.md
@@ -0,0 +1,44 @@
# GCS Storage Backend

`delta-rs` offers native support for using Google Cloud Storage (GCS) as an object storage backend.

You don’t need to install any extra dependencies to read/write Delta tables to GCS with engines that use `delta-rs`. You do need to configure your GCS access credentials correctly.

## Using Application Default Credentials

Application Default Credentials (ADC) is a strategy used by Google Cloud authentication libraries to automatically find credentials based on the application environment.

If you are working from your local machine and have ADC set up then you can read/write Delta tables from GCS directly, without having to pass your credentials explicitly.
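
As a minimal sketch (assuming ADC is already configured locally, for example via `gcloud auth application-default login`, and using an illustrative bucket path), a read with Polars needs no explicit credentials:

```python
import polars as pl

# With Application Default Credentials configured, no storage_options are required
table_path = "gs://bucket/delta-table"  # illustrative path

df = pl.read_delta(table_path)
print(df)
```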

## Example: Write Delta tables to GCS with Polars

Using Polars, you can write a Delta table to GCS like this:

```python
# create a toy dataframe
import polars as pl
df = pl.DataFrame({"foo": [1, 2, 3, 4, 5]})

# define path
table_path = "gs://bucket/delta-table"

# write Delta to GCS
df.write_delta(table_path)
```

## Passing GCS Credentials explicitly

Alternatively, you can pass GCS credentials to your query engine explicitly.

For Polars, you would do this using the `storage_options` keyword. This will forward your credentials to the `object_store` library that Polars uses under the hood. Read the [Polars documentation](https://docs.pola.rs/api/python/stable/reference/api/polars.DataFrame.write_delta.html) and the [`object_store` documentation](https://docs.rs/object_store/latest/object_store/gcp/enum.GoogleConfigKey.html#variants) for more information.
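
As a hedged sketch of what this can look like (the `google_service_account` key is one of the `GoogleConfigKey` variants linked above; the file path and bucket are illustrative placeholders):

```python
import polars as pl

df = pl.DataFrame({"foo": [1, 2, 3, 4, 5]})

# Keys are forwarded to the underlying object store; see the GoogleConfigKey docs for valid names
storage_options = {
    "google_service_account": "/path/to/service-account.json",  # illustrative path
}

df.write_delta(
    "gs://bucket/delta-table",  # illustrative path
    storage_options=storage_options,
)
```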

## Delta Lake on GCS: Required permissions

You will need the following permissions in your GCS account:

- `storage.objects.create`
- `storage.objects.delete` (only required for uploads that overwrite an existing object)
- `storage.objects.get` (only required if you plan on using the Google Cloud CLI)
- `storage.objects.list` (only required if you plan on using the Google Cloud CLI)

For more information, see the [GCP documentation](https://cloud.google.com/storage/docs/uploading-objects).
4 changes: 2 additions & 2 deletions docs/integrations/object-storage/s3-like.md
@@ -1,8 +1,8 @@
# CloudFlare R2 & Minio

-`delta-rs` offers native support for using Cloudflare R2 or Minio as an S3-compatible storage backend. R2 and Minio support conditional puts, which removes the need for DynamoDB for safe concurrent writes. However, we have to pass this flag into the storage options. See the example below.
+`delta-rs` offers native support for using Cloudflare R2 or Minio as an S3-compatible storage backend. R2 and Minio support conditional puts, which removes the need for DynamoDB for safe concurrent writes. However, we have to pass the `aws_conditional_put` flag into `storage_options`. See the example below.

-You don’t need to install any extra dependencies to red/write Delta tables to S3 with engines that use `delta-rs`. You do need to configure your AWS access credentials correctly.
+You don’t need to install any extra dependencies to read/write Delta tables to S3 with engines that use `delta-rs`. You do need to configure your AWS access credentials correctly.
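
A minimal sketch of passing the flag with Polars (the endpoint, credentials, and bucket below are illustrative placeholders):

```python
import polars as pl

df = pl.DataFrame({"foo": [1, 2, 3, 4, 5]})

storage_options = {
    "aws_access_key_id": "<access key id>",        # placeholder
    "aws_secret_access_key": "<secret key>",       # placeholder
    "aws_endpoint_url": "https://<account>.r2.cloudflarestorage.com",  # or your Minio endpoint
    "aws_conditional_put": "etag",  # enables safe concurrent writes without DynamoDB
}

df.write_delta("s3://bucket/delta-table", storage_options=storage_options)
```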

## Passing S3 Credentials

4 changes: 2 additions & 2 deletions docs/integrations/object-storage/s3.md
@@ -1,8 +1,8 @@
# AWS S3 Storage Backend

-`delta-rs` offers native support for using AWS S3 as an objet storage backend.
+`delta-rs` offers native support for using AWS S3 as an object storage backend.

-You don’t need to install any extra dependencies to red/write Delta tables to S3 with engines that use `delta-rs`. You do need to configure your AWS access credentials correctly.
+You don’t need to install any extra dependencies to read/write Delta tables to S3 with engines that use `delta-rs`. You do need to configure your AWS access credentials correctly.
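
As a hedged sketch of passing credentials explicitly with Polars (the key names follow the underlying object store's S3 options; all values are placeholders):

```python
import polars as pl

storage_options = {
    "aws_access_key_id": "<access key id>",      # placeholder
    "aws_secret_access_key": "<secret key>",     # placeholder
    "aws_region": "us-east-1",                   # placeholder region
}

df = pl.read_delta("s3://bucket/delta-table", storage_options=storage_options)
print(df)
```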

## Note for boto3 users

