Support Redshift COPY for bulk loads #24546

Open
brendanstennett opened this issue Dec 20, 2024 · 3 comments

Comments

@brendanstennett

When loading data into Redshift using a CTAS statement, the Redshift connector defaults to the BaseJdbcClient behaviour of using batched INSERT statements. While this behaviour works well for most database systems, Redshift's handling of INSERT statements is very slow and, according to this article by AWS, considered an anti-pattern; the same is true of most OLAP systems. In real-world performance, we see about 300 rows per second per Redshift node.

I was wondering if there was an appetite to improve this using the Redshift COPY statement? I think it would work as follows:

  • Administrator enables COPY-based inserts as part of the Redshift catalog properties (new property)
  • Administrator provides an AWS Access Key ID and Secret (or optionally a role if running on AWS), plus a bucket and prefix (new properties)
  • When running a CTAS statement, Trino still creates the table as it did before, but instead of streaming data using INSERT statements, it writes query results (as Parquet or CSV) to S3 at a location such as s3://my-bucket/my-prefix/trino_query_id/parts
  • When all parts are written, Trino issues a COPY ... FROM ... (see the sketch after this list)
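
To make the shape of this concrete, here is a rough sketch. The catalog property names are hypothetical (nothing like them exists in the connector today); the COPY syntax itself is standard Redshift:

```properties
# etc/catalog/redshift.properties -- property names below are hypothetical
redshift.copy-insert.enabled=true
redshift.copy-insert.s3-bucket=my-bucket
redshift.copy-insert.s3-prefix=my-prefix
redshift.copy-insert.iam-role=arn:aws:iam::123456789012:role/redshift-copy-role
```

```sql
-- Issued by the connector once all parts for the query are staged on S3
COPY my_schema.my_table
FROM 's3://my-bucket/my-prefix/trino_query_id/parts/'
IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
FORMAT AS PARQUET;
```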

I can put my hand up to implement this if it would be useful to others. Based on looking through similar implementations, the way to do so would be to implement a PageSink and then a PageSinkProvider for this operation, as well as any surrounding credential providers. Please let me know if I am overlooking anything in thinking about it this way or not considering anything major.
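
For reference, a minimal sketch of what such a sink might look like against the Trino SPI. The ConnectorPageSink interface and its methods are real; S3PartWriter and RedshiftCopyClient are hypothetical helpers standing in for the S3 staging and COPY pieces, and the provider wiring through ConnectorPageSinkProvider is omitted:

```java
import io.airlift.slice.Slice;
import io.trino.spi.Page;
import io.trino.spi.connector.ConnectorPageSink;

import java.util.Collection;
import java.util.List;
import java.util.concurrent.CompletableFuture;

import static java.util.concurrent.CompletableFuture.completedFuture;

public class RedshiftCopyPageSink
        implements ConnectorPageSink
{
    // Hypothetical helpers: one stages pages as Parquet parts on S3,
    // the other issues the final COPY over the existing Redshift JDBC connection
    private final S3PartWriter partWriter;
    private final RedshiftCopyClient copyClient;

    public RedshiftCopyPageSink(S3PartWriter partWriter, RedshiftCopyClient copyClient)
    {
        this.partWriter = partWriter;
        this.copyClient = copyClient;
    }

    @Override
    public CompletableFuture<?> appendPage(Page page)
    {
        // Stage the page under s3://<bucket>/<prefix>/<queryId>/parts/
        // instead of executing batched INSERTs
        partWriter.write(page);
        return NOT_BLOCKED;
    }

    @Override
    public CompletableFuture<Collection<Slice>> finish()
    {
        // All parts written: issue a single COPY ... FROM ... for the staged prefix
        String stagedPrefix = partWriter.close();
        copyClient.copyFrom(stagedPrefix);
        return completedFuture(List.of());
    }

    @Override
    public void abort()
    {
        // Best-effort cleanup of any staged parts if the query fails
        partWriter.cleanUp();
    }
}
```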

I have noticed that @mayankvadariya is implementing an adjacent feature in #24117, though, so I don't want to step on any toes or duplicate effort if this is already being worked on.

@mayankvadariya
Contributor

Hi @brendanstennett, #24117 only improves the read path by implementing Redshift UNLOAD.

@raunaqmorarka
Member

raunaqmorarka commented Dec 26, 2024

What's the use case for moving data into Redshift using Trino?
JDBC connectors are mainly intended to allow ad-hoc queries and easy extraction of data into the lake (Hive/Delta/Iceberg).
Feel free to implement it, but it's not a use case that we're typically optimizing for.

@brendanstennett
Author

Thanks @mayankvadariya, this is great functionality; thank you for implementing it!

@raunaqmorarka For multi-cloud ETL jobs moving data out of a relational database into Redshift to support other operational use cases. I definitely understand that you typically see this going the other direction, but all sorts of interesting use cases open up with easy movement of data in either direction.

I'll see where I can devote some cycles to look into this.
