RDS MariaDB Limitations: How to Achieve DBeaver-Level CSV Export Performance Programmatically? #33111
-
I'm working with large tables (300+ million rows) in MariaDB on Amazon RDS. Due to RDS limitations, I cannot use SELECT INTO OUTFILE; the fastest method I've found so far is still considerably slower than DBeaver's export functionality. I'm seeking a programmatic solution that can match or exceed DBeaver's performance. Could someone shed light on how DBeaver handles CSV export for large datasets, particularly in terms of data fetching, memory management, and potential optimizations?
Replies: 1 comment
-
We don't do anything fancy :^)
We fetch data by simply `SELECT`-ing it from a table. The fetch size can be specified during the initial setup. We stream the fetched data in a buffered manner to the output file. Related code can be found here:
org.jkiss.dbeaver.model.impl.jdbc.struct.JDBCTable#readData
org.jkiss.dbeaver.tools.transfer.stream.exporter.DataExporterCSV#exportRow
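In plain JDBC terms, that approach boils down to something like the sketch below: a forward-only `SELECT` with an explicit fetch size, with rows written through a buffered writer. The connection URL, credentials, the table name `big_table`, and the 10,000-row fetch size are placeholders, not values taken from DBeaver (its actual implementation is in the classes linked above). Note that with some drivers (e.g. MySQL Connector/J) streaming only kicks in with `useCursorFetch=true` or a fetch size of `Integer.MIN_VALUE`.

```java
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.sql.Statement;

public class CsvExport {
    public static void main(String[] args) throws SQLException, IOException {
        // Hypothetical RDS endpoint and credentials.
        String url = "jdbc:mariadb://my-rds-host:3306/mydb";

        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             Statement stmt = conn.createStatement(ResultSet.TYPE_FORWARD_ONLY,
                                                   ResultSet.CONCUR_READ_ONLY);
             BufferedWriter out = Files.newBufferedWriter(Path.of("big_table.csv"))) {

            // Hint to the driver to fetch rows in batches instead of
            // buffering the whole result set in memory.
            stmt.setFetchSize(10_000);

            try (ResultSet rs = stmt.executeQuery("SELECT * FROM big_table")) {
                ResultSetMetaData meta = rs.getMetaData();
                int cols = meta.getColumnCount();
                while (rs.next()) {
                    StringBuilder row = new StringBuilder();
                    for (int i = 1; i <= cols; i++) {
                        if (i > 1) row.append(',');
                        String value = rs.getString(i);
                        // No quoting here; a real exporter must also escape
                        // delimiters, quotes, and newlines (see DataExporterCSV).
                        row.append(value == null ? "" : value);
                    }
                    out.write(row.toString());
                    out.newLine();
                }
            }
        }
    }
}
```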