Z-Order with larger dataset resulting in memory error #2284
Comments
@pyjads since you have a partitioned table, you can run optimize.z_order on each partition separately. You can use the …
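For anyone else landing here, a minimal sketch of what I understand the per-partition approach to be. The partition column and z-order columns are placeholders, and I'm assuming DeltaTable.partitions() and the partition_filters argument of optimize.z_order are available in this deltalake version:

```python
from deltalake import DeltaTable

dt = DeltaTable("path/to/table")

# Iterate over the distinct partition values recorded in the table metadata
# and z-order each partition separately, so only one partition's files are
# rewritten (and held in memory) at a time.
for part in dt.partitions():                      # e.g. [{"date": "2024-01-01"}, ...]
    filters = [(col, "=", val) for col, val in part.items()]
    dt.optimize.z_order(
        columns=["col_a", "col_b"],               # placeholder z-order columns
        partition_filters=filters,
    )
```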
Shouldn't delta-rs automatically be doing the z-order within partitions anyway, since you can't z-order across partitions? And if a partition is too big to fit in memory, shouldn't it spill to disk? Anecdotally, spilling to disk does not seem to work: unless I set it to a very large value and spill into swap, even a medium-sized table can't be z-ordered.
@ion-elgreco Each partition is too big to be loaded completely into memory. How can this be configured to prevent memory errors for large tables?
There is a bug in DataFusion that prevents this at the moment; I will have to find the related issue for this, though.
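For reference, the spill setting being discussed is, as far as I can tell, the max_spill_size argument of optimize.z_order. A hedged sketch of setting it to a large value (the argument name and its exact semantics are an assumption on my part, and the DataFusion bug mentioned above may keep spilling from working regardless):

```python
from deltalake import DeltaTable

dt = DeltaTable("path/to/table")

# Attempt to give the sort more room to spill to disk instead of failing with
# ResourcesExhausted. Whether this actually takes effect is exactly what the
# DataFusion issue referenced above is about.
dt.optimize.z_order(
    columns=["col_a", "col_b"],         # placeholder z-order columns
    max_spill_size=100 * 1024**3,       # assumed to be a byte limit (~100 GB here)
)
```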
Environment
Windows (8 GB RAM)
Delta-rs version: 0.16.0
Bug
What happened:
I am trying to execute z-order on partitioned data. There are 65 partitions, and each partition contains approximately 900 MB of data spread across roughly 16 Parquet files of about 55 MB each. It results in the following error:
DeltaError: Failed to parse parquet: Parquet error: Z-order failed while scanning data: ResourcesExhausted("Failed to allocate additional 403718240 bytes for ExternalSorter[2] with 0 bytes already allocated - maximum available is 381425355")
I am new to Delta Lake and don't have much knowledge of how z_order works. Is this due to the large amount of data? I am trying to run it on my local laptop with limited resources.
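For completeness, a minimal sketch of roughly the call that produces the error above; the table path and column names are placeholders:

```python
from deltalake import DeltaTable

dt = DeltaTable("C:/data/partitioned_table")      # 65 partitions, ~900 MB each

# Full-table z-order; this is the call that fails with ResourcesExhausted
# on an 8 GB machine.
dt.optimize.z_order(columns=["col_a", "col_b"])
```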