Add Array::shrink_to_fit(&mut self)
#6790
Conversation
/// └────────────────────┘ left
///
///
/// 2 views
Oops… my editor is set to trim trailing whitespace on save. Let me know if you want me to revert.
I think we should make this call realloc or no-op, i.e. match the broad semantics of Vec::shrink_to_fit.
I think more comprehensive compaction, including materializing offsets, recomputing dictionaries, etc., belongs in a separate kernel where this behaviour can be configured.
I've added a test to make sure of this. It works with the current code, but if I try to use the following:

diff --git a/arrow-buffer/src/buffer/immutable.rs b/arrow-buffer/src/buffer/immutable.rs
index 40625329a..70a100734 100644
--- a/arrow-buffer/src/buffer/immutable.rs
+++ b/arrow-buffer/src/buffer/immutable.rs
@@ -173,7 +173,15 @@ impl Buffer {
///
/// If the capacity is already less than or equal to the desired capacity, this is a no-op.
pub fn shrink_to_fit(&mut self) {
-    if self.len() < self.capacity() {
+    // `Bytes` does not support zero-capacity allocations, so shrink to at least 1 byte.
+    let desired_capacity = self.len().max(1);
+    if desired_capacity < self.capacity() {
+        if let Some(bytes) = Arc::get_mut(&mut self.data) {
+            if bytes.try_realloc(desired_capacity).is_ok() {
+                return;
+            }
+        }
+
+        // Fallback: copy the data into a new, right-sized allocation.
         *self = Self::from_vec(self.as_slice().to_vec())
}
}
diff --git a/arrow-buffer/src/bytes.rs b/arrow-buffer/src/bytes.rs
index ba61342d8..e616ee7c0 100644
--- a/arrow-buffer/src/bytes.rs
+++ b/arrow-buffer/src/bytes.rs
@@ -96,6 +96,27 @@ impl Bytes {
}
}
+/// Try to reallocate the underlying memory region to a new size (smaller or larger).
+///
+/// Returns `Err(())` if the reallocation failed.
+/// Only works for bytes allocated with the standard allocator.
+pub fn try_realloc(&mut self, new_len: usize) -> Result<(), ()> {
+    if let Deallocation::Standard(layout) = self.deallocation {
+        if new_len == 0 {
+            // Zero-sized `realloc` is undefined behavior.
+            return Err(());
+        }
+        if let Ok(new_layout) = std::alloc::Layout::from_size_align(new_len, layout.align()) {
+            // NB: `realloc` takes the *current* layout, not the new one, and a raw
+            // pointer: materializing a `&mut u8` via `as_mut()` here would be UB.
+            let new_ptr =
+                unsafe { std::alloc::realloc(self.ptr.as_ptr(), layout, new_len) };
+            if let Some(ptr) = NonNull::new(new_ptr) {
+                self.ptr = ptr;
+                self.len = new_len;
+                self.deallocation = Deallocation::Standard(new_layout);
+                return Ok(());
+            }
+        }
+    }
+
+    Err(())
+}
+
#[inline]
pub(crate) fn deallocation(&self) -> &Deallocation {
&self.deallocation
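For illustration, here is a hedged sketch of what this enables from the caller's side, assuming the arrow_buffer API with this change applied (how far the capacity actually drops is allocator-dependent):

```rust
use arrow_buffer::Buffer;

fn main() {
    // Build a buffer whose capacity far exceeds its data.
    let mut vec = Vec::with_capacity(1024);
    vec.extend_from_slice(&[1u8, 2, 3]);
    let mut buffer = Buffer::from_vec(vec);
    assert!(buffer.capacity() >= 1024);

    // Best-effort shrink: realloc in place if this is the only reference,
    // otherwise fall back to copying into a right-sized allocation.
    buffer.shrink_to_fit();
    assert!(buffer.capacity() < 1024);
    assert_eq!(buffer.as_slice(), &[1, 2, 3]);
}
```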
What do you think about adding a

#[non_exhaustive]
#[derive(Debug, Default)]
pub struct ShrinkPolicy {}

Options I could see us adding in the future:
It is an idea with merit, but I think we should keep shrink_to_fit simple. More complex logic to recompute minimal array representations belongs in arrow-select; we want to keep arrow-array as lightweight as possible.
How are you measuring this? Many allocators, glibc especially, hang onto memory that isn't in use to avoid thrashing the TLB. They will reuse it; they just won't give it back to the system.
I install an allocator and note down all calls to it (see the test quoted below):
let (concatenated, _bytes_allocated_globally, bytes_allocated_by_this_thread) =
    memory_use(|| {
        let mut concatenated = concatenate(num_concats, list_array.clone());
        concatenated.shrink_to_fit(); // This is what we're testing!
Without this call, this test fails (as it should).
In other words, this test is a regression test for shrink_to_fit.
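For reference, a minimal sketch of how a counting allocator and a `memory_use` helper like the one above could look (the names and the global, rather than per-thread, counter are assumptions, not the PR's exact test harness):

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicIsize, Ordering};

/// Net bytes currently allocated through the global allocator.
static ALLOCATED: AtomicIsize = AtomicIsize::new(0);

struct CountingAllocator;

// SAFETY: delegates all allocation to `System`; only adds bookkeeping.
unsafe impl GlobalAlloc for CountingAllocator {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        ALLOCATED.fetch_add(layout.size() as isize, Ordering::Relaxed);
        System.alloc(layout)
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        ALLOCATED.fetch_sub(layout.size() as isize, Ordering::Relaxed);
        System.dealloc(ptr, layout)
    }

    unsafe fn realloc(&self, ptr: *mut u8, layout: Layout, new_size: usize) -> *mut u8 {
        ALLOCATED.fetch_add(new_size as isize - layout.size() as isize, Ordering::Relaxed);
        System.realloc(ptr, layout, new_size)
    }
}

#[global_allocator]
static GLOBAL: CountingAllocator = CountingAllocator;

/// Runs `f` and returns its result together with the net bytes allocated while it ran.
fn memory_use<R>(f: impl FnOnce() -> R) -> (R, isize) {
    let before = ALLOCATED.load(Ordering::Relaxed);
    let result = f();
    (result, ALLOCATED.load(Ordering::Relaxed) - before)
}
```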
Good idea with
This looks good to me, thank you.
I'll leave this open for a little longer in case anyone else wants to review; memory management shenanigans can have subtleties.
Btw, this is not a breaking change, so this could be added to an upcoming patch release?
The release schedule can be found in the repository readme; the next release will be a major release in early December. Unfortunately we've already started integrating breaking changes, and so a patch release is unlikely save for a major security vulnerability. See #5368 if you're interested in some of the history behind this.
assert_eq!(shrunk_empty.len(), 0);
assert_eq!(shrunk_empty.capacity(), 1); // `Buffer` and `Bytes` don't support 0-capacity, so we shrink to 1
assert_eq!(shrunk_empty.as_slice(), &[]);

assert_eq!(shrunk_empty.capacity(), 1); // NOTE: `Buffer` and `Bytes` don't support 0-capacity
They should, but IIRC you need to use a dangling pointer; there should be some examples of this...
For MutableBuffer there is special handling for the size=0 case, with a dangling_ptr helper. We could copy all that logic to Bytes, but I'd rather not add all that complexity in this PR.
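For reference, a hedged sketch of the dangling-pointer pattern being discussed, mirroring the shape of MutableBuffer's size-0 handling (this is not the PR's code; the helper names are illustrative):

```rust
use std::alloc::Layout;
use std::ptr::NonNull;

/// A well-aligned, non-null pointer for zero-sized "allocations",
/// obtained without touching the allocator.
fn dangling_ptr(layout: Layout) -> NonNull<u8> {
    // SAFETY: an alignment is always non-zero, so this pointer is non-null.
    unsafe { NonNull::new_unchecked(layout.align() as *mut u8) }
}

fn allocate(len: usize, align: usize) -> NonNull<u8> {
    let layout = Layout::from_size_align(len, align).unwrap();
    if layout.size() == 0 {
        // Zero-sized allocations are UB with the global allocator;
        // hand out a dangling pointer instead.
        dangling_ptr(layout)
    } else {
        // SAFETY: the layout has a non-zero size.
        NonNull::new(unsafe { std::alloc::alloc(layout) }).expect("allocation failed")
    }
}

fn deallocate(ptr: NonNull<u8>, len: usize, align: usize) {
    let layout = Layout::from_size_align(len, align).unwrap();
    // Must mirror `allocate`: a dangling pointer was never really allocated.
    if layout.size() != 0 {
        // SAFETY: `ptr` was returned by `allocate` with the same layout.
        unsafe { std::alloc::dealloc(ptr.as_ptr(), layout) };
    }
}
```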
Added in #6817
MIRI caught a couple of bugs in my unsafe code. I pushed some fixes and test improvements.
Looks good to me
Which issue does this PR close?
Add `shrink_to_fit` to `Array` #6360

Rationale for this change
Concatenating many arrow buffers incrementally can lead to situations where the buffers are using much more memory than they need (their capacity is larger than their lengths).
Example:
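A hedged reconstruction of the example (the original snippet did not survive extraction; the array shape, sizes, and iteration count here are assumptions):

```rust
use arrow_array::types::Int32Type;
use arrow_array::{Array, ArrayRef, ListArray};
use arrow_select::concat::concat;

fn main() {
    let list_array = ListArray::from_iter_primitive::<Int32Type, _, _>(
        (0..1000).map(|i| Some(vec![Some(i), Some(i + 1)])),
    );

    // Concatenate incrementally; capacity grows amortized (like Vec::push),
    // so the result can end up holding far more capacity than data.
    let mut concatenated: ArrayRef = concat(&[&list_array]).unwrap();
    for _ in 0..1000 {
        concatenated = concat(&[concatenated.as_ref(), &list_array]).unwrap();
    }

    println!("before: {} bytes", concatenated.get_array_memory_size());
    concatenated.shrink_to_fit(); // the new call added by this PR
    println!("after:  {} bytes", concatenated.get_array_memory_size());
}
```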
If you run this, you will see that 12 MB is used for 6 MB of data.
Adding a call to the new `.shrink_to_fit()` on `concatenated` removes the memory overhead.

What changes are included in this PR?
This PR adds `shrink_to_fit(&mut self)` to `Array` and all buffers. It is best-effort.
Are there any user-facing changes?
`trait Array` now has a `fn shrink_to_fit(&mut self) {}`.
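In sketch form (paraphrasing; the default body is a no-op, so existing implementors are unaffected):

```rust
pub trait Array: std::fmt::Debug + Send + Sync {
    // ...existing methods...

    /// Shrinks the capacity of the array's internal buffers to fit the data.
    ///
    /// Best-effort: the default implementation does nothing.
    fn shrink_to_fit(&mut self) {}
}
```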