The current `PooledBlockAllocator` implementation relies on a single `PooledBlockAllocatorProvider` instance. It places an upper limit on the number of free blocks that can be kept for reuse, but provides no way to release blocks that are no longer needed.
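For illustration, here is a minimal sketch of how a bounded free-list pool of this shape typically behaves. The field and method names (`freeBlocks`, `maxFreeBlocks`, `allocate`, `recycle`) and the cap value are assumptions made for the sketch, not the library's actual internals:

```kotlin
import java.util.ArrayDeque

// Hypothetical sketch -- not the library's actual code. It only illustrates
// the reported behavior: freed blocks are kept for reuse up to a cap, and
// nothing ever returns the retained ones to the GC.
class PooledBlockAllocatorSketch(
    private val blockSize: Int = 32768,
    private val maxFreeBlocks: Int = 4096, // assumed upper limit on reusable blocks
) {
    private val freeBlocks = ArrayDeque<ByteArray>()

    fun allocate(): ByteArray =
        freeBlocks.pollFirst() ?: ByteArray(blockSize)

    fun recycle(block: ByteArray) {
        // Blocks above the cap are dropped, but blocks below it are retained
        // forever -- there is no trim()/release() to shrink the pool.
        if (freeBlocks.size < maxFreeBlocks) freeBlocks.addFirst(block)
    }
}
```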
Let's take this scenario for instance:

- We need to serialize a very large file, say more than 100 MB.
- Using `PooledBlockAllocator` with `blockSize = 32768` results in the creation of 2048 blocks.
- Once that serialization is complete, we no longer need 2048 blocks, because all subsequent serializations will only use a few dozen blocks.
- As a result, a large number of pre-allocated blocks sit in the pool and are never used again (see the rough estimate below).
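For scale: 2048 blocks × 32768 bytes per block is 64 MiB of pooled memory retained indefinitely, while the steady-state workload of a few dozen blocks needs only on the order of a megabyte or two.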
It would be nice to have an API for explicitly controlling the `PooledBlockAllocator`. Alternatively, the problem could be addressed with a heuristic based on "allocation pressure": for example, if the pool holds 2048 blocks but no more than 100 of them have been used in the last minute, we can be reasonably sure that releasing half of the free blocks is safe.
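Sketched below is one shape such a heuristic could take, building on the hypothetical pool above. The class name, the `onAllocate`/`maybeTrim` methods, and the thresholds are all illustrative assumptions, not a concrete API proposal:

```kotlin
// Illustrative only: one possible "allocation pressure" heuristic.
// Track the peak number of blocks in use over a sliding window; if the
// free list is much larger than that peak, release half of the surplus.
class AllocationPressureTrimmer(
    private val windowMillis: Long = 60_000, // the "last 1 minute" window from above
) {
    private var peakUsedInWindow = 0
    private var windowStart = System.currentTimeMillis()

    fun onAllocate(currentlyUsed: Int) {
        peakUsedInWindow = maxOf(peakUsedInWindow, currentlyUsed)
    }

    /** Returns how many free blocks could safely be dropped, if any. */
    fun maybeTrim(freeBlocks: Int): Int {
        val now = System.currentTimeMillis()
        if (now - windowStart < windowMillis) return 0
        windowStart = now
        val peak = peakUsedInWindow
        peakUsedInWindow = 0
        // e.g. 2048 free blocks but a recent peak of 100 in use:
        // release half of the surplus above the peak.
        return if (freeBlocks > peak * 2) (freeBlocks - peak) / 2 else 0
    }
}
```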