This repository has been archived by the owner on Dec 4, 2023. It is now read-only.
When downloading objects in chunks, for instance using `TransferManager` with `multiPartCopyPartSize`, the memory usage of this mock is proportional to `object size * multipart count`, irrespective of backend.

For example, if I have a 300 MB object and download it in chunks of 30 MB, resulting in `300MB / 30MB = 10` parallel downloads, the mock uses `300MB * 10 ≈ 3GB` of memory. This is prohibitive.

I suspect this is because each handler reads the data from the provider via `GetObjectData`, which always returns the full file byte array. With 10 actors each reading the entire file into memory, this memory bloat is the consequence.
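A possible fix would be for the provider to serve only the requested byte range instead of materialising the whole object per handler. A minimal sketch of that idea (the `readRange` helper and its signature are hypothetical, not the mock's actual API):

```java
import java.io.IOException;
import java.io.RandomAccessFile;

public class RangeRead {
    // Read only [offset, offset + length) of the backing file,
    // so each parallel part download allocates at most one chunk,
    // not the full object.
    static byte[] readRange(String path, long offset, int length) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(path, "r")) {
            byte[] buf = new byte[length];
            raf.seek(offset);
            raf.readFully(buf);
            return buf;
        }
    }
}
```

With this approach, 10 parallel 30 MB part downloads would hold roughly 10 × 30 MB = 300 MB in memory rather than 10 copies of the full 300 MB object.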