```
--- Last few GCs --->

[88570:0x5d8d1c0] 10971084 ms: Scavenge 1879.1 (1982.1) -> 1864.6 (1983.3) MB, 5.6 / 0.0 ms (average mu = 0.186, current mu = 0.149) allocation failure
[88570:0x5d8d1c0] 10971209 ms: Scavenge 1880.1 (1983.3) -> 1870.7 (1984.3) MB, 9.7 / 0.2 ms (average mu = 0.186, current mu = 0.149) task
[88570:0x5d8d1c0] 10971254 ms: Scavenge 1881.4 (1984.5) -> 1871.0 (1989.5) MB, 10.5 / 0.0 ms (average mu = 0.186, current mu = 0.149) allocation failure

<--- JS stacktrace --->

FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
 1: 0xa04200 node::Abort() [node]
 2: 0x94e4e9 node::FatalError(char const*, char const*) [node]
 3: 0xb7978e v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [node]
 4: 0xb79b07 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [node]
 5: 0xd34395 [node]
 6: 0xd34f1f [node]
 7: 0xd42fab v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [node]
 8: 0xd46b6c v8::internal::Heap::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [node]
 9: 0xd1524b v8::internal::Factory::NewFillerObject(int, bool, v8::internal::AllocationType, v8::internal::AllocationOrigin) [node]
10: 0x105b23f v8::inter
```
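For what it's worth, the error above is Node.js/V8 exhausting its default old-space heap limit (the ~1.9 GB heap sizes in the GC lines are consistent with that). A generic, non-s3p-specific workaround is to raise the limit with Node's standard `--max-old-space-size` flag; whether it actually helps depends on whether s3p's memory growth is bounded. The bucket names and `cp` invocation below are placeholders only:

```sh
# Raise the V8 heap limit to 8 GB for this run (standard Node.js flag).
# The s3p arguments here are illustrative placeholders, not a recommended command.
NODE_OPTIONS="--max-old-space-size=8192" npx s3p cp --bucket source-bucket --to-bucket destination-bucket
```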
Definitely want to improve memory efficiency, but I could use a bit more detail. How many files are you copying? What's their median size?
Note: You can manage your memory usage with some of the advanced options. For example (see the combined sketch after this block):
```
--copy-concurrency 500
--large-copy-concurrency 75
--list-concurrency 100
--max-queue-size 50000
```
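For illustration, here is what a full copy command combining those example values might look like. This is only a sketch: the bucket names are placeholders, and the `cp` / `--bucket` / `--to-bucket` spelling is assumed from s3p's usual CLI, so adjust it to your actual invocation.

```sh
# Sketch only: placeholder bucket names; option values are the examples above.
npx s3p cp \
  --bucket source-bucket \
  --to-bucket destination-bucket \
  --copy-concurrency 500 \
  --large-copy-concurrency 75 \
  --list-concurrency 100 \
  --max-queue-size 50000
```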
Can you run an s3p summarize on your bucket and, if it's safe to do so, share the results?
```
s3p summarize
```
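For reference, a bucket-scoped run might look like the sketch below; the `--bucket` flag spelling here is an assumption on my part, so double-check it against s3p's help output.

```sh
# Assumed flag spelling; summarizes object counts and size distribution for the bucket.
npx s3p summarize --bucket my-bucket
```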
I found that the memory problem (a crash after allocating about 2 GB of RAM) appears after roughly 1 hour 40 minutes and 2.5 million objects copied, regardless of the advanced options.
I am having this issue too when listing a large bucket. Adding --list-concurrency 100 made no difference.
It died after listing 5 million rows.