
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory #30

Open
vlad88sv opened this issue Apr 11, 2021 · 4 comments
Labels: bug (Something isn't working)

Comments

@vlad88sv

<--- Last few GCs --->

[88570:0x5d8d1c0] 10971084 ms: Scavenge 1879.1 (1982.1) -> 1864.6 (1983.3) MB, 5.6 / 0.0 ms  (average mu = 0.186, current mu = 0.149) allocation failure 
[88570:0x5d8d1c0] 10971209 ms: Scavenge 1880.1 (1983.3) -> 1870.7 (1984.3) MB, 9.7 / 0.2 ms  (average mu = 0.186, current mu = 0.149) task 
[88570:0x5d8d1c0] 10971254 ms: Scavenge 1881.4 (1984.5) -> 1871.0 (1989.5) MB, 10.5 / 0.0 ms  (average mu = 0.186, current mu = 0.149) allocation failure 


<--- JS stacktrace --->

FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
 1: 0xa04200 node::Abort() [node]
 2: 0x94e4e9 node::FatalError(char const*, char const*) [node]
 3: 0xb7978e v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [node]
 4: 0xb79b07 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [node]
 5: 0xd34395  [node]
 6: 0xd34f1f  [node]
 7: 0xd42fab v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [node]
 8: 0xd46b6c v8::internal::Heap::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [node]
 9: 0xd1524b v8::internal::Factory::NewFillerObject(int, bool, v8::internal::AllocationType, v8::internal::AllocationOrigin) [node]
10: 0x105b23f v8::inter
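
For anyone hitting this in the meantime: the log shows V8 aborting near its default old-space limit (roughly 2 GB here). A standard Node.js workaround, independent of s3p, is to raise that limit with NODE_OPTIONS; a sketch, where the 4096 MB value and the cp command shape are illustrative assumptions:

    # Raise V8's old-space heap limit to 4 GB for the s3p process.
    # NODE_OPTIONS is read by any Node.js process, so it applies to the s3p CLI too.
    # The cp / --bucket / --to-bucket spelling is an assumption; check s3p --help.
    NODE_OPTIONS=--max-old-space-size=4096 npx s3p cp --bucket source-bucket --to-bucket dest-bucket

This only raises the ceiling; it does not fix the underlying growth, so a large enough job can still exhaust the bigger heap.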
@shanebdavis (Member) commented Jul 14, 2021

Definitely want to improve memory efficiency, but I could use a few more details. How many files are you copying? What's their median size?

Note: You can manage your memory usage with some of the advanced options; for example (see the combined invocation after this list):

  • --copy-concurrency 500 (ADVANCED) - Maximum number of simultaneous small-copies
  • --large-copy-concurrency 75 (ADVANCED) - Maximum number of simultaneous large-copies
  • --list-concurrency 100 (ADVANCED) - Maximum number of simultaneous list operations
  • --max-queue-size 50000 (ADVANCED) - Maximum number of files that can be queued for copying before list-reading is throttled.
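
A sketch of a tuned-down invocation combining these options (the specific values and the cp / --bucket / --to-bucket spelling are illustrative assumptions, not recommendations):

    # Lower the concurrency caps and the copy queue to trade throughput for peak memory.
    npx s3p cp \
      --bucket source-bucket \
      --to-bucket dest-bucket \
      --copy-concurrency 100 \
      --large-copy-concurrency 10 \
      --list-concurrency 20 \
      --max-queue-size 10000

A smaller --max-queue-size should matter most here, since queued file entries are held in memory until they are copied.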

@shanebdavis added the bug label Jul 14, 2021
@shanebdavis (Member) commented Jul 14, 2021

Can you run an s3p summarize on your bucket and, if it's safe to do so, share the results?
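
For reference, a summarize run would look something like this (the --bucket spelling is an assumption about the command shape; adjust to whatever s3p --help reports):

    # Report object-count and size statistics for the bucket.
    npx s3p summarize --bucket my-bucket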

@klimchuk

I found that the memory problem (a crash after allocating 2 GB of RAM) appears after about 1 hr 40 min and 2.5 million objects copied, regardless of the advanced options.

@mwilliamson-nid commented Aug 26, 2022

I am having this issue too when listing a large bucket. Adding --list-concurrency 100 made no difference.

It died after listing 5 million rows.
