I'm running the sync command for a bucket that contains a large number (around 100 million) of relatively small (1-4 kB) files. Most of the time, the command ends with a JavaScript heap out of memory exception like this:
<--- Last few GCs --->
[31547:0x49594a0] 53318576 ms: Scavenge 15416.2 (16413.1) -> 15401.9 (16413.9) MB, 28.2 / 0.0 ms (average mu = 0.302, current mu = 0.272) allocation failure
[31547:0x49594a0] 53318632 ms: Scavenge 15417.5 (16414.1) -> 15403.2 (16414.9) MB, 28.5 / 0.0 ms (average mu = 0.302, current mu = 0.272) allocation failure
[31547:0x49594a0] 53318688 ms: Scavenge 15418.7 (16415.1) -> 15404.4 (16416.6) MB, 28.5 / 0.0 ms (average mu = 0.302, current mu = 0.272) allocation failure
<--- JS stacktrace --->
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
1: 0xa04200 node::Abort() [node]
2: 0x94e4e9 node::FatalError(char const*, char const*) [node]
3: 0xb797be v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [node]
4: 0xb79b37 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [node]
5: 0xd343c5 [node]
6: 0xd34f4f [node]
7: 0xd42fdb v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [node]
8: 0xd46b9c v8::internal::Heap::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [node]
9: 0xd1527b v8::internal::Factory::NewFillerObject(int, bool, v8::internal::AllocationType, v8::internal::AllocationOrigin) [node]
10: 0x105b21f v8::internal::Runtime_AllocateInYoungGeneration(int, unsigned long*, v8::internal::Isolate*) [node]
11: 0x14011f9 [node]
Increasing the max heap size only delays the inevitable. The actual command:
NODE_OPTIONS=--max_old_space_size=16384 nohup npx s3p sync --bucket .... --to-bucket .... --overwrite --prefix products/ 2>&1 &> products.log &
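One way to keep each process's working set small, rather than raising the heap limit further, is to shard the sync into several smaller runs over narrower `--prefix` values, so each invocation only tracks a fraction of the ~100M keys. A minimal sketch, using only the flags shown in the command above; the bucket names are hypothetical placeholders, and the assumption that keys under `products/` begin with a hex character must be adjusted to your actual key layout:

```shell
#!/bin/sh
# Hypothetical bucket names -- substitute your own.
SRC_BUCKET="my-source-bucket"
DST_BUCKET="my-dest-bucket"

# Build the s3p invocation for one sub-prefix. It is printed here
# instead of executed so the sharding can be inspected first; drop
# the `echo` to actually run each sync.
sync_prefix() {
  echo NODE_OPTIONS=--max_old_space_size=4096 \
    npx s3p sync --bucket "$SRC_BUCKET" --to-bucket "$DST_BUCKET" \
    --overwrite --prefix "products/$1"
}

# Assumed key layout: objects under products/ start with a hex digit.
# Widen the split (e.g. two characters) if sub-prefixes are still too large.
for c in 0 1 2 3 4 5 6 7 8 9 a b c d e f; do
  sync_prefix "$c"
done
```

Each run then finishes (or fails) independently, so a crash partway through costs one shard rather than the whole bucket, and shards can be retried or parallelized separately.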