The current maximum number of open files (RLIMIT_NOFILE) is not being set high enough. In some cases, certain combinations of the fanout and the current open-file limit lead to pdsh dumping core after reaching its maximum number of open files. This code:
int nfds = (2 * opt->fanout) + 32;
does not end up setting enough file descriptors in a case like this (ssh rcmd, 1024 max open files, fanout of 300):
quartz187 ~/pie# ulimit -Sn 1024; strace -e prlimit64 -o strace.out pdsh -f 300 -w "equartz[1-300]" echo > /dev/null; echo; cat strace.out
pdsh@quartz187: pipecmd: socketpair: Too many open files
pdsh@quartz187: pipecmd: socketpair: Too many open files
pdsh@quartz187: pipecmd: socketpair: Too many open files
pdsh@quartz187: pipecmd: socketpair: Too many open files
pdsh@quartz187: pipecmd: socketpair: Too many open files
pdsh@quartz187: pipecmd: socketpair: Too many open files
pdsh@quartz187: pipecmd: socketpair: Too many open files
pdsh@quartz187: exec cmd ssh failed for host equartz242
pdsh@quartz187: pipecmd: socketpair: Too many open files
pdsh@quartz187: pipecmd: socketpair: Too many open files
pdsh@quartz187: exec cmd ssh failed for host equartz299
pdsh@quartz187: pipecmd: socketpair: Too many open files
pdsh@quartz187: pipecmd: socketpair: Too many open files
pdsh@quartz187: exec cmd ssh failed for host equartz38
pdsh@quartz187: pipecmd: socketpair: Too many open files
pdsh@quartz187: pipecmd: socketpair: Too many open files
pdsh@quartz187: pipecmd: socketpair: Too many open files
pdsh@quartz187: pipecmd: socketpair: Too many open files
Segmentation fault
prlimit64(0, RLIMIT_STACK, NULL, {rlim_cur=RLIM64_INFINITY, rlim_max=RLIM64_INFINITY}) = 0
prlimit64(0, RLIMIT_NOFILE, NULL, {rlim_cur=1024, rlim_max=125*1024}) = 0
+++ killed by SIGSEGV (core dumped) +++
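For context, here is the arithmetic as a small standalone sketch: with -f 300 the estimate above works out to 2*300 + 32 = 632 descriptors, which is below the 1024 soft limit, so presumably no increase is ever requested (the strace output shows only a query of RLIMIT_NOFILE), even though ssh via pipecmd can consume several descriptors per active connection. The program below is illustrative only, not pdsh code:

```c
/* Illustrative only: reproduces the fd-limit arithmetic from the report.
 * With -f 300 the estimate is 2*300 + 32 = 632, below the 1024 soft
 * limit seen in the strace output, so presumably no increase is ever
 * requested even though the real per-connection fd usage is higher.
 */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    int fanout = 300;                /* -f 300 from the example above */
    int nfds = (2 * fanout) + 32;    /* estimate from the code above  */
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) != 0)
        return 1;

    printf("estimated nfds = %d, RLIMIT_NOFILE soft limit = %llu\n",
           nfds, (unsigned long long) rl.rlim_cur);

    if ((rlim_t) nfds <= rl.rlim_cur)
        printf("estimate <= soft limit: limit would be left unchanged\n");
    else
        printf("estimate > soft limit: limit would need to be raised\n");

    return 0;
}
```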
The 2*fanout estimate may apply only to the rcmd implementation and not to exec (it could be 4*fanout here, since there is potentially a socketpair(2) call for both stdin/stdout and stderr). Perhaps all that is required is to increase the multiplier to 4 (and, of course, fix the segfault).
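A minimal sketch of what that adjustment might look like, assuming the 4*fanout estimate suggested above (the function name, the +32 slack carried over from the existing code, and the clamp-to-hard-limit behavior are all illustrative, not pdsh's actual implementation):

```c
/* Illustrative sketch, not pdsh code: grow the RLIMIT_NOFILE soft limit
 * to cover roughly 4 fds per active connection (one socketpair(2) for
 * stdin/stdout and one for stderr), clamp to the hard limit, and
 * report failure instead of continuing with too few descriptors.
 */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/resource.h>

static int raise_nofile_limit(int fanout)
{
    rlim_t want = (rlim_t) (4 * fanout) + 32;   /* 4*fanout + slack */
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) != 0)
        return -1;

    if (want <= rl.rlim_cur)
        return 0;                     /* soft limit already sufficient */

    rl.rlim_cur = (want > rl.rlim_max) ? rl.rlim_max : want;

    if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
        fprintf(stderr, "setrlimit(RLIMIT_NOFILE): %s\n", strerror(errno));
        return -1;
    }
    return 0;
}

int main(void)
{
    /* With -f 300: want = 4*300 + 32 = 1232, so a 1024 soft limit
     * would be raised (the hard limit in the strace output above is
     * 128000, so the clamp does not kick in). */
    return raise_nofile_limit(300) == 0 ? 0 : 1;
}
```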