Everything looks ok, but spades stuck #1444
Comments
I have more or less the same problem, but it gets stuck in the read error correction step for up to a week, although the log shows only about an hour of runtime.
The problem is likely I/O on your server if it is stuck at that point. Try moving the temporary directory from network shared storage to a local / scratch disk.
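For illustration, SPAdes' documented `--tmp-dir` option can point temporary files at local scratch storage. A minimal sketch of building such an invocation; the read, output, and scratch paths are hypothetical placeholders:

```python
import shlex

# Direct SPAdes temporary files to local scratch storage instead of a
# network file system, using the documented --tmp-dir option.
# All paths below are hypothetical placeholders.
cmd = [
    "spades.py",
    "-1", "reads_1.fastq.gz",
    "-2", "reads_2.fastq.gz",
    "--tmp-dir", "/local/scratch/spades_tmp",  # keep temp I/O off NFS
    "-o", "assembly_out",
]
print(shlex.join(cmd))
```

The same effect can be had by exporting `TMPDIR` on some setups, but passing `--tmp-dir` explicitly is the unambiguous route.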
Thanks for the suggestion. My temporary directory is already on local storage. Additionally, I noticed that when I downsample the data to 1 GB (from the original 40 GB), the assembly completes without any issues. However, when I downsample to 4 GB, the k-mer counting step gets stuck and the process halts.
Hi, I noticed that even though the process has been stuck for a week, you haven't terminated it. Does this indicate that it might still be running, albeit very slowly? I'm curious whether you've managed to resolve this issue or have any further insights to share.
It doesn't seem so. You
And indeed, there is no
The default limit was used. Additionally, the process stopped with the message "finished abnormally, OS return value: 12," despite having at least 1600 GB of free memory available. spades.log params.txt
Right. And if it is on some kind of NFS shared storage, that could easily cause problems, as these systems were not designed to handle heavy I/O.
It doesn't seem so:
So, the hard memory limit was set to 250 GB (the default) and you did not override it. As a result, when more RAM was required, you received an out-of-memory error, per the log:
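For reference, the hard limit can be raised with SPAdes' documented `-m` / `--memory` option (value in GB, default 250). A minimal sketch of rebuilding the command line with a higher limit; the file names and the 1600 GB value are illustrative, taken from the free-memory figure mentioned above:

```python
import shlex

# Rebuild the SPAdes invocation with a higher hard memory limit via the
# documented -m option (value in GB; the default is 250).
# Read and output paths are hypothetical placeholders.
mem_limit_gb = 1600  # keep this at or below the node's physical RAM
cmd = [
    "spades.py",
    "-1", "reads_1.fastq.gz",
    "-2", "reads_2.fastq.gz",
    "-m", str(mem_limit_gb),
    "-o", "assembly_out",
]
print(shlex.join(cmd))
```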
By the way, I was wondering if downsampling the data would improve the assembly results or make them worse.
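Downsampling is usually done with dedicated read-sampling tools, but as an illustration of the idea, here is a minimal reservoir-sampling sketch over FASTQ records (assuming plain 4-line records; this is not SPAdes code, just a hypothetical helper):

```python
import random

def downsample_fastq(lines, n, seed=0):
    """Keep a uniform random sample of n FASTQ records (4 lines each)
    via reservoir sampling, so the whole file never sits in memory."""
    rng = random.Random(seed)
    reservoir = []
    for i, rec in enumerate(zip(*[iter(lines)] * 4)):  # group lines in 4s
        if i < n:
            reservoir.append(rec)
        else:
            j = rng.randrange(i + 1)  # replace with decreasing probability
            if j < n:
                reservoir[j] = rec
    return reservoir

# Tiny synthetic example: 10 records, keep 3.
records = [("@r%d" % i, "ACGT", "+", "IIII") for i in range(10)]
lines = [line for rec in records for line in rec]
sample = downsample_fastq(lines, 3)
print(len(sample))  # 3
```

Note that paired-end files must be sampled in lockstep (same seed and record order) or the mate pairing breaks.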
The file system in use is ParaStor, a distributed file system. Is that OK?
I reset the memory limit but still got the same issue. spades.log params.txt
You'd better ask your system administrator. We cannot know the specifics of every NAS solution and its issues.
You didn't:
Please double-check the options and the log next time before submitting an issue. Refer to the SPAdes manual for information about command-line options: https://ablab.github.io/spades/running.html#advanced-options
Description of bug
Everything appears to be running normally: the logs show no errors or warnings, and the system output looks as expected. However, the program has been stuck for over 30 hours without making any progress.
I allocated 16 threads, but at the point where the process stalled, CPU usage was only about 30% on a single core, and it remained stuck indefinitely. Additionally, when I allocate too many threads, the program terminates due to insufficient memory (OS return value: 12). I have previously run tests with SPAdes, and it executed normally.
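For context, an OS return value of 12 corresponds to `ENOMEM` ("Cannot allocate memory") on Linux, consistent with the process being killed when an allocation fails. A quick check:

```python
import errno
import os

# Error code 12 on Linux is ENOMEM, matching the
# "finished abnormally, OS return value: 12" message above.
print(errno.ENOMEM, os.strerror(errno.ENOMEM))
```

This is why adding more threads makes the failure more likely: each thread's working buffers add to the peak memory footprint.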
I would appreciate any insights or suggestions on what might be causing the stall. Thank you!
spades.log
params.txt
SPAdes version
SPAdes version: 4.0.0
Operating System
OS: Linux-3.10.0-1127.el7.x86_64-x86_64-with-glibc2.17
Python Version
Python version: 3.13.1
Method of SPAdes installation
conda
No errors reported in spades.log