bug when running test data #5

Open
mol529 opened this issue Oct 29, 2015 · 2 comments
mol529 commented Oct 29, 2015

Hello!
I ran LSA on the test data and hit a problem I don't know how to solve.
The first and second steps succeed, but the third step, "bash ReadPartitioning.sh 4", fails with:

Fri Oct 30 01:57:27 CST 2015 partitioning reads in hashed input file 4
printing end of last log file...
Traceback (most recent call last):
  File "LSA/write_partition_parts.py", line 71, in <module>
    G = [open('%s%s.%s.cols.%d' % (tmpdir,sample_id,outpart,i),'w') for i in range(0,2**hashobject.hash_size,2**hashobject.hash_size/50)]
ValueError: range() step argument must not be zero

(the same traceback is printed several times, interleaved, in the log)
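If I'm reading the code right (just my guess, not confirmed), the range() step 2**hashobject.hash_size/50 is Python 2 integer division, so it becomes 0 whenever 2**hash_size is less than 50, i.e. whenever hash_size is below 6 -- which makes me suspect the hash size from the earlier steps isn't being picked up here. A minimal Python 2 sketch of the failure (hash_size = 4 is only an example value, not what the pipeline actually uses):

    # Python 2: reproduce the range() error seen in write_partition_parts.py
    hash_size = 4                    # example only; any value below 6 makes 2**hash_size < 50
    step = 2**hash_size / 50         # integer division: 16 / 50 == 0
    print(step)                      # 0
    range(0, 2**hash_size, step)     # ValueError: range() step argument must not be zero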

Also, my datasets are more than 30 GB. Can I run it with qsub instead of bsub?

Thanks!


nmb85 commented Nov 3, 2015

"Can I run it with qsub instead of bsub?"
FWIW, I've been running my jobs with qsub without any problems. This PDF gives you a one-for-one conversion between the qsub (SGE) and bsub (LSF) options. Maybe later this weekend I'll post a modified create_jobs.py for SGE.

Here is an example of an LSF script modified for SGE. the header, PATH, and SGE_TASK_ID need to be changed:

#!/bin/bash
#$ -S /bin/bash               # interpret the job script with bash
#$ -V                         # export the submission environment to the job
#$ -cwd                       # run from the current working directory
#$ -N KmerCorpus              # job name
#$ -t 1-6:1                   # array job: tasks 1 through 6, step 1
#$ -o ./Logs/KmerCorpus.out   # stdout log
#$ -e ./Logs/KmerCorpus.err   # stderr log
#$ -q micro                   # queue name (site-specific)
PATH=...
export PATH
echo Date: `date`
t1=`date +%s`
sleep $(($SGE_TASK_ID % 60))  # stagger task start times
python LSA/kmer_corpus.py -r ${SGE_TASK_ID} -i ./hashed_reads/ -o ./cluster_vectors/
status=$?
[ $status -eq 0 ] || echo "JOB FAILURE: $status"
echo Date: `date`
t2=`date +%s`
tdiff=`echo 'scale=3;('$t2'-'$t1')/3600' | bc`
echo 'Total time:  '$tdiff' hours'
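Then, assuming you save that as something like KmerCorpus.sh (the filename is just an example), you submit it with

    qsub KmerCorpus.sh

and SGE runs the six array tasks, each seeing its own $SGE_TASK_ID where the LSF version would have used LSB_JOBINDEX.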

brian-cleary (Owner) commented
Sorry for the slow response.

That's great about converting over to qsub! So were you able to get past
your original error?

On Mon, Nov 2, 2015 at 10:39 PM, russianconcussion <[email protected]> wrote:

Can I run it with qsub instead of bsub?
FWIW, I've been running my jobs with qsub without any problems. This PDF
(http://www.med.upenn.edu/hpc/assets/user-content/documents/SGE_to_LSF_User_Migration_Guide.pdf)
gives you a one-for-one conversion between the qsub (SGE) and bsub (LSF) options.


