I have a dataset which was far too large to basecall in one piece. All data comes from the same sample aligned against the same reference, but it is split into 149 individual pieces (i.e. 149 folders, each containing the BAM file and CTC outputs). I've made some attempts at merging the files based on other responses in this repository, but the files are far too large to fit into memory all at once, so I am struggling to merge the `.npy` files. Is there an approach for handling this?