Concatenating multiple runs #70
@jenniferColonell any advice on this?
Hi @harshk95 Sorry it took me so long to get around to answering this question! These errors show that the start times in the metadata files are not consistent with consecutive trials -- the negative values for 'rem' indicate that the calculated end time of the concatenated file comes AFTER the end of the file it is trying to add. I'm guessing from the errors that these were actually just independent recordings (that is, not collected as trials) that you need to concatenate. Indeed, supercat, which just concatenates recordings end to end, is the correct CatGT command.
@jenniferColonell Hi! I use your modified edition for SpikeGLX and want to concatenate bin files. But the bin files are not from different trials separated by triggers. When we recorded, SpikeGLX would sometimes crash because of a disk-writing problem, so we started recording again (independent recordings). I changed the names of these bin files to t0~n and set t as 0,n to run the pipeline. But CatGT just created a bin file whose size is the same as the last recording (the xxx_g0_tn.imec0.bin file). Why? Can I use the pipeline to concatenate them?
Hi @PathwayinGithub The specific problem you are seeing probably has to do with paths. However, for correct concatenation across multiple runs, you'll need to use the supercat feature in CatGT; for multiple streams, make sure you include the -supercat_trim_edges option. I haven't implemented this in the pipeline because it's a less common case, but I can help you with writing the appropriate .bat files if that's useful. The basic procedure (see the CatGT ReadMe for details) is sketched below. By the way, what kinds of disk-writing problems are you having? Are your disks filling up? If you are running multiple probes, you can direct the data streams to different disks to avoid that (this is a feature in SpikeGLX).
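A minimal sketch of that two-step procedure, assuming two independent recordings named rec1_g0 and rec2_g0 stored under D:\data, a single imec probe, and CatGT callable from the command line; the run names, paths, and destination folder are placeholders, and the exact -supercat element syntax (including whether the step-1 output folders carry a catgt_ prefix) should be checked against the CatGT ReadMe:

REM Step 1: run CatGT on each recording separately, so that each run
REM has standard CatGT output that supercat can read.
CatGT -dir=D:\data -run=rec1 -g=0 -t=0 -prb=0 -prb_fld -ap
CatGT -dir=D:\data -run=rec2 -g=0 -t=0 -prb=0 -prb_fld -ap

REM Step 2: join the per-run outputs end to end with supercat.
REM Each {dir,run_ga} element names one run, in the order it should appear
REM in the joined file; -supercat_trim_edges trims every stream to matched
REM sync edges so that multiple streams stay aligned.
CatGT -supercat={D:\data,rec1_g0}{D:\data,rec2_g0} -prb=0 -prb_fld -ap -supercat_trim_edges -dest=D:\data\supercat_out

With more than one probe, extend -prb (for example -prb=0:3) in both steps.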
Hi,
We had an issue with concatenating recordings from different triggers, with data acquired in SpikeGLX from an NP1.0 probe. We followed the inline comments in 'sglx_multi_run_pipeline.py' using the fork for SpikeGLX data and had 5 different triggers to concatenate.
However, we do not seem to get the concatenated file from all the runs, since the duration is much shorter than expected, and we get the following log from CatGT.
This is what the folder with the run looks like -
We noticed that the CatGT documentation mentions supercat and were wondering whether this is the command that is being run.
Thanks!
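For comparison, joining triggers 0 through 4 of a single run (the case the pipeline is designed for) is the ordinary, non-supercat CatGT call; a hypothetical example with a placeholder run name and paths might look like the line below. If the five segments were in fact separate recordings rather than triggers of one run, the supercat procedure discussed above applies instead.

REM Hypothetical: concatenate trigger files t0..t4 of gate g0 into one output.
CatGT -dir=D:\data -run=myrun -g=0 -t=0,4 -prb=0 -prb_fld -ap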