Upload dataset on OpenNeuro #30
Hi @jcohenadad, I hope you had a great start to the new year! Could you please have a look at this data organization? I could only include 2 participants due to space limitations, but that should be enough to show you the data organization.
Hi @jcohenadad, thank you very much for the quick reply. Sure, I can add it as a run name, I guess! I will see what the best option is! Yes, it does, it just gives a few warnings which are fine!
Since we ended up cropping the data to only 20 volumes, I find that naming them e.g. 'motor' is a bit misleading. My suggestion would be to rename them to 'rest', as suggested in WARNING 1.
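A minimal sketch of how such a rename could be scripted, assuming the files carry a task-motor entity under a BIDS root (both the root path and the entity labels here are assumptions):

```python
from pathlib import Path

# Hypothetical BIDS root; adjust to the actual dataset location.
root = Path("bids_dataset")

# Rename every file carrying the (assumed) task-motor entity to task-rest;
# the matching JSON sidecars are picked up by the same glob.
for old_path in sorted(root.rglob("*task-motor*")):
    new_path = old_path.with_name(old_path.name.replace("task-motor", "task-rest"))
    old_path.rename(new_path)
```

The TaskName field inside each sidecar would also need to be updated to match the new entity.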
WARNING 2 is also worrisome: the number of files should be the same across subjects.
WARNING 3 can be addressed right away. Why wait?
Thank you very much! Wrt 1, it is up to you; yes, I agree that the first 20 volumes may not contain task, but what if they do? 'Rest' may be misleading as well. Ideally, it would have been nice to get data during task blocks, now that I think about it. Wrt 2, I do not think that is worrisome, because the Leipzig data has a pain task and Ken's data has a motor task. Because it will be a multisite dataset, we will get that warning should we decide to keep different task names. I believe that warning refers to that; otherwise, each folder should have 1 dataset. Wrt 3, I just added your name to test. I will organize the whole dataset. Some parts of the organization will have to be manual (also to make sure the authors are correct, etc.). So, I just wanted to get an exemplary dataset first and will organize the whole dataset later.
Hum... I really don't think we should encourage people to do anything useful, from a functional standpoint, with only 20 volumes. What was the motivation for adding the individual scans again? Alternatively, if we really want the data to be more useful, should we then reconsider also adding the source (i.e., non-moco) data with all the volumes? One possibility would be to upload all volumes under sub-XX and the moco + moco-mean under derivatives.
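As a sketch of what that layout could look like (the derivatives folder name and the desc labels are assumptions, not names actually used in the dataset):

```
sub-01/
  func/
    sub-01_task-rest_bold.nii.gz          # full, non-moco time series
    sub-01_task-rest_bold.json
derivatives/
  moco/
    sub-01/
      func/
        sub-01_task-rest_desc-moco_bold.nii.gz      # motion-corrected series
        sub-01_task-rest_desc-mocomean_bold.nii.gz  # mean of the moco series
```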
Then that relates to Wrt 1. If we fix Wrt 1 by naming all the files 'rest', that will do.
To do things properly, I suggest adding the authors for the dataset you currently have (i.e., Leipzig and NW). Then you upload to OpenNeuro. Then you add more datasets with more authors, etc. Additional comments:
Thank you so much for the reply, @jcohenadad!
Individual volumes? So that we can test and develop our segmentation method on individual volumes, which hopefully can be used for developing a moco algorithm!
Useful for us (for development of other things), for others, or both?
Yes, will change that!
Sure, once you approve the format, I will upload to OpenNeuro.
My follow-up questions:
I'm having afterthoughts about this 'mid-point' solution. I'd say let's share the whole time series, or only the mean moco, but not a sample of it.
Yes, I would agree. Along the same lines, we do not need the moco time series right now, so I suggest we only include the mean moco for now.
https://intranet.neuro.polymtl.ca/data/dataset-curation.html#json-sidecars
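On the sidecar side, a minimal sketch of generating one programmatically in Python; the field values below are placeholders, not the actual acquisition parameters:

```python
import json

# Placeholder values for illustration; real values must come from the acquisition.
sidecar = {
    "TaskName": "rest",
    "RepetitionTime": 2.0,  # seconds (placeholder)
    "EchoTime": 0.03,       # seconds (placeholder)
}

with open("sub-01_task-rest_bold.json", "w") as f:
    json.dump(sidecar, f, indent=4)
```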
Let's first clarify the previous points, to decide what makes the most sense. If we end up not uploading the time series (for now), then maybe we should reconsider putting the moco-mean under the source data, and call it:
The derivatives would be called:
Tagging @NathanMolinier
rec would be the ideal field:
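For illustration, filenames using the rec entity could look like the following; the subject, task, and rec labels are assumptions, not the dataset's actual names:

```
sub-01_task-rest_bold.nii.gz                 # source time series
sub-01_task-rest_rec-mocomean_bold.nii.gz    # mean of the motion-corrected series
```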
Hmm, I am running into an error that I have encountered before, and I am not sure how I solved it - maybe I did not...
Ah... that's annoying. Can you please dig a little into the BIDS specs to see if there is a workaround, and also post this issue on Neurostars to ask what they suggest? Thanks.
Yes, I will do that and keep you posted!
Hi @jcohenadad, I was able to figure this out thanks to help from the OpenNeuro team. The trick is to edit the header and make sure the image is 4D, which can be easily done like this (just adding this here, as it may be helpful for future reference). I will be organizing the data. I also need to edit the JSON files, so it will take a bit of time, but I will try to do it asap :)
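A minimal sketch of one way to do this with nibabel, assuming the mean image is 3D and needs a singleton time dimension added (the filename below is hypothetical):

```python
import nibabel as nib
import numpy as np

# Hypothetical filename; replace with the actual mean-moco image.
fname = "sub-01_task-rest_desc-mocomean_bold.nii.gz"

img = nib.load(fname)

# Promote the 3D volume to 4D by appending a singleton time dimension,
# so the header reports a 4D image as the validator expects.
data_4d = np.expand_dims(np.asanyarray(img.dataobj), axis=3)
img_4d = nib.Nifti1Image(data_4d, img.affine, img.header)

nib.save(img_4d, fname)
```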
Tracking requirements:
cc: @MerveKaptan
@MerveKaptan any update on this? The earlier the data are on OpenNeuro, the better it will be for reproducing the training that @rohanbanerjee is currently doing. Right now it is very hard, non-transparent, and error-prone to reproduce the model training pipeline, so this is quite urgent.
Hi Julien, thanks a lot! Yes, I completely understand! Merve
Dear @jcohenadad and @rohanbanerjee, the initial version of our dataset is finally on OpenNeuro (except Barry's data)! You both have admin access!
@MerveKaptan great! Please add a link to the OpenNeuro dataset.
This is the link to the dataset: https://openneuro.org/datasets/ds005143
Hello @jcohenadad! I need your email to be able to share it with you, and when I use the associated email, I get the following error:
Let me write a ticket to the OpenNeuro team!
Yup, that's exactly the thing to do. In parallel, also try with my ORCID: 0000-0003-3662-9532.
Thank you! Unfortunately, they are asking for an email, but I will keep you posted as soon as I hear from the OpenNeuro team!
The data have been successfully uploaded to OpenNeuro and made public. Closing the issue.
For cross-referencing, here is the dataset URL pointing to the published version: https://openneuro.org/datasets/ds005143/versions/1.2.0
Use OpenNeuro to version-track the datasets used for specific training rounds and revisions. This doesn't need to be public: version tracking can be private, and we will make the dataset public after all the iterations are done.
Related to #1