Hello,
Thank you for this package; it seems very useful.
However, I have an integrated dataset with 150,000 cells, and R crashes when calculating the co-clustering frequencies. My computer has 128 GB of RAM, and my integrated dataset takes up 15.4 GB. Do you have any idea how I could overcome this issue?
Thank you in advance
Thanks for mentioning this - we have not yet optimized chooseR for very large data sets, but we will let you know when we update the code to support them. In the meantime, is there any substructure at the top level of your data set? If there are clearly discrete subgroups of cells at the top level, you could run chooseR separately on each of these subsets (see the sketch below) - usually, the subsets are also where there is more confusion about which parameter values are the most robust.
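For example, here is a minimal sketch of that subsetting approach, assuming your integrated object is a Seurat object named `seurat_obj` with the top-level grouping stored in a metadata column (called `top_level` here as a placeholder); the chooseR steps themselves are left as comments, since they depend on which version of the scripts you are running:

```r
library(Seurat)

# Split the integrated object by a top-level grouping stored in the
# metadata (the column name "top_level" is a placeholder)
subsets <- SplitObject(seurat_obj, split.by = "top_level")

# Run the chooseR workflow on each subset independently; each piece is
# far smaller than the full object, so the co-clustering matrix for a
# subset should fit in memory
for (name in names(subsets)) {
  obj <- subsets[[name]]
  # source() and run the chooseR scripts on `obj` here, writing
  # results to a subset-specific output directory
}
```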
Also, we have forked the development version of the code here: https://github.com/MenonLab/chooseR
We hope to have the updated, more memory-efficient version soon.