
Pheniqs on very large barcode spaces #39

Open
dbogdano opened this issue Apr 1, 2024 · 1 comment

Comments


dbogdano commented Apr 1, 2024

Hello,

Thank you for developing and maintaining Pheniqs; I'm excited to try it in a number of data processing workflows. Could you provide some pointers on how (or whether) Pheniqs's PAMLD decoder could be used for very large barcode spaces, on the order of 2-3 billion barcodes? I'm working with a dataset that has an 18-base barcode with a small number of non-permitted sequences (no homopolymers longer than 5 bases). This is more akin to a random UMI sequence than the smaller whitelists of barcodes used in the examples in this repo. The Pheniqs 2.0 paper describes the intractability of very large initial whitelists and mentions that other strategies should be considered for an initial first pass through the data to reduce the barcode space. Are there any approaches or tools that you have found to work well for these scenarios? Let me know if I can provide any more information about my particular use case.

Thanks,
Derek

Contributor

moonwatcher commented Apr 28, 2024

Hi Derek

PAMLD computes full Bayesian probabilities: for each observation it computes the conditional probability of every possible barcode sequence, and then computes the posterior from all of those conditionals and the priors. With a whitelist of 2-3 billion possible barcode sequences this will be both slow and pointless, because the conditional probabilities for the vast majority of candidates will be extremely small and contribute almost nothing to the posterior. To be honest, even creating a JSON configuration file with such a long list might be challenging and would require a significant amount of memory and load time.
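To illustrate the scaling problem, here is a minimal sketch of this kind of posterior computation (not the actual Pheniqs implementation; a simplified per-base error model with a fixed error rate is assumed for illustration):

```python
from math import prod

def conditional(observed, barcode, p_err):
    # P(observation | barcode): match with probability 1 - p_err,
    # mismatch to any one of the 3 other bases with probability p_err / 3.
    return prod((1 - p_err) if o == b else (p_err / 3)
                for o, b in zip(observed, barcode))

def posterior(observed, whitelist, priors, p_err=0.01):
    # One conditional is computed per whitelist entry, so the cost grows
    # linearly with the whitelist -- intractable at 2-3 billion entries.
    cond = {b: conditional(observed, b, p_err) for b in whitelist}
    evidence = sum(priors[b] * cond[b] for b in whitelist)
    return {b: priors[b] * cond[b] / evidence for b in whitelist}
```

Even in this toy version, the loop over the whitelist in `posterior` is the bottleneck: every observation touches every candidate barcode, even though almost all candidates contribute essentially nothing to the evidence term.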

The most realistic approach would be to narrow down your list. You can extract the relevant 18 bases from your data and do a basic `sort | uniq -c` to see which combinations actually occur. You can also run an initial MDD pass and use its results to compute priors for PAMLD. With a list this long, MDD will likely not allow any errors, since almost every sequence is a possible barcode; that means MDD will stop scanning once it hits the first match. That said, it might not be that different from a basic `sort | uniq`: since almost every sequence is allowed to be a barcode, there is not much room for "error correcting", and I am not sure a Bayesian approach even makes sense. If what you want is speed with no tolerance for errors, that would be best achieved with a tool that uses suffix trees (tries). I think this one is mentioned in the Pheniqs manuscript: https://academic.oup.com/bioinformatics/article/34/22/3924/5026649
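The first-pass tabulation described above can be sketched in Python as the equivalent of extracting the barcode and piping through `sort | uniq -c`. The file name, and the assumption that the barcode occupies the first 18 bases of each read, are placeholders for illustration:

```python
from collections import Counter
from itertools import islice

def barcode_counts(fastq_path, barcode_length=18):
    """Count observed barcodes, assumed to be the first
    `barcode_length` bases of each read in a FASTQ file."""
    counts = Counter()
    with open(fastq_path) as fh:
        while True:
            record = list(islice(fh, 4))  # FASTQ records are 4 lines
            if len(record) < 4:
                break
            seq = record[1].strip()       # line 2 of a record is the sequence
            counts[seq[:barcode_length]] += 1
    return counts
```

The resulting counts can then serve both purposes mentioned above: shrinking the whitelist to barcodes actually observed, and estimating priors from the relative frequencies.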
