estimate density ratio of large training set and test set #6
Comments
I haven't experimented with very large datasets. Let me know if you can get anything useful out of densratio_py. One point you should ensure is that you pick the same number of samples from both the training set and the test set; this means it's really the size of the test set that bounds how many samples you can take. Another point is that uLSIF/RuLSIF (the underlying algorithm densratio_py uses) was designed for change-point detection. If you're hoping to recover the actual densities of the training and test set, I'm not sure how useful it will be. Still, let me know!
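As a rough illustration of the equal-sample-size point, here is a minimal sketch. It assumes densratio_py's `densratio(x, y)` entry point and the `compute_density_ratio()` method on its result; the arrays are synthetic stand-ins for a real large dataset.

```python
# Minimal sketch: draw equal-sized random subsamples from a large training set
# and a smaller test set, then estimate p_train(x) / p_test(x) with densratio_py.
# The data below are synthetic stand-ins, not real application data.
import numpy as np
from densratio import densratio

rng = np.random.default_rng(0)

train = rng.normal(loc=0.0, scale=1.0, size=(1_000_000, 1))  # "large" training set
test = rng.normal(loc=0.5, scale=1.2, size=(5_000, 1))       # much smaller test set

# The test set bounds the sample size: take the same number from both sides.
n = min(len(train), len(test))
train_sample = train[rng.choice(len(train), size=n, replace=False)]
test_sample = test[rng.choice(len(test), size=n, replace=False)]

# uLSIF is the default; the result object can evaluate the ratio at new points.
result = densratio(train_sample, test_sample)
print(result.compute_density_ratio(test_sample[:5]))
```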
Dear Ameya, thank you very much for your reply. Do you have any suggestions? Thank you again.
I believe that if you take a sufficient number of samples from both distributions, not necessarily the entire training/test set, you might still get a good result with RuLSIF. However, how much is 'sufficient' really depends on the distributions themselves; only experimentation can tell you that. I would start by splitting an artificial dataset (i.e., one for which you know the underlying distribution) into training and test sets and seeing how many samples you need for a reasonable output. Thanks!
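For what it's worth, a sketch of that experiment might look like the following. The Gaussian distributions, sample sizes, and error metric are arbitrary choices for illustration, and the calls again assume the `densratio(x, y)` / `compute_density_ratio()` interface.

```python
# Sketch of the suggested experiment: use an artificial dataset whose true
# densities are known, then see how the estimate improves as sample size grows.
import numpy as np
from scipy.stats import norm
from densratio import densratio

rng = np.random.default_rng(42)

# Known "training" and "test" distributions (unit-variance Gaussians).
mu_train, mu_test = 0.0, 0.5
grid = np.linspace(-2.0, 2.0, 200).reshape(-1, 1)
true_ratio = (norm.pdf(grid, loc=mu_train) / norm.pdf(grid, loc=mu_test)).ravel()

for n in (100, 500, 2000):
    x = rng.normal(loc=mu_train, size=(n, 1))  # samples from the "training" density
    y = rng.normal(loc=mu_test, size=(n, 1))   # samples from the "test" density
    est_ratio = densratio(x, y).compute_density_ratio(grid)
    mae = np.mean(np.abs(est_ratio - true_ratio))
    print(f"n={n:5d}  mean abs error vs. true ratio: {mae:.3f}")
```

Running something like this for increasing n gives a rough feel for how many samples a comparable real dataset might need before the estimate stabilizes.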
Thanks a lot for your valuable suggestion! I will try the method you suggested. Thanks!
Hi,
Thank you for sharing this Python package for density ratio estimation.
In practical applications, training sets are often very large.
Is it possible to use this tool to estimate the density ratio of a large training set and a test set, for example a training set of 20 GB of data?