Issues about Training and Sampling #8
Comments
Hi,
Thank you for your response.
I agree that training SDMs typically requires larger datasets to generate high-quality synthetic images. In my work with the CoNSeP and GLySAC datasets, I was able to use a larger effective dataset than the 3D dataset with 30 samples you mentioned. This is because:
Furthermore, to enhance the quality of the generated data, I employed a different sampling strategy:
Hi!
Both the training and sampling processes are conditional processes.
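As a rough illustration (this is not the actual SDM code, and `class_emb`/`denoise` are hypothetical names), the point is that the label condition enters the denoiser in the same way during training and at every sampling step; the real SDM conditions on full semantic label maps rather than a single class index:

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_CLASSES = 5   # hypothetical number of semantic classes
EMB_DIM = 8

# Hypothetical class-embedding table standing in for the label-map encoder.
class_emb = rng.normal(size=(NUM_CLASSES, EMB_DIM))

def denoise(x, t, label):
    """Toy conditional denoiser: the class condition shifts the prediction."""
    cond = class_emb[label]
    return x - 0.1 * t * cond   # stand-in for a learned conditional network

# Training step (conditional): the model sees the condition with each sample.
x = rng.normal(size=EMB_DIM)
label = 3
pred_train = denoise(x, t=1.0, label=label)

# Sampling step (conditional): the SAME conditioning pathway is used,
# so the label maps supplied at sampling time must be valid conditions.
pred_sample = denoise(rng.normal(size=EMB_DIM), t=1.0, label=label)
```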
I'll provide a detailed reply by next Tuesday, as my servers are currently under inspection.
I'll investigate this further and provide a more comprehensive analysis as soon as possible. Thank you for your patience.
Thank you very much.
Hello, I would like to know which dataset you used to train the SDM.
During training, was it conditional or unconditional generation?
In the original SDM code, both the training and sampling commands take a num-classes parameter. How should I handle the case where the number of classes in the training dataset does not match the number of classes in the sampling dataset?
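One possible workaround, sketched below under assumptions (the id mapping in `CLASS_MAP` and the fallback-to-background choice are illustrative, not from the SDM code), is to remap the sampling dataset's label maps into the training dataset's class index space before sampling, so a single num-classes value stays valid throughout:

```python
import numpy as np

TRAIN_NUM_CLASSES = 5            # hypothetical class count used at training
CLASS_MAP = {0: 0, 1: 2, 2: 4}   # hypothetical: sampling id -> training id

def remap_label_map(label_map, class_map, fallback=0):
    """Remap a semantic label map so every id is a valid training class.

    Ids without an entry in class_map fall back to `fallback`
    (here: background).
    """
    out = np.full_like(label_map, fallback)
    for src, dst in class_map.items():
        out[label_map == src] = dst
    return out

lbl = np.array([[0, 1],
                [2, 3]])         # id 3 has no mapping -> mapped to fallback
remapped = remap_label_map(lbl, CLASS_MAP)
# every value in `remapped` now lies in [0, TRAIN_NUM_CLASSES)
```

With this, sampling can keep the training-time num-classes value, at the cost of collapsing unmapped classes into the fallback class.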
Thanks.