
Unconditional model generates okay quality of fake human voice but failed on music. #80

Open
piobmx opened this issue Oct 26, 2023 · 5 comments

Comments

@piobmx

piobmx commented Oct 26, 2023

Hi, I've been playing with this diffusion model library for a few days. It's great to have a library that lets ordinary users train on audio data with limited resources.

I have a problem regarding the training data and the output. I fed the unconditional model Mozilla's Common Voice dataset, using only one language (about 15k clips). I resampled the clips to 44.1 kHz and zero-padded any clip shorter than 2^18 samples to that length. The unconditional results were okay: I could at least tell it was a human speaking, although it was never actually intelligible.
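For reference, the fixed-length preprocessing described above can be sketched like this (`pad_or_trim` and the 1-second clip are illustrative names, not part of the library):

```python
import numpy as np

TARGET_LEN = 2 ** 18  # 262144 samples, roughly 5.9 s at 44.1 kHz

def pad_or_trim(wave, target_len=TARGET_LEN):
    """Zero-pad a mono waveform to target_len, or trim it if longer."""
    if len(wave) >= target_len:
        return wave[:target_len]
    return np.pad(wave, (0, target_len - len(wave)))

clip = np.random.randn(44_100)   # hypothetical 1-second clip at 44.1 kHz
fixed = pad_or_trim(clip)
print(fixed.shape)               # (262144,)
```

The same helper with `target_len=2 ** 17` would produce the shorter tensors used for the music experiment below.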

But when I replaced the training data with music (mostly solo piano, same sample rate but 2^17 samples per input tensor), the model did not generate outputs that sound like piano; in fact they were mostly noise.

I used the same configuration for each layer in both models, and tried lowering the downsampling factors and increasing the number of attention heads, but saw no significant difference. Any tips on why this happens?

@piobmx
Author

piobmx commented Oct 28, 2023

Weirdly, this improved somewhat after I switched to the default Adam optimizer with a 1e-1 learning rate and no other configuration.
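To make the remark above concrete, here is a minimal numpy reimplementation of a single Adam update (Kingma & Ba, 2015), with the default hyperparameters except the 1e-1 learning rate mentioned in the comment. This is only a sketch of what the optimizer does, not the library's training loop:

```python
import numpy as np

def adam_step(param, grad, m, v, t, lr=1e-1, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update; b1, b2, eps match the usual defaults,
    lr is set to 1e-1 as in the comment above."""
    m = b1 * m + (1 - b1) * grad          # first-moment estimate
    v = b2 * v + (1 - b2) * grad ** 2     # second-moment estimate
    m_hat = m / (1 - b1 ** t)             # bias correction
    v_hat = v / (1 - b2 ** t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v

# Toy demo: minimize f(x) = x^2 starting from x = 1.0
x, m, v = 1.0, 0.0, 0.0
for t in range(1, 51):
    x, m, v = adam_step(x, 2 * x, m, v, t)
```

Because Adam normalizes the gradient by its running magnitude, the effective step size is roughly the learning rate itself, which is why 1e-1 is unusually aggressive for most training runs.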

@0417keito

Sorry for the sudden question. I would like to know about the value of the loss: how did it converge? What was its initial value and how did it evolve?

@piobmx
Author

piobmx commented Dec 5, 2023

> Sorry for the sudden question. I would like to know about the value of the loss, how did the loss converge? What was the initial value of the loss and how did it evolve?

The initial value can depend on many factors, but the loss is supposed to drop like this:

[image: loss curve]

@0417keito

Thank you.

@YuZongNB

Hi, did you successfully generate piano music? I'm also training with speech data and piano music, and I end up producing samples that are close to white noise.
