auxiliary input #39

Open
james-wynne-dev opened this issue Jul 24, 2018 · 0 comments
Comments

@james-wynne-dev

Hi, I'm wondering if you could help me. I'm trying to build your speaker-dependent vocoder in TensorFlow, but I'm struggling to understand how the auxiliary input is fed to the network. Is it added in parallel (two parallel layers) to the sample values, with the outputs combined at a later layer? If you could point me in the direction of a textbook/article on auxiliary input/conditioning networks I would be eternally grateful; I've looked many times and I can't find anything that gives a general understanding of this.
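For what it's worth, in WaveNet-style vocoders the conditioning features are usually not a separate parallel branch merged late: inside each residual layer, the auxiliary features get their own linear (1×1 convolution) projection, and that projection is added to the dilated convolution's output on the samples before the gated tanh/sigmoid activation. A minimal NumPy sketch of that gated sum for a single timestep (all names like `Wf`, `Vf` are illustrative, not from this repo):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gated_activation(x, h, Wf, Wg, Vf, Vg):
    # x: features from the dilated conv over sample values
    # h: auxiliary (conditioning) features, e.g. upsampled acoustic frames
    # Each input gets its own projection; they are summed *inside* the
    # layer, then gated -- not concatenated or merged at a later layer.
    return np.tanh(Wf @ x + Vf @ h) * sigmoid(Wg @ x + Vg @ h)

rng = np.random.default_rng(0)
d = 4
x = rng.standard_normal(d)                  # sample-path features
h = rng.standard_normal(d)                  # auxiliary features
Wf, Wg, Vf, Vg = (rng.standard_normal((d, d)) for _ in range(4))

z = gated_activation(x, h, Wf, Wg, Vf, Vg)  # shape (4,), values in (-1, 1)
print(z.shape)
```

In a full model this happens in every residual block, so the auxiliary input influences all layers rather than being combined once at the output.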
