The .npy files in */test_wavs are generated with the MFA tool, but the phoneme sequence corresponding to each audio clip must be known first; a sketch of the conversion follows below.
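For reference, here is a minimal sketch of how such a duration file could be produced from an MFA TextGrid alignment. It assumes the `textgrid` package (`pip install textgrid`); the sample rate and hop size are placeholders and must match your feature-extraction config.

```python
# Sketch: convert an MFA TextGrid alignment into per-phoneme frame durations.
import numpy as np
import textgrid

SAMPLE_RATE = 24000   # assumption: must match your preprocessing config
HOP_SIZE = 300        # assumption: frames = seconds * SAMPLE_RATE / HOP_SIZE

def durations_from_textgrid(tg_path, out_path, tier_name="phones"):
    tg = textgrid.TextGrid.fromFile(tg_path)
    tier = tg.getFirst(tier_name)          # MFA writes a "phones" tier
    durations = []
    for interval in tier:
        seconds = interval.maxTime - interval.minTime
        durations.append(int(round(seconds * SAMPLE_RATE / HOP_SIZE)))
    np.save(out_path, np.array(durations, dtype=np.int32))
    return durations
```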
You are not limited to MFA; any tool that can predict phoneme durations can be used, such as the acoustic model of an ASR system.
The method above estimates the duration information of the reference audio accurately. For cloning, however, the duration information does not need to be that precise: a coarse manual estimate achieves the same effect. For example, with a spectrogram viewer or another audio annotation tool, phoneme durations can be estimated by eye and ear.
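As an illustration, coarse phoneme boundaries read off a spectrogram (e.g. in Praat or Audacity) can be converted to frame durations the same way. The boundary values below are made up, and the sample rate / hop size are again placeholder assumptions.

```python
# Sketch: turn hand-read phoneme boundaries (in seconds) into frame durations.
import numpy as np

SAMPLE_RATE = 24000   # assumption: must match feature extraction
HOP_SIZE = 300

# Hypothetical boundaries (seconds) for a short utterance, read by eye/ear.
boundaries = [0.00, 0.12, 0.25, 0.41, 0.60, 0.78]

frames = [int(round(b * SAMPLE_RATE / HOP_SIZE)) for b in boundaries]
durations = np.diff(frames).astype(np.int32)   # frame count per phoneme
np.save("ref_durations.npy", durations)
```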
The Style_Encoder in this model is equivalent to an audio frame encoder whose final output depends only on the content, with phoneme position information embedded in the result. Based on these temporal position encodings, the Style_Encoder can be used for a simple estimate of the phoneme durations of the reference audio. Better yet, this method does not require knowing the phoneme sequence corresponding to the audio.
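One way such an estimate could be implemented (an illustrative sketch, not the repository's actual method) is to place a phoneme boundary wherever the content embedding of adjacent Style_Encoder frames changes sharply, then count the frames in each segment. The threshold and the shape of `frame_embeddings` are assumptions.

```python
# Sketch: segment per-frame encoder outputs into pseudo-phoneme durations
# by thresholding the cosine similarity of adjacent frames.
import numpy as np

def estimate_durations(frame_embeddings, threshold=0.85):
    """frame_embeddings: (T, D) array of per-frame Style_Encoder outputs."""
    normed = frame_embeddings / np.linalg.norm(frame_embeddings, axis=1, keepdims=True)
    sim = np.sum(normed[:-1] * normed[1:], axis=1)   # cosine sim of neighbours
    boundaries = np.where(sim < threshold)[0] + 1    # frames where content changes
    segments = np.split(np.arange(len(frame_embeddings)), boundaries)
    return np.array([len(s) for s in segments if len(s) > 0], dtype=np.int32)
```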
See `One-Shot-Voice-Cloning/TensorFlowTTS/tensorflow_tts/models/moduls/core.py`, lines 700 to 705 at commit `6beec14`.
Originally posted by @CMsmartvoice in #3 (comment)