```
conda create -n rvc python=3.10.14
conda activate rvc
pip3 install -r requirements.txt
```
First, clone our repository:

```
git clone [email protected]:jjihwan/Voice-Cloning.git
```
Set your model name and create the `dataset/model_name` directory, e.g. `dataset/iu`:

```
mkdir -p dataset/iu # change iu to your own model name
```
- Download source musics into the `musics` folder (more than 4 songs recommended). Store them in the following structure:

  ```
  musics
  └── iu
      ├── boo.mp3 # names are not important
      ├── foo.mp3
      └── moo.mp3
      ...
  ```
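The "more than 4 songs" recommendation can be sanity-checked before training with a small helper. This function is our own sketch, not part of the repository:

```python
from pathlib import Path

def count_songs(folder):
    """Count audio files (mp3/wav/flac) directly inside the given folder."""
    exts = {".mp3", ".wav", ".flac"}
    return sum(1 for p in Path(folder).iterdir() if p.suffix.lower() in exts)

# Usage, assuming the structure above:
#   if count_songs("musics/iu") < 4:
#       print("Consider adding more songs for a better model.")
```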
- Download the pretrained vocal-remover model from the original repository:

  ```
  wget https://github.com/tsurumeso/vocal-remover/releases/download/v5.1.0/vocal-remover-v5.1.0.zip
  unzip vocal-remover-v5.1.0.zip
  mv vocal-remover/models .
  ```
- Use vocal-remover to separate the vocals and instruments:

  ```
  python3 vocal_remover.py --input_dir musics/iu --output_dir dataset/iu --gpu 0
  ```
Train your own model:

```
python3 RVC_train.py --model_name iu --save_frequency 50 --epochs 200
```
- Prepare the target musics in the `target_musics` folder:

  ```
  target_musics
  ├── boo.mp3 # names are not important
  ├── foo.mp3
  └── moo.mp3
  ...
  ```
- Run vocal-remover:

  ```
  python3 vocal_remover.py --input_dir target_musics --output_dir target_dataset --gpu 0
  ```
- Inference with your own model:

  ```
  python3 RVC_inference.py --model_name iu --target_dir target_dataset/vocals
  ```

  You might have to modify L17 to adjust the key if the model's key and the target's key are different. You can find the results in the `results` folder.
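For the key adjustment mentioned above: pitch offsets in RVC-style pipelines are commonly specified in semitones, and each semitone corresponds to a frequency ratio of 2^(1/12). A quick sketch of that relationship (the helper name is ours, for illustration only):

```python
def semitone_ratio(n):
    """Frequency ratio for a pitch shift of n semitones (12 semitones = 1 octave)."""
    return 2.0 ** (n / 12.0)

# Shifting up one octave (+12 semitones) doubles the frequency;
# shifting down one octave (-12) halves it.
```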
- Compose with instruments:

  ```
  python3 compose_song.py --model_name iu --target_dir target_dataset/instruments
  ```
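`compose_song.py` presumably overlays the converted vocals onto the separated instrumental track. As a rough illustration of such a mixing step on normalized mono samples (this sketch is ours, not the script's actual implementation):

```python
def mix_tracks(vocals, instruments):
    """Overlay two waveforms sample-by-sample, clipping the sum to [-1.0, 1.0]."""
    # zip truncates to the shorter track, keeping the two streams aligned
    return [max(-1.0, min(1.0, v + i)) for v, i in zip(vocals, instruments)]

# Quiet samples add; loud ones are clipped instead of overflowing:
print(mix_tracks([0.25, 0.9], [0.25, 0.9]))  # [0.5, 1.0]
```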
Run the `run.sh` file after preparing the training datasets (in 2.1.1) and target musics (in 2.3.1).
You can train the model and run inference at once with the following command! 🔥🔥🔥

```
sh run.sh iu # change iu to your own model name
```
Our code is built on two nice open-source projects, RVC-project and Vocal-Remover. Thanks to the authors!