# Codename Fork
Last update: July 4, 2025

# Introduction
This fork adds features not found in other forks, along with changes intended to improve output quality.
Codename's fork is aimed at advanced users, so if you don't know much about RVC it's better to use Applio.
This guide covers only the new features, since everything else is covered in the Applio guide.
# Are RVC Models Safe?
RVC models are PyTorch models, and PyTorch is a Python library used for AI. PyTorch serializes models with Python's pickle module, converting the model to a file. Since pickle can execute arbitrary code when a model is loaded, it could in theory be used to deliver malware, but this fork has a built-in safeguard that blocks code execution when loading a model. HuggingFace also runs a security scanner that checks for unsafe pickle exploits and additionally uses ClamAV to scan for dangerous files.
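To see why loading an untrusted pickle is risky and how a loader can refuse to execute code, here is a minimal sketch of an allow-list unpickler in plain Python. The class name and allow-list are illustrative, not this fork's actual implementation; PyTorch's `torch.load(..., weights_only=True)` applies the same idea.

```python
import io
import pickle
from collections import OrderedDict

class SafeUnpickler(pickle.Unpickler):
    """Illustrative allow-list unpickler: only globals on the allow-list may
    be resolved, so a pickle that references e.g. builtins.eval or os.system
    is rejected before any code can run."""
    ALLOWED = {("collections", "OrderedDict")}  # hypothetical allow-list

    def find_class(self, module, name):
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

def safe_loads(data: bytes):
    """Deserialize bytes with the restricted unpickler above."""
    return SafeUnpickler(io.BytesIO(data)).load()

# A harmless payload (an OrderedDict, like a model's state_dict) loads fine:
ok = safe_loads(pickle.dumps(OrderedDict(weight=[0.1, 0.2])))

# A payload referencing a dangerous global is rejected before it can run:
evil = pickle.dumps(eval)  # pickles a *reference* to builtins.eval
try:
    safe_loads(evil)
except pickle.UnpicklingError as err:
    blocked = str(err)
```

This is why a pickle scanner can flag files without running them: the dangerous part is a global reference that is visible in the file itself.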
# Pros & Cons
Whether these count as pros or cons depends on your needs.
- All of the pros of Applio
- A warmup phase option
- Multiple optimizer choices
- A Mel similarity metric
- A SoX resampler
- Hold-out validation
- TF32 support
- More complex features overall
# Downloading
Go to the GitHub repo here, then open the Releases tab.
Download the zip file, then extract it into your C drive.

- Go into the Codename fork folder and run the run-install.bat file; once it finishes, run go-fork.bat.
# New Features
# MRF HiFi-GAN & RefineGAN:
- In the training section you are given the option to choose your vocoder.
- HiFi-GAN: the default vocoder for RVC.
- MRF HiFi-GAN: a version of HiFi-GAN with MRF instead of MPD and new loss functions. It has higher fidelity but only works with this fork and the latest version of Applio.
- RefineGAN: an entirely new GAN that is still experimental. It only works with this fork and Applio.

# Warmup Phase:
In the training section there is an option to enable a warmup phase and a slider to choose how long it lasts. Do not use this with Ranger21 or RAdam, since those optimizers handle warmup on their own.

- During the warmup phase the learning rate (lr) is increased linearly for a set number of epochs; this can prevent large, destabilizing updates in the early stages of training.
- There isn't much testing on what a warmup does in RVC, so expect varying results.
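As a sketch, a linear warmup just scales the base learning rate by how far into the warmup you are. The function name and values below are illustrative, not the fork's exact schedule:

```python
def warmup_lr(base_lr: float, epoch: int, warmup_epochs: int) -> float:
    """Linearly ramp the learning rate up to base_lr over the first
    warmup_epochs epochs, then hold it at base_lr."""
    if epoch < warmup_epochs:
        return base_lr * (epoch + 1) / warmup_epochs
    return base_lr

# e.g. with base_lr=1e-4 and a 5-epoch warmup, epochs 0..4 use
# 2e-5, 4e-5, 6e-5, 8e-5, 1e-4, and later epochs stay at 1e-4.
```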
# Multiple Optimizers:
This fork gives you the option to choose between three optimizers.
- AdamW
- RAdam
- Ranger21
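The optimizers differ mainly in how they turn gradients into parameter updates. As one illustration, here is a single AdamW step on a scalar parameter in plain Python (hyperparameter defaults are the commonly used ones, not necessarily what this fork uses). AdamW's distinguishing trait is that weight decay is applied to the parameter directly instead of being mixed into the gradient:

```python
import math

def adamw_step(p, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8, wd=1e-2):
    """One AdamW update on a scalar parameter p with gradient g.
    m and v are the running first/second moment estimates and t is the
    step count (starting at 1). The weight decay term wd * p is applied
    to p directly -- the 'decoupled' decay that distinguishes AdamW."""
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    m_hat = m / (1 - b1 ** t)   # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)   # bias-corrected second moment
    p = p - lr * (m_hat / (math.sqrt(v_hat) + eps) + wd * p)
    return p, m, v

# First step from p=1.0 with gradient 0.5:
p, m, v = adamw_step(1.0, 0.5, 0.0, 0.0, t=1)
```

RAdam adds a rectification term that stabilizes the early steps, and Ranger21 layers several further techniques (including its own warmup) on top of an Adam-style core, which is why the warmup option should not be combined with them.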

# Custom LR for gen and disc:
In the training section, under advanced, there is an option to set a custom learning rate for both the generator and the discriminator.
- This controls how quickly or slowly each of them learns.
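The effect is easy to see with a toy gradient-descent step: with separate learning rates, the discriminator's parameters move proportionally further per update than the generator's. The values below are illustrative only, not recommended settings:

```python
def sgd_step(param: float, grad: float, lr: float) -> float:
    """One plain gradient-descent update: move against the gradient."""
    return param - lr * grad

gen_lr, disc_lr = 1e-4, 2e-4  # hypothetical: disc learns twice as fast

gen_p = sgd_step(1.0, 0.5, gen_lr)    # 1.0 - 1e-4 * 0.5 = 0.99995
disc_p = sgd_step(1.0, 0.5, disc_lr)  # 1.0 - 2e-4 * 0.5 = 0.9999
```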
# TF32
TF32 (TensorFloat-32) is a precision mode that can be used instead of FP32 or BF16 and can give a speed boost. It is only supported on Ampere or newer NVIDIA GPUs.
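The speedup comes from reduced precision: TF32 keeps FP32's 8-bit exponent (so the same numeric range) but truncates the mantissa from 23 bits to 10. This pure-Python sketch of mantissa rounding shows the relative precision given up; actual Ampere hardware applies TF32 inside tensor-core matmul/conv operations rather than to stored values:

```python
import math

def round_mantissa(x: float, bits: int) -> float:
    """Round x to the nearest float with `bits` explicit mantissa bits."""
    if x == 0.0:
        return x
    m, e = math.frexp(x)       # x = m * 2**e with 0.5 <= |m| < 1
    scale = 2.0 ** (bits + 1)  # +1 accounts for the implicit leading bit
    return math.ldexp(round(m * scale) / scale, e)

x = math.pi
fp32_err = abs(round_mantissa(x, 23) - x) / x  # FP32-like: tiny error
tf32_err = abs(round_mantissa(x, 10) - x) / x  # TF32-like: noticeably larger
```

In PyTorch, TF32 for matmuls is toggled with `torch.backends.cuda.matmul.allow_tf32 = True` (and `torch.backends.cudnn.allow_tf32` for convolutions).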
# Upcoming Features:
- Ability to delay / give a head start to the generator or discriminator.
- Ability to choose lr_decay from the UI.
- And more...