Releases: macrocosm-os/pretraining
Release 2.1.5
- Fixes the CUDA device-side assert issue affecting all validators. All miner evals now run in separate subprocesses.
- Also updates the pretrain API to greatly simplify the interface, including a new get_repo() API that returns the Hugging Face link to a miner's model. This update is optional for Miners.
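The get_repo() helper mentioned above resolves a miner's advertised model to its Hugging Face link. As a minimal sketch of what such a helper returns, here is a hypothetical stand-in that only builds the URL (the real API additionally reads the miner's metadata from the chain; the function name and signature here are assumptions for illustration):

```python
# Hypothetical sketch, not the actual pretrain API: given the namespace
# and model name a miner advertises on chain, build the Hugging Face link.

def get_repo_url(namespace: str, model_name: str) -> str:
    """Return the Hugging Face link for a miner's advertised model."""
    return f"https://huggingface.co/{namespace}/{model_name}"
```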
Release 2.1.4
- Increases the maximum number of model parameters to 186M
- Adds a read-your-writes consistency check to metadata writes on the chain
Hotfix 2.1.3
- Fixes a validator issue where a bad model raised an exception in the evaluation loop, aborting the full eval loop.
- Validators perform a full evaluation after upgrades.
Hotfix 2.1.2
- Logging improvements.
- Symlink fixes to unblock the cleanup thread.
Hotfix 2.1.1
- Ensures newly downloaded models are added to the upcoming evaluation loop.
Release 2.1
Release Notes
Core Subnet improvements:
- Miners now store models on Hugging Face and advertise their current model via metadata on the chain.
- Validators detect and pull new models soon after they're published; previously this took up to a few days.
- The subnet now supports any AutoModelForCausalLM model with fewer than 122,268,040 parameters. We will work to increase this limit in the future.
- Miner rewards are now distributed more heavily to the top-performing Miner.
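The parameter limit above implies a validator-side eligibility check. A minimal sketch of that check, assuming eligibility is simply "total parameter count under the limit" (the constant matches the limit stated for this release; the function names are illustrative, not the subnet's actual code):

```python
# Sketch of a parameter-count eligibility check (assumed logic).
# With a loaded transformers model, the count would be
# sum(p.numel() for p in model.parameters()); here we count from
# tensor shapes so the example is self-contained.

MAX_PARAMETERS = 122_268_040  # limit stated for release 2.1

def count_parameters(shapes) -> int:
    """Total number of parameters across a list of tensor shapes."""
    total = 0
    for shape in shapes:
        n = 1
        for dim in shape:
            n *= dim
        total += n
    return total

def is_eligible(shapes) -> bool:
    """A model qualifies only if it is under the parameter limit."""
    return count_parameters(shapes) < MAX_PARAMETERS
```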
New Features
- Auto-update script for validators!
- A new pretrain API to download, save, and publish models
Misc
- The validator is more resilient across restarts
- Significant test coverage was added across the codebase.