
Releases: macrocosm-os/pretraining

Release 2.1.5

02 Feb 07:02
4402b91
  • Fixes the CUDA device-side assert issue affecting all validators. All miner evaluations now run in separate subprocesses.
  • Also updates the pretrain API to greatly simplify the interface, including a new get_repo() API that returns the Hugging Face link to a miner's model. This update is optional for Miners.
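Running each miner evaluation in its own subprocess means a fatal CUDA device-side assert only kills that child process, not the validator. A minimal sketch of the pattern, with illustrative function and model names rather than the subnet's actual code:

```python
import multiprocessing as mp


def _eval_model(model_name, results):
    # Illustrative stand-in for the real miner evaluation, which could
    # previously crash the whole validator on a CUDA device-side assert.
    if model_name == "bad-model":
        raise RuntimeError("CUDA error: device-side assert triggered")
    results.put((model_name, 0.42))  # placeholder (model, loss) result


def eval_in_subprocess(model_name, timeout=60.0):
    """Run one eval in a child process so a crash cannot take down the validator."""
    results = mp.Queue()
    proc = mp.Process(target=_eval_model, args=(model_name, results))
    proc.start()
    proc.join(timeout)
    if proc.is_alive():      # hung eval: kill the child and move on
        proc.terminate()
        proc.join()
    if proc.exitcode == 0:
        return results.get(timeout=5)
    return None              # eval crashed; this model gets no score
```

The validator process only ever sees a nonzero exit code from a crashed child, so the remaining models are still scored.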

Release 2.1.4

17 Jan 02:47
06eecdd
  • Increases the maximum number of model parameters to 186M
  • Adds a read-my-writes check for metadata writes to the chain
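A read-my-writes check re-reads the metadata just written and retries until the chain reflects it. A minimal sketch of the idea; the `store` interface here is hypothetical, not the subnet's actual chain client:

```python
import time


def write_with_readback(store, key, value, retries=3, delay=1.0):
    """Write metadata, then read it back to confirm the chain accepted it."""
    for _ in range(retries):
        store.write(key, value)       # hypothetical chain-write call
        if store.read(key) == value:  # read-my-writes verification
            return True
        time.sleep(delay)             # give the chain time to catch up
    return False
```

If the read-back never matches within the retry budget, the caller learns the write was lost instead of silently advertising stale metadata.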

Hotfix 2.1.3

15 Jan 04:18
2300785
  • Fixes a validator issue where a bad model raising an exception in the evaluation loop would fail the entire loop.
  • Validators perform a full evaluation after upgrading.
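The fix amounts to isolating each model's evaluation so one exception cannot abort the rest of the loop. Sketched below with illustrative names, not the subnet's actual code:

```python
def run_eval_loop(models, evaluate):
    """Score every model; a bad model is skipped instead of failing the loop."""
    scores = {}
    for model in models:
        try:
            scores[model] = evaluate(model)
        except Exception as err:
            # Previously an exception here propagated and aborted the full
            # eval loop; now the bad model simply receives no score.
            print(f"skipping {model}: {err}")
            scores[model] = None
    return scores
```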

Hotfix 2.1.2

09 Jan 03:34
88ee418

  • Logging improvements.
  • Symlink fixes to unblock the cleanup thread.

Hotfix 2.1.1

05 Jan 05:54
9cb69dc

Fixes newly downloaded models not being added to the upcoming evaluation loop.

Release 2.1

03 Jan 03:34
972950a

Release Notes

Core Subnet improvements:

  1. Miners now store models in Hugging Face and advertise their current model via metadata on the chain.
  2. Validators will detect and pull new models soon after they're published. Previously, this took up to a few days.
  3. The subnet now supports any AutoModelForCausalLM model with fewer than 122,268,040 parameters. We will work to increase this limit in the future.
  4. Miner rewards are now distributed more heavily to the top-performing Miner.
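Miners can check the parameter cap before publishing by summing the sizes of their weight tensors. A sketch using plain shape tuples; the limit is the one quoted above, while the helper functions are illustrative:

```python
import math

MAX_PARAMETERS = 122_268_040  # current subnet limit


def count_parameters(shapes):
    """Total parameter count for a model given its weight-tensor shapes."""
    return sum(math.prod(shape) for shape in shapes)


def fits_subnet_limit(shapes):
    """True if a model with these tensor shapes is under the subnet cap."""
    return count_parameters(shapes) < MAX_PARAMETERS
```

For example, a single GPT-2-style token embedding of shape (50257, 768) alone accounts for roughly 38.6M parameters, so the embedding tables dominate the budget at this scale.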

New Features

  1. Auto-update script for validators!
  2. A new pretrain API to download, save, and publish models

Misc

  1. The validator is more resilient across restarts.
  2. Significant test coverage was added across the codebase.