Miners train locally and periodically publish their best model to Hugging Face, committing that model's metadata to the Bittensor chain.
Miners can only have one model associated with them on the chain for evaluation by validators at a time.
Communication between a miner and a validator happens asynchronously through the chain, so miners do not need to be running continuously. Validators use whichever metadata a miner most recently published to know which model to download from Hugging Face.
Miners will need enough disk space to store their model as they work on it. Each uploaded model (as of Jan 1st, 2024) may not be more than 1 GB. It is recommended to have at least 25 GB of disk space.
Miners will need enough processing power to train their model. The device the model is trained on should be a large GPU with at least 20 GB of VRAM.
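If you want to sanity-check a machine against these recommendations, a short script like the one below can help (a sketch using PyTorch and the Python standard library; the thresholds simply restate the guidance above and are not enforced by the subnet):
import shutil
import torch
# Free disk space on the filesystem backing the current directory.
free_gb = shutil.disk_usage(".").free / 1e9
print(f"Free disk: {free_gb:.1f} GB (at least 25 GB recommended)")
# Total VRAM on the first CUDA device, if one is present.
if torch.cuda.is_available():
    vram_gb = torch.cuda.get_device_properties(0).total_memory / 1e9
    print(f"GPU VRAM: {vram_gb:.1f} GB (at least 20 GB recommended)")
else:
    print("No CUDA GPU detected.")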
- Get a Hugging Face Account:
Miners and validators use Hugging Face to share model state information. Miners upload to Hugging Face and therefore must obtain an account along with a user access token, which can be found by following the instructions here.
Make sure that any repo you create for uploading is public so that the validators can download from it for evaluation.
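If you want to create the public repo ahead of time, one convenient option is the huggingface_hub client (a sketch; the repo name is a placeholder and the token value should be your own):
from huggingface_hub import HfApi
# Use the same user access token you will place in the .env file later.
api = HfApi(token="YOUR_HF_ACCESS_TOKEN")
print(api.whoami()["name"])  # raises an error if the token is invalid
# private=False keeps the repo public so validators can download from it;
# exist_ok=True makes the call safe to re-run.
api.create_repo(repo_id="my-username/my-project", private=False, exist_ok=True)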
- Clone the repo
git clone https://github.com/RaoFoundation/pretraining.git
- Set up your Python virtual environment or Conda environment.
- Install the requirements. From your virtual environment, run
cd pretraining
python -m pip install -e .
- Make sure you've created a Wallet and registered a hotkey.
- (Optional) Run a Subtensor instance:
Your node will run better if you are connecting to a local Bittensor chain entrypoint node rather than using Opentensor's.
We recommend running a local node as follows and passing the --subtensor.network local flag to your running miners/validators (an example command follows the install steps below).
To install and run a local subtensor node, run the commands below with Docker and Docker Compose already installed.
git clone https://github.com/opentensor/subtensor.git
cd subtensor
docker compose up --detach
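For example, once the local node is up, you can point the miner at it by appending the flag to the usual start command (wallet and repo values are placeholders; the miner flags themselves are described in the mining section below):
python neurons/miner.py --wallet.name coldkey --wallet.hotkey hotkey --hf_repo_id my-username/my-project --avg_loss_upload_threshold YOUR_THRESHOLD --subtensor.network local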
The mining script uploads a model to Hugging Face which will be evaluated by validators.
See the Validator Pseudocode for more information on how the evaluation occurs.
The Miner requires a .env file with your Hugging Face access token in order to upload models.
Create a .env file in the pretraining directory and add the following to it:
HF_ACCESS_TOKEN="YOUR_HF_ACCESS_TOKEN"
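For reference, a .env file like the one above can be read in Python roughly as follows (a sketch assuming the python-dotenv package; the miner loads the token itself, so you only need the file to exist):
import os
from dotenv import load_dotenv
load_dotenv()  # reads the .env file from the current working directory
token = os.getenv("HF_ACCESS_TOKEN")
assert token, "HF_ACCESS_TOKEN is missing from your .env file"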
To start your miner, the most basic command is
python neurons/miner.py --wallet.name coldkey --wallet.hotkey hotkey --hf_repo_id my-username/my-project --avg_loss_upload_threshold YOUR_THRESHOLD
- --wallet.name: should be the name of the coldkey that contains the hotkey your miner is registered with.
- --wallet.hotkey: should be the name of the hotkey that your miner is registered with.
- --hf_repo_id: should be the namespace/model_name that matches the Hugging Face repo you want to upload to. Must be public so that the validators can download from it.
- --avg_loss_upload_threshold: should be the minimum average loss before you want your miner to upload the model.
The Miner offers some flags to customize properties, such as how to train the model and which Hugging Face repo to upload to.
You can view the full set of flags by running
python ./neurons/miner.py -h
Some flags you may find useful:
- --offline: when set you can run the miner without being registered and it will not attempt to upload the model.
- --wandb_entity + --wandb_project: when both flags are set the miner will log its training to the provided wandb project.
- --device: by default the miner will use your GPU, but you can specify with this flag if you have multiple.
- --load_best: when set you will download and train the model from the current best miner on the network.
- --load_uid: when passing a uid you will download and train the model from the matching miner on the network.
- --load_model_dir: the path to a local model directory [saved via Hugging Face API].
- --load_model: the path to a safetensors file [not necessarily saved from Hugging Face API].
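For example, to train without being registered on a second GPU while logging to wandb (entity and project names are placeholders; other flags such as the repo and threshold options can be added as needed):
python neurons/miner.py --offline --device cuda:1 --wandb_entity my-entity --wandb_project my-project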
In some cases you may have failed to upload a model or wish to upload a model without further training.
Due to rate limiting by the Bittensor chain you may only upload a model every 20 minutes.
You can manually upload with the following command:
python scripts/upload_model.py --load_model_dir <path to model> --hf_repo_id my-username/my-project --wallet.name coldkey --wallet.hotkey hotkey
As of Jan 1st, 2024 the subnet works with any model supported by AutoModelForCausalLM subject to the following constraints:
- Has less than 122268040 parameters.
- Total size of the repo is less than 1 Gigabyte.
- The model's sequence_length parameter must be 1024 tokens.
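One rough way to check a local model against these constraints before uploading is sketched below (assumptions: the sequence length is exposed as config.max_position_embeddings, which varies by architecture, and the local directory mirrors what you will push to the repo):
import os
from transformers import AutoModelForCausalLM
model_dir = "model-foo/"  # placeholder path to a locally saved model
model = AutoModelForCausalLM.from_pretrained(model_dir)
# Parameter count must stay under the subnet limit.
n_params = sum(p.numel() for p in model.parameters())
print(f"Parameters: {n_params} (must be < 122268040)")
# Approximate the repo size by summing the files saved locally.
repo_bytes = sum(
    os.path.getsize(os.path.join(root, name))
    for root, _, names in os.walk(model_dir)
    for name in names
)
print(f"Size on disk: {repo_bytes / 1e9:.2f} GB (repo must be < 1 GB)")
# Sequence length (assumption: exposed as max_position_embeddings).
seq_len = getattr(model.config, "max_position_embeddings", None)
print(f"Sequence length: {seq_len} (must be 1024)")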
The pretrain/mining.py file has several methods that you may find useful. See the example below.
import pretrain as pt
import bittensor as bt
from transformers import PreTrainedModel
config = bt.config(...)
wallet = bt.wallet()
metagraph = bt.metagraph(netuid=9)
actions = pt.mining.actions.Actions.create(config, wallet)
# Load a model from another miner.
model: PreTrainedModel = actions.load_remote_model(uid=123, metagraph=metagraph, download_dir="mydir")
# Save the model to local file.
actions.save(model, "model-foo/")
# Load the model from disk.
actions.load_local_model("model-foo/")
# Publish the model for validator evaluation.
actions.push(model)