- Clone https://github.com/BASALT-2022-Karlsruhe/ka-basalt-2022-datadownloader somewhere outside of this project, for example to a shared folder that everyone on your server can access, and use it to download the BASALT data:
  - Clone: `git clone git@github.com:BASALT-2022-Karlsruhe/ka-basalt-2022-datadownloader.git`
  - Move into the directory: `cd ka-basalt-2022-datadownloader`
  - Create an `.env` file, adjust the number of samples you'd like to download (see its ReadMe.md), and run `run.sh`.
--> You should end up with a volume containing the downloaded demonstration data
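To confirm the download actually produced a volume, you can list and inspect Docker volumes (generic Docker commands; the exact volume name comes from the datadownloader's compose setup, so it is a placeholder here):

```bash
# List all Docker volumes and look for the demonstration-data volume
docker volume ls

# Inspect it to see where the data is stored on the host
# (<volume-name> is a placeholder for the name shown by `docker volume ls`)
docker volume inspect <volume-name>
```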
- Clone this project (ka-basalt-2022) if you have not already, then move back into it:
  ```
  cd ..
  cd ka-basalt-2022
  ```
- Create an `.env` file and adjust the parameters:
  ```
  NAME=<NAME>_<Goal> # e.g. kulbach_baseline
  VERSION='0_0_1' # Version of your experiments
  MODELS_ROOT='/home/shared/BASALT/models' # Folder where you expect and save your models
  PORT=9898
  PYTHONUNBUFFERED=1
  DATA_ROOT=data_wombat
  GIT_ACCESS_TOKEN=YOUR_TOKEN_HERE_123
  ```
  where `DATA_ROOT` is either `data_wombat` or `data`:
  - `data_wombat`: loads data from the volume on the mounted shared wombat-server folder
  - `data`: loads data from the volume on the host server (Bison)
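  To check that these values are actually picked up before starting anything, you can let Compose render the fully resolved configuration (a generic Compose feature; use `docker-compose config` on Compose v1, and add `-f <compose-file>` if the file name in this repo is not one Compose picks up by default):

  ```bash
  # Print the compose configuration with all variables from .env substituted
  docker compose config
  ```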
- Build and start the Docker container: `sh run.sh`
- This starts `/bin/bash` in the container. From there you can now start e.g. `train.py` to train your agent (see the example below).
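  For example, inside the container shell (a minimal sketch; `train.py` is referenced throughout this README, but any command-line arguments it takes are not documented here):

  ```bash
  # Inside the container's bash shell: start training
  python train.py
  ```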
- To be able to specify different GPUs for e.g. `train.py`, change the gpu parameter within the `docker_compose.yaml` (DO NOT COMMIT CHANGES WITHIN THIS FILE!) to the graphics card you'd like to use.
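  If you only want to pick a GPU for a single run without touching the compose file, restricting the devices visible to the process is a generic alternative (standard CUDA environment variable, not something specific to this repo):

  ```bash
  # Inside the container: make only GPU 1 visible to the training process
  CUDA_VISIBLE_DEVICES=1 python train.py
  ```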
- To open another shell in the running container as root: `docker exec -it --user root basalt_container_${NAME}_${GOAL} /bin/bash`
- In your `docker-compose.yaml` (or `docker-compose.override.yaml`), if you change the entrypoint to `entrypoint: "python train.py"` and then just start `run.sh`, the training process starts directly and you can follow the output via `docker logs -f CONTAINER_NAME`.
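  With the container naming scheme used above, following the logs looks roughly like this (assuming `NAME` and `GOAL` are set as in your `.env`):

  ```bash
  # Follow the output of the detached training container
  docker logs -f basalt_container_${NAME}_${GOAL}
  ```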
[Official Tutorial](https://github.com/minerllabs/basalt_2022_competition_submission_template/blob/main/README.md)
- Go to https://gitlab.aicrowd.com/, navigate to "Preferences" -> "SSH Keys" and add an SSH key to your profile.
- Create a private repo.
- Add it as a remote via `git remote add aicrowd git@gitlab.aicrowd.com:<user>/<repo>.git`.
- Modify the `aicrowd.json` file. Use `"debug": true` when testing the submission process.
- Open bash in the docker container.
- Run `git lfs track "train/*.weights"`.
- Check if the model weights (and other large files you want to push) are marked with `Git LFS objects to be committed` when calling `git lfs status`.
- If the model weights are not tracked correctly, run `git lfs migrate info --everything --include="train/*.weights"` followed by `git add --renormalize .` and check again.
- Commit.
- Push the branch you want to submit via `git push aicrowd <branch>`.
- Create a git tag with `git tag -am "submission-<version>" submission-<version>`.
- Push the tag with `git push aicrowd submission-<version>`.
- Check the status of your submission in the issues section of the repository.
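Put together, one submission round-trip looks roughly like this (a sketch assuming the branch is `main`, the weights live under `train/`, and the version is `0_0_1`; adapt all three to your setup):

```bash
# Track model weights with Git LFS and verify they are picked up
git lfs track "train/*.weights"
git add .gitattributes train/
git lfs status

# Commit and push the branch to the AIcrowd remote
git commit -m "Add trained model weights"
git push aicrowd main

# Tag the submission and push the tag
git tag -am "submission-0_0_1" submission-0_0_1
git push aicrowd submission-0_0_1
```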
```bash
# Install dependency packages
sudo apt -qq update && xargs -a apt.txt sudo apt -qq install -y --no-install-recommends \
 && sudo rm -rf /var/cache/*

# Create conda environment
conda env create -n basalt -f environment.yml --prune

# Activate environment
conda activate basalt

# Test whether your code works
python <your-script>.py
```