The official implementation by VIPS4Lab @ UNIVR for the 2022 ACM/IEEE TinyML Design Contest at ICCAD.
| Model | F-1 | F-B | SEN | SPE | BAC | ACC | PPV | NPV | Mn | Ln | Score |
|---|---|---|---|---|---|---|---|---|---|---|---|
| TinyModel | 0.955 | 0.979 | 0.995 | 0.934 | 0.964 | 0.960 | 0.917 | 0.996 | 27.11 | 4.54 | 135.75 |
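For reference, the detection columns above follow the usual confusion-matrix definitions. The sketch below is ours, not the challenge scoring code, and the default `beta` value for the F-B column is only an assumption:

```python
def detection_metrics(tp, fp, tn, fn, beta=2.0):
    """Compute the detection metrics in the table above from confusion-matrix counts."""
    sen = tp / (tp + fn)                           # sensitivity (recall)
    spe = tn / (tn + fp)                           # specificity
    ppv = tp / (tp + fp)                           # positive predictive value (precision)
    npv = tn / (tn + fn)                           # negative predictive value
    acc = (tp + tn) / (tp + tn + fp + fn)          # accuracy
    bac = (sen + spe) / 2.0                        # balanced accuracy
    f1 = 2.0 * ppv * sen / (ppv + sen)             # F-1 score
    fb = (1 + beta ** 2) * ppv * sen / (beta ** 2 * ppv + sen)  # F-beta score
    return {"F-1": f1, "F-B": fb, "SEN": sen, "SPE": spe,
            "BAC": bac, "ACC": acc, "PPV": ppv, "NPV": npv}
```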
The repository is organized as follows:

- In the root folder, the main programs can be found:
  - `train.py`, used to train our network.
  - `evaluate.py`, used to evaluate our network and calculate the metrics.
  - `convert.py`, used to convert a given `<model_name>.pkl` to a `network.onnx` file.
- The `data` folder contains the `.csv` files listing the train and test splits of the dataset given by the challenge organizers.
- The `datasets` folder includes `iegm.py`, the Python definition of the dataset used in this challenge.
- The `models` folder includes `tinyml_univr.py`, the Python definition of the PyTorch model created for this challenge, named `TinyModel`.
- The `utils` folder contains different definitions of functions used throughout our training/testing procedures, including metrics calculation, logging and data processing.
- The `X-CUBE-AI-code` folder contains the C code to compile the network and run the evaluation on board, as specified by the challenge organizers.
- The `checkpoints` folder will be used to store the model checkpoints (`<model_name>.pkl`) obtained during training, the performance reports for the best model obtained during training or evaluation (`<model_name>_<mode>_results.txt`), and the `network.onnx` file for the challenge submission.
The actual data for the challenge is not included in this repository.
Whenever we refer to `data_dir` in this repository, we expect a path pointing to a folder containing all the data.
For example, if `data_dir` were `<path_to_folder>/tinyml_data/`, and assuming the same data given for the challenge, the directory tree should look like:
```
<path_to_folder>
|__tinyml_data
|    S01-AFb-1.txt
|    S01-AFb-2.txt
|    ...
|    S95-VT-342.txt
```
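As an illustration of how a single segment could be read from this layout, here is a minimal sketch; the one-value-per-line assumption and the helper names `parse_filename`/`load_segment` are ours, not the code in `datasets/iegm.py`:

```python
import os
import numpy as np

def parse_filename(filename):
    """Split e.g. 'S01-AFb-1.txt' into (subject id, rhythm label, segment index)."""
    subject, label, idx = os.path.splitext(filename)[0].split("-")
    return subject, label, int(idx)

def load_segment(data_dir, filename):
    """Read one IEGM segment, assuming one sample value per line of the .txt file."""
    return np.loadtxt(os.path.join(data_dir, filename), dtype=np.float32)

# Example usage:
# subject, label, idx = parse_filename("S01-AFb-1.txt")
# signal = load_segment("<path_to_folder>/tinyml_data", "S01-AFb-1.txt")
```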
- Clone this repo; we'll refer to the directory that you cloned as `${TINYML_CONTEST}`.
- Install the dependencies. We use Python 3.9, PyTorch >= 1.12.1 and CUDA 11.3:
```bash
conda create -n tinyml python=3.9
conda activate tinyml
pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu113
cd ${TINYML_CONTEST}
pip install -r requirements.txt
```
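Before launching a training, you can quickly verify that PyTorch sees the GPU:

```python
import torch

print("PyTorch:", torch.__version__)                # expect >= 1.12.1
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```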
- Launch `train.py` with the desired parameters.
- Be sure to set `--data_dir` to the directory containing the data (not the indexes).
- By default the script trains over the concatenation of both the Train and Test partitions of our data. Use `--not_concatenate_ts` if you wish to train only over the Train partition.
- IMPORTANT: starting a new training with `--checkpoints_dir` set to the folder `./checkpoints` (default) will OVERRIDE the files currently saved in that folder!

```bash
python train.py --data_dir '<path_to_folder/>' --checkpoints_dir '<path/to/save/destination/folder>'
```
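For reference, the flags above map onto an argument parser roughly like the sketch below; this is an illustration of the interface described in this README, not the actual `train.py` code (default values other than `--checkpoints_dir` are assumptions):

```python
import argparse

parser = argparse.ArgumentParser(description="Train TinyModel on the IEGM data.")
parser.add_argument("--data_dir", type=str, required=True,
                    help="Folder containing the raw .txt segments (not the .csv indexes).")
parser.add_argument("--checkpoints_dir", type=str, default="./checkpoints",
                    help="Where <model_name>.pkl and the result reports are written.")
parser.add_argument("--not_concatenate_ts", action="store_true",
                    help="Train only on the Train partition instead of Train + Test.")
args = parser.parse_args()
print(args)
```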
- Launch `evaluate.py` with the desired parameters.
- `--data_dir` should point to the directory containing the data (not the indexes).
- Be sure to set `--pretrained_model` to the path pointing to the `<model_name>.pkl` checkpoint you wish to use, then:

```bash
python evaluate.py --pretrained_model './checkpoints/tinymodel.pkl' --data_dir '<path_to_folder/>'
```
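If you want to sanity-check a checkpoint interactively before a full evaluation, something along these lines should work; it assumes the `.pkl` stores the whole pickled model (an assumption on our part) and uses an illustrative input shape:

```python
import torch

# Load the trained model on CPU so this check also runs on machines without CUDA.
model = torch.load("./checkpoints/tinymodel.pkl", map_location="cpu")
model.eval()

# Dummy input shaped like one 1-D IEGM segment; the (1, 1, 1250) shape and the
# two-class output assumed below are illustrative only.
dummy = torch.randn(1, 1, 1250)
with torch.no_grad():
    logits = model(dummy)
print("Predicted class:", logits.argmax(dim=1).item())
```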
- Launch `convert.py` with the desired parameters.
- Be sure to set `--pretrained_model` to the path pointing to the `<model_name>.pkl` checkpoint you wish to convert to the ONNX format.
- The output model will always be called `network.onnx` and will be located in the same folder as the `<model_name>.pkl` file used for the conversion.
- IMPORTANT: starting a new conversion will replace the `network.onnx` currently present in the `--pretrained_model` directory!

```bash
python convert.py --pretrained_model './checkpoints/tinymodel.pkl'
```
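Under the hood, this kind of conversion typically reduces to a `torch.onnx.export` call. The following is a minimal sketch, not the actual `convert.py` logic; the input shape and whole-model pickling are assumptions:

```python
import os
import torch

ckpt_path = "./checkpoints/tinymodel.pkl"
model = torch.load(ckpt_path, map_location="cpu")   # assumes the whole model was pickled
model.eval()

# Dummy input with the shape the network expects; (1, 1, 1250) is illustrative only.
dummy = torch.randn(1, 1, 1250)
onnx_path = os.path.join(os.path.dirname(ckpt_path), "network.onnx")
torch.onnx.export(model, dummy, onnx_path, input_names=["input"], output_names=["output"])
print("Saved", onnx_path)
```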
As an alternative, you can use Docker to perform the training (although it may be 10-15% slower). In order to do this you will need to:
- Install Docker and nvidia-docker. It is recommended to apply the post-installation steps for Docker on Linux.
- Make `./docker_build_run.sh` and `./utils/pipeline.sh` executable with:

```bash
chmod +x ./docker_build_run.sh
chmod +x ./utils/pipeline.sh
```

- If correctly installed, you should be able to execute `./docker_build_run.sh`, which builds and runs the pipeline (train + evaluation + ONNX creation).
- After the run, the `network.onnx` file is created in the `checkpoints/` folder, along with the results.
