Single Shot Detector for Autonomous Vehicle Vision. Detects vehicles, pedestrians and signs. Trained on data from Waymo and Trondheim.
Single Shot Detector for Autonomous Vehicle Vision
These instructions will get you a copy of the project up and running on your local machine for development and testing purposes.
ssh username@clab[00-25].idi.ntnu.no
mk_work_dir
cd ../../../../work/username
rm -rf ./*  # double-check this command before running it!
git clone https://github.com/Sandbergo/autonomous-vehicle-detection.git
cd autonomous-vehicle-detection/SSD
rm -rf ~/.local/lib/python3.6/site-packages/
rm -rf ~/.local/lib/python2.7/site-packages/
pip3 install --user torch torchvision
pip3 install --user -r requirements.txt
pip3 install --user --upgrade torch torchvision
pip3 install --user tensorflow
pip3 install --upgrade --user pandas
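After the installs above, a quick sanity check (not part of the official setup) is to verify that the packages resolve from the user site before moving on:

```python
import importlib.util

# Packages installed in the pip3 steps above; find_spec returns None
# for any package Python cannot resolve.
for pkg in ("torch", "torchvision", "tensorflow", "pandas"):
    status = "OK" if importlib.util.find_spec(pkg) else "MISSING"
    print(f"{pkg}: {status}")
```

If anything prints MISSING, re-run the corresponding `pip3 install --user` command before continuing.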
python3 setup_waymo.py
python3 train.py configs/train_waymo.yaml
python3 update_tdt4265_dataset.py
python3 train.py configs/train_tdt4265.yaml
python3 submit_results.py configs/train_tdt4265.yaml
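The five steps above can be chained into a single driver. This is a hypothetical convenience script (not part of the repo), assuming it is run from the SSD directory; it skips scripts it cannot find and stops at the first step that fails:

```python
# Hypothetical end-to-end driver for the training pipeline above (a sketch).
import subprocess
from pathlib import Path

steps = [
    ["python3", "setup_waymo.py"],
    ["python3", "train.py", "configs/train_waymo.yaml"],
    ["python3", "update_tdt4265_dataset.py"],
    ["python3", "train.py", "configs/train_tdt4265.yaml"],
    ["python3", "submit_results.py", "configs/train_tdt4265.yaml"],
]
for cmd in steps:
    if not Path(cmd[1]).exists():
        print(f"skipping {cmd[1]} (not found; run from the SSD directory)")
        continue
    subprocess.run(cmd, check=True)  # raises CalledProcessError if a step fails
```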
Remember to refresh the file explorer if working in VS Code Server with the Remote - SSH extension.
Download videos from Google Drive, and put them in datasets/videos/.
Run
mkdir -p outputs/videos  # not tested
python3 demo_video.py configs/train_tdt4265.yaml datasets/videos/2019-12-05_18-26-20-front_split2.mp4 outputs/videos/output1.mp4
python3 demo_video.py configs/train_tdt4265.yaml datasets/videos/2019-12-06_09-44-38-front_split1.mp4 outputs/videos/output2.mp4
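To process every clip in one go, a small batch driver can wrap the two commands above. This is a hypothetical helper (not in the repo), assuming the same config and directory layout as the commands above:

```python
# Hypothetical batch driver: run demo_video.py on every .mp4 in
# datasets/videos/, writing one annotated output file per input clip.
import subprocess
from pathlib import Path

CONFIG = "configs/train_tdt4265.yaml"
out_dir = Path("outputs/videos")
out_dir.mkdir(parents=True, exist_ok=True)

for src in sorted(Path("datasets/videos").glob("*.mp4")):
    dst = out_dir / f"{src.stem}_out.mp4"
    subprocess.run(["python3", "demo_video.py", CONFIG, str(src), str(dst)],
                   check=True)
```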
Follow the instructions above up to and including cloning the GitHub repository. Then change into the anaconda_setup directory and run install_conda_env.sh as specified below. This script creates a user install of Anaconda at /work/<user-name>/anaconda, and also updates both the Waymo and TDT4265 datasets. If you named your work directory (from 'mk_work_dir') something other than your username, pass that name as an argument to the script.
cd autonomous-vehicle-detection/anaconda_setup
source install_conda_env.sh
# If you have an alternate folder name
source install_conda_env.sh <alternate-folder-name>
The conda environment should already be activated after running the install script above. If bash is your default shell, you should see '(TermProject)' in front of the terminal prompt. You should be good to go!
The file autonomous-vehicle-detection/anaconda_setup/TermProject.yaml
is symlinked into the Anaconda install created above. This means that if a change is made in the .yaml file, the conda environment can be updated by running
conda env update -f TermProject.yaml -n TermProject --prune
conda activate TermProject
conda deactivate
conda info #information about current conda environment
conda list #shows installed packages
conda env list #shows conda environments on machine
First, in a terminal on a Cybele lab machine:
tensorboard --logdir outputs
Note the resulting port XXXX. Then, on a separate terminal running on local computer:
ssh -L 127.0.0.1:6008:127.0.0.1:XXXX username@clab[00-25].idi.ntnu.no
Access it at localhost:6008.
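To check that the tunnel is actually up before opening the browser, you can poke the forwarded port from the local machine. This is a stdlib-only convenience check (not part of the repo), using port 6008 as in the command above:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # With the ssh -L command above running, this should print True.
    print(port_open("127.0.0.1", 6008))
```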
The hierarchy should look like this:
./
├── LICENSE
├── papers
│ └── project.pdf
├── README.md
└── SSD
├── configs
│ ├── train_tdt4265.yaml
│ └── train_waymo.yaml
├── demo.ipynb
├── demo.py
├── demo_video.py
├── download_waymo.py
├── plot_scalars.ipynb
├── README.md
├── requirements.txt
├── setup_waymo.py
├── ssd
│ ├── config
│ │ ├── defaults.py
│ │ └── path_catlog.py
│ ├── container.py
│ ├── data
│ │ ├── build.py
│ │ ├── datasets
│ │ │ ├── evaluation
│ │ │ │ ├── coco
│ │ │ │ │ └── __init__.py
│ │ │ │ ├── __init__.py
│ │ │ │ ├── mnist
│ │ │ │ │ └── __init__.py
│ │ │ │ ├── voc
│ │ │ │ │ ├── eval_detection_voc.py
│ │ │ │ │ └── __init__.py
│ │ │ │ └── waymo
│ │ │ │ └── __init__.py
│ │ │ ├── __init__.py
│ │ │ ├── mnist_object_detection
│ │ │ │ ├── mnist_object_dataset.py
│ │ │ │ ├── mnist.py
│ │ │ │ └── visualize_dataset.py
│ │ │ ├── tdt4265.py
│ │ │ └── waymo.py
│ │ ├── samplers.py
│ │ └── transforms
│ │ ├── __init__.py
│ │ ├── target_transform.py
│ │ └── transforms.py
│ ├── engine
│ │ ├── inference.py
│ │ └── trainer.py
│ ├── modeling
│ │ ├── backbone
│ │ │ ├── basic.py
│ │ │ └── vgg.py
│ │ ├── box_head
│ │ │ ├── box_head.py
│ │ │ ├── inference.py
│ │ │ ├── loss.py
│ │ │ └── prior_box.py
│ │ └── detector.py
│ ├── solver
│ │ ├── build.py
│ │ └── lr_scheduler.py
│ ├── torch_utils.py
│ └── utils
│ ├── box_utils.py
│ ├── checkpoint.py
│ ├── logger.py
│ ├── metric_logger.py
│ ├── model_zoo.py
│ └── nms.py
├── submit_results.py
├── test.ipynb
├── test.py
├── train.ipynb
├── train.py
├── tutorials
│ ├── annotation_images
│ │ ├── canvas_completed.png
│ │ ├── canvas.png
│ │ ├── canvas_shape_part1.png
│ │ ├── canvas_shape_part2.png
│ │ ├── canvas_shape_part3.png
│ │ ├── create_shape.png
│ │ ├── login_edit.png
│ │ ├── task_assignee_edit.png
│ │ └── tasks_edit.png
│ ├── annotation_tutorial.md
│ ├── dataset.md
│ ├── environment_setup.md
│ ├── evaluation_tdt4265.md
│ ├── run.md
│ └── tensorboard.md
├── update_tdt4265_dataset.py
├── visualize_dataset.ipynb
└── visualize_dataset.py
- Lars Sandberg @Sandbergo
- Sondre Olsen @sondreo
- Bendik Austnes @kidneb7
- Olav Pedersen @olavpe
- Håkon Hukkelås @hukkelas