Another short tutorial in the HOW TO NEUROPSYTOX series:
- Behavior analysis tutorial
- Installation
- Running DLC for animal tracking
- Behavior metrics
- Assistant watchers to create ethograms
- Discovering behaviors (pose + frame networks)
By hand:
- Install Python 3.7 or higher.
- Install DeepLabCut using pip:
pip install deeplabcut[gui]
Through an Anaconda YAML file:
- Install Anaconda.
- Create a new environment: conda create -n my_env python=3.7.
- Activate the environment: conda activate my_env.
- Install DeepLabCut using conda: conda install -c conda-forge deeplabcut.
- Install additional dependencies: pip install deeplabcut[gui].
- Verify the installation: python -c "import deeplabcut".
Alternatively, clone the repository (git clone https://github.com/DeepLabCut/DeepLabCut.git) or download it as a ZIP file.
Open the Anaconda Navigator.
Run:
import deeplabcut
Links 📖: https://deeplabcut.github.io/DeepLabCut/docs/installation.html
Import the library:
import deeplabcut
Open the config file with the Load project option.
To analyze videos, you must first have them cropped and/or compressed (reduced resolution).
- If not, use the DLC tool:
1. Just click Crop for as many videos as you have.
2. You can also rotate or flip your videos.
3. If clicking is tedious, do it with an ffmpeg script (-crf adjusts the compression level; higher values mean lower quality and smaller files; see the batch sketch below):
ffmpeg -i input.mp4 -vcodec libx265 -crf 28 input_compressed.mp4
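For many videos at once, here is a minimal batch sketch (it assumes your clips sit in a ./videos folder and that ffmpeg is on your PATH; the folder names are hypothetical):

import subprocess
from pathlib import Path

in_dir = Path("videos")               # hypothetical input folder
out_dir = Path("videos_compressed")   # hypothetical output folder
out_dir.mkdir(exist_ok=True)

for video in in_dir.glob("*.mp4"):
    # Re-encode with x265; CRF 28 trades quality for a much smaller file
    subprocess.run(
        ["ffmpeg", "-i", str(video), "-vcodec", "libx265",
         "-crf", "28", str(out_dir / video.name)],
        check=True,
    )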
You can now analyze your videos with just one click; remember to enable the save-as-CSV option.
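If you prefer to script this step instead of clicking, DeepLabCut exposes it as analyze_videos; a minimal sketch (project and video paths are hypothetical):

import deeplabcut

config_path = "/path/to/your_project/config.yaml"   # hypothetical project path
videos = ["/path/to/videos_compressed/"]            # a folder or a list of video files

# save_as_csv=True also exports the coordinates as CSV next to the .h5 files
deeplabcut.analyze_videos(config_path, videos, videotype=".mp4", save_as_csv=True)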
For quality control, it is highly recommended to view a few labeled videos from each batch to be sure the network is predicting your animal's (mouse or rat) pose and the maze. To do that, just click on the Create videos tab and press the button.
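The same check can be scripted with DeepLabCut's create_labeled_video; a minimal sketch (paths are hypothetical):

import deeplabcut

config_path = "/path/to/your_project/config.yaml"      # hypothetical project path
sample_videos = ["/path/to/videos/batch1_sample.mp4"]  # a couple of videos per batch

# Writes a copy of each video with the predicted body parts drawn on every frame
deeplabcut.create_labeled_video(config_path, sample_videos)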
Your animals are tracked! You can work with the coordinates however you want (see Behavior metrics or Discovering behaviors).
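For example, the exported CSV can be loaded with pandas to compute your own metrics; a minimal sketch (the file name and the "nose" body part are hypothetical):

import pandas as pd

# DLC CSVs have a 3-row header: scorer, body part, coordinate (x / y / likelihood)
df = pd.read_csv("video1DLC_resnet50.csv", header=[0, 1, 2], index_col=0)

scorer = df.columns.get_level_values(0)[0]
nose = df[scorer]["nose"]                  # hypothetical body part name

# Drop low-confidence points, then sum the frame-to-frame displacement
# (gaps left by dropped frames will slightly distort single steps)
nose = nose[nose["likelihood"] > 0.9]
distance_px = ((nose["x"].diff() ** 2 + nose["y"].diff() ** 2) ** 0.5).sum()
print(f"Total distance travelled: {distance_px:.1f} px")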
You will have 3 more files per video: the pose estimates as .h5 and (if enabled) .csv, plus a metadata .pickle file.
If the DLC pose network (or the NOR network) is doing a terrible job of tracking, you can re-train it using your own videos (an API sketch of the whole loop follows after this list):
- Use a few videos to re-train (maybe 4 per recording condition).
- You have to re-label; in total, that would be about 5 frames per video.
- Extract outlier frames.
- Re-label them.
- Re-train the network.
- Analyze and run quality control (see the section above).
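A minimal sketch of that re-training loop through the DeepLabCut API (project and video paths are hypothetical):

import deeplabcut

config_path = "/path/to/your_project/config.yaml"   # hypothetical project path
bad_videos = ["/path/to/videos/worst_case.mp4"]     # videos where tracking fails

deeplabcut.extract_outlier_frames(config_path, bad_videos)  # pull poorly tracked frames
deeplabcut.refine_labels(config_path)                       # opens the labeling GUI to correct them
deeplabcut.merge_datasets(config_path)                      # fold the corrections into the dataset
deeplabcut.create_training_dataset(config_path)
deeplabcut.train_network(config_path)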
Please read the paper and check the GitHub repository: https://github.com/ETHZ-INS/DLCAnalyzer
Based on those scripts, I built a quick Shiny app to run them through a GUI:
- Open RStudio
- Load the app file
- Just run the app
- It will open the GUI
- Fill in the inputs:
- Add the input directory; the files are the CSV coordinate files produced by DLC.
- Load the zoneinfo file for the EPM or NOR.
- Fill in the fps (frames per second) of your videos; all CSV files must have been obtained from videos with the same fps (you can check this in the video details).
- Fill in the options; you can enable the export option to create the Excel table.
- Metrics are ready for each CSV file!
- You can explore the metrics based on demographics: just upload the demographics file and click Submit.
- You will see the same table as before, but now, in the Graph tab, you can create different visualizations of the metrics and demographics; select the X, Y, and Z variables you're interested in.
- At the end, you will have the indexes file with all the basic DLCAnalyzer metrics (a sketch for exploring it outside the app follows below).
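If you want to explore the exported table outside the app, here is a minimal pandas sketch (all file and column names are hypothetical, assuming the export and the demographics table share a "file" column):

import pandas as pd

metrics = pd.read_excel("indexes.xlsx")   # hypothetical export file from the app
demo = pd.read_csv("demographics.csv")    # hypothetical demographics file

# Join metrics to demographics and summarize one metric per group
merged = metrics.merge(demo, on="file")   # shared "file" column is an assumption
print(merged.groupby("group")["total_distance"].describe())  # hypothetical columns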