dpu-sc presents a rapid demo that runs AI inference on the DPU with an MPSoC.
- Xilinx KV260
- OpenCV
- XIR
- VART
- Vitis-AI 1.4
```shell
sudo python3 -m pip install --upgrade pip
sudo python3 -m pip install scikit-build cmake opencv-python mock cython
sudo python3 -m pip install tensorflow==2.4.1 -f https://tf.kmtea.eu/whl/stable.html
```
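To confirm the packages above are importable before running the demo, a minimal check could look like this (a sketch; the module names are assumptions — e.g. the `opencv-python` package imports as `cv2`):

```python
import importlib.util

# Check that each required module can be found without importing it
# (importlib.util.find_spec returns None when the module is missing).
for module in ("cv2", "numpy", "tensorflow"):
    found = importlib.util.find_spec(module) is not None
    print(f"{module}: {'OK' if found else 'MISSING'}")
```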
We provide three modes for the AI sample:

- `customcnn`: Default CNN model, for inferencing cats and dogs. In dpu-sc, add the argument `-x cnn` to use it.
- `yolov3-voc`: Default YOLO model, for inferencing some common objects. In dpu-sc, add the argument `-x yolo` to use it.
- License Plate Recognition (LPR): We support Taiwan license plate detection and recognition. Make sure you have installed `pytesseract` in your environment. Replace the model path with `models/obj/yolov4-tiny_lpr.xmodel` and the anchors with `19,14,62,43,63,50,70,45,71,55,80,59` in config.json. Add the arguments `-x yolo -lpr` to use it.
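For the LPR mode, the edit to config.json might look like the following sketch (an assumption based on the values quoted above; keys other than `MODEL` and `ANCHORS` keep their existing values and are omitted here):

```json
"MODLES": {
    "XMODELS_OBJ": {
        "TYPE": "yolo",
        "MODEL": "models/obj/yolov4-tiny_lpr.xmodel",
        "ANCHORS": [19, 14, 62, 43, 63, 50, 70, 45, 71, 55, 80, 59]
    }
}
```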
Notice: Our models were built for DPU4096. If you want to use DPU3136 or another DPU configuration, please contact our PM James ([email protected]). We currently support Vitis-AI 1.4. If you want to change the model, you can modify the model path in config.json.
```shell
python3 dpusc -i <path-to-image> -x <xmodel-type> -t <output-type>
python3 dpusc -v <path-to-video> -x <xmodel-type> -t <output-type>
python3 dpusc -c <webcam-device-node> -x <xmodel-type> -t <output-type>
```
```shell
# Inference with image, output image, using CNN
python3 dpusc -i dataset/images/dog.jpg -x cnn -t image
```
After executing the above command (with the CNN xmodel), you will get the result and the output image like below:
```shell
# Inference with image, output DP, using YOLO
python3 dpusc -i dataset/images/moto.jpg -x yolo -t dp
```
After executing the above command (with the YOLO xmodel), you will get the result and the output image like below:
```shell
# Inference with image, output image, using LPR
python3 dpusc -i dataset/images/lpr-2.jpg -x yolo -lpr -t image
```
After executing the above command (in LPR mode), you will get the result and the output image like below:
```shell
# Inference with video, output image, using YOLO
python3 dpusc -v dataset/videos/walking_humans.nv12.1920x1080.h264 -x yolo -t image

# Inference with video, output video, using YOLO
python3 dpusc -v dataset/videos/walking_humans.nv12.1920x1080.h264 -x yolo -t video

# Inference with webcam, output DP, using YOLO
python3 dpusc -c 0 -x yolo -t dp

# Inference with webcam, output image, using YOLO
python3 dpusc -c 0 -x yolo -t image

# Inference with video, output DP, using LPR
python3 dpusc -v <video path> -x yolo -lpr -t dp
```
If you run with CNN, the dataset must follow the naming rule: the label is the prefix of the file name.
e.g.
- In images_demo we detect cat or dog.
- In images_usb we detect perfect or defect.

and so on.
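The naming rule can be sketched as follows (an assumption for illustration: the label is everything before the first `.` in the file name, e.g. `cat.01.jpg`):

```python
from pathlib import Path

def label_from_filename(path: str) -> str:
    """Return the class label encoded as the file-name prefix.

    Assumption: files are named <label>.<anything>.<ext>, e.g. "cat.01.jpg".
    """
    return Path(path).name.split(".")[0]

print(label_from_filename("dataset/images_demo/cat.01.jpg"))   # cat
print(label_from_filename("dataset/images_usb/defect.07.png")) # defect
```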
Xmodel and environment settings are in config.json.
- DISPLAY

  ```json
  "DISPLAY": {
      "WIDTH": "1920",
      "HEIGHT": "1080"
  }
  ```

  | Key Name | Description |
  | -------- | ----------- |
  | WIDTH    | The width of your display resolution. |
  | HEIGHT   | The height of your display resolution. |
- MODLES-XMODELS_CLASS

  ```json
  "MODLES": {
      "XMODELS_CLASS": {
          "TYPE": "cnn",
          "MODEL": "models/cnn/customcnn.xmodel",
          "CLASS": ["dog", "cat"],
          "INPUT_SIZE": [250, 200]
      }
  }
  ```

  | Key Name   | Description |
  | ---------- | ----------- |
  | TYPE       | Xmodel's type. |
  | MODEL      | Path to the xmodel. |
  | CLASS      | The classes that the xmodel provides. |
  | INPUT_SIZE | The image size that the xmodel accepts. |
- MODLES-XMODELS_OBJ

  ```json
  "MODLES": {
      "XMODELS_OBJ": {
          "TYPE": "yolo",
          "MODEL": "models/obj/yolov3-voc.xmodel",
          "CLASS": ["aeroplane", "bicycle", "bird", "boat", "bottle", "bus", "car", "cat",
                    "chair", "cow", "diningtable", "dog", "horse", "motobike", "person",
                    "pottedplant", "sheep", "sofa", "train", "tv"],
          "ANCHORS": [10, 13, 16, 30, 33, 23, 30, 61, 62, 45, 59, 119, 116, 90, 156, 198, 373, 326],
          "INPUT_SIZE": [416, 416],
          "IOU": "0.213",
          "NMS": "0.45",
          "CONF": "0.2",
          "BOX_MAX_NUM": "30"
      }
  }
  ```

  | Key Name    | Description |
  | ----------- | ----------- |
  | TYPE        | Xmodel's type. |
  | MODEL       | Path to the xmodel. |
  | CLASS       | The classes that the xmodel provides. |
  | ANCHORS     | The anchors that the xmodel provides. |
  | INPUT_SIZE  | The image size that the xmodel accepts. |
  | IOU         | Xmodel's IoU (Intersection over Union). |
  | NMS         | Xmodel's NMS (Non-Maximum Suppression) threshold. |
  | CONF        | Xmodel's confidence threshold. |
  | BOX_MAX_NUM | The maximum number of bounding boxes that can be displayed in an image. |
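To illustrate what the IOU setting compares, here is a minimal IoU computation for two axis-aligned boxes (a generic sketch, not dpu-sc's implementation; boxes are assumed to be `(x1, y1, x2, y2)` corner coordinates):

```python
def iou(box_a, box_b):
    """Intersection over Union of two (x1, y1, x2, y2) boxes."""
    # Coordinates of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (0, 0, 10, 10)))   # 1.0 (identical boxes)
print(iou((0, 0, 10, 10), (20, 20, 30, 30))) # 0.0 (disjoint boxes)
```

During post-processing, detections whose overlap with a higher-confidence box exceeds such a threshold are suppressed.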
- OUTPUT

  ```json
  "OUTPUT": {
      "VIDEO_OUTPUT": "./output.mp4",
      "IMAGE_OUT_DIR": "./"
  }
  ```

  | Key Name      | Description |
  | ------------- | ----------- |
  | VIDEO_OUTPUT  | The path of the output video. |
  | IMAGE_OUT_DIR | The path of the output image directory. |
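Reading these settings at runtime can be sketched as follows (a sketch assuming the layout shown above; note that numeric values are stored as strings, so they need explicit conversion):

```python
import json

# Sample with the same layout as config.json (values are strings, as above).
cfg = json.loads("""
{
  "DISPLAY": {"WIDTH": "1920", "HEIGHT": "1080"},
  "OUTPUT": {"VIDEO_OUTPUT": "./output.mp4", "IMAGE_OUT_DIR": "./"}
}
""")

width = int(cfg["DISPLAY"]["WIDTH"])   # numeric fields need int() conversion
height = int(cfg["DISPLAY"]["HEIGHT"])
print(width, height)  # 1920 1080
print(cfg["OUTPUT"]["VIDEO_OUTPUT"])  # ./output.mp4
```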
We provide a unittest script in /unittest.
You can use the following steps to download and install TensorFlow, or use our RPM package to install it (please contact [email protected]).

- Use the following command to download the `tensorflow` wheel:

  ```shell
  sudo wget https://github.com/KumaTea/tensorflow-aarch64/releases/download/v2.4/tensorflow-2.4.1-cp37-cp37m-linux_aarch64.whl
  ```

- Install the wheel without any dependencies:

  ```shell
  sudo pip3 install --no-dependencies tensorflow-2.4.1-cp37-cp37m-linux_aarch64.whl
  ```

- After installing `tensorflow`, follow the instructions below to manually install the dependencies we need:
  - Create a file named `requirements.txt` and fill in the following dependencies:

    ```
    Keras-Preprocessing==1.1.2
    flatbuffers==22.12.6
    termcolor==2.1.1
    astunparse==1.6.3
    gast==0.5.3
    opt-einsum==3.3.0
    typing-extensions==4.4.0
    wrapt==1.14.1
    google-api-python-client==2.70.0
    absl-py==1.3.0
    ```

  - Use the following command to install the dependencies:

    ```shell
    python3 -m pip install -r requirements.txt
    ```

- Now you can run dpu-sc with TensorFlow.