Merge pull request #113 from hmllr/all_examples_on_sdk_4_8_0_2
Updates face detection to sdk 4.8.0.2
knmcguire authored Mar 3, 2023
2 parents d397115 + 33b17f6 commit 915603a
Showing 10 changed files with 449 additions and 129 deletions.
Binary file added docs/images/face_detection.png
43 changes: 37 additions & 6 deletions docs/img-proc-examples/face-detection.md
@@ -5,9 +5,41 @@ page_id: face-detection

This is the face detection application based on the example developed by GreenWaves Technologies. It is tailored more towards the AI-deck and uses the WiFi streamer to stream the output to your computer.

- This was tested on **GAP_SDK version 3.8.1**. Make sure that next to `make SDK`, you also do `make gap_tools`.
+ This was tested on **GAP_SDK version 4.8.0.2**, which at the moment of writing was the newest we had a docker container for.

- > Working directory: AIdeck_examples/GAP8/image_processing_examples/FaceDetection
+ # Docker GAP-SDK

Make sure to follow the [getting started with the AI deck tutorial](https://www.bitcraze.io/documentation/tutorials/getting-started-with-aideck/) before continuing.

To clean, compile, and flash the FaceDetection example you have to be in the `aideck-gap8-examples` directory and execute:

docker run --rm -v ${PWD}:/module --privileged aideck-with-autotiler tools/build/make-example examples/image_processing/FaceDetection clean all flash

(if you did not modify the Makefile, it should be possible to skip _clean_).

If you configured your Crazyflie firmware so that the AI-deck acts as an access point (as described in the [wifi-streamer example](/docs/test-functions/wifi-streamer.md)), you can now simply connect to it. If you configured it to connect to an existing network, make sure your computer is on the same network and check the IP address of the AI-deck, for example by connecting to it through the cfclient and checking the console prints.

Now you can run the image viewer:

python3 opencv-viewer.py -n "AI_DECK_IP"

where `AI_DECK_IP` should be replaced with the deck's IP address, for example 192.168.4.1 (the default value, used when the AI-deck acts as access point; in that case the argument can be omitted).
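For reference, here is a minimal sketch of the kind of framing such a viewer has to parse. The field layout shown (a 4-byte packet info of little-endian uint16 length plus routing and function bytes, followed by an image header starting with a 0xBC magic byte) is an assumption for illustration only; the authoritative parsing lives in `opencv-viewer.py` itself.

```python
# Hypothetical sketch of the stream framing a viewer like opencv-viewer.py
# parses. The exact field layout is an assumption for illustration.
import struct

def parse_packet_info(raw: bytes):
    """Parse a 4-byte packet info: uint16 length, routing byte, function byte."""
    length, routing, function = struct.unpack('<HBB', raw)
    return length, routing, function

def parse_image_header(raw: bytes):
    """Parse an image-header payload: magic, width, height, depth, format, size."""
    magic, width, height, depth, fmt, size = struct.unpack('<BHHBBI', raw)
    assert magic == 0xBC, "not an image header"
    return width, height, depth, fmt, size

# Example with synthetic bytes (no AI-deck needed):
info = struct.pack('<HBB', 13, 0, 0)
header = struct.pack('<BHHBBI', 0xBC, 324, 244, 1, 0, 324 * 244)
print(parse_packet_info(info))     # (13, 0, 0)
print(parse_image_header(header))  # (324, 244, 1, 0, 79056)
```

In the real script the same parsing runs in a loop over a TCP socket, accumulating payload bytes until a full frame of `size` bytes has arrived.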

Now you should see something like this:

![image streamer](/docs/images/face_detection.png)

Note that the face detection does not work well under all conditions; try with a white background and dark hair.

In the makefile you can comment the following line if you would like to disable the streamer:

APP_CFLAGS += -DUSE_STREAMER

Or, which is more fun to play with, you can set the resolution of the streamed image with _STREAM\_W_ and _STREAM\_H_. Note that you cannot set values higher than 324x244, and that disproportionate changes will result in distorted images. Note also that you need to adapt the resolution in _opencv-viewer.py_ as well.
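Since disproportionate width/height choices distort the image, a small sanity check can verify a candidate resolution against the sensor's full 324x244 frame before you rebuild. This is plain Python written just for this illustration, not part of the repository:

```python
# Check that a candidate STREAM_W x STREAM_H keeps the sensor's
# 324:244 aspect ratio and stays within the maximum resolution.
MAX_W, MAX_H = 324, 244

def stream_resolution_ok(w: int, h: int) -> bool:
    within_bounds = 0 < w <= MAX_W and 0 < h <= MAX_H
    proportional = w * MAX_H == h * MAX_W  # cross-multiplied ratio test
    return within_bounds and proportional

print(stream_resolution_ok(162, 122))  # True: exactly half of 324x244
print(stream_resolution_ok(160, 120))  # False: 4:3 is close, but not exact
```

Note that the 160x120 default is a 4:3 ratio, very close to but not exactly the sensor's 324:244, so a tiny amount of stretch is already accepted in practice.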

# Local GAP-SDK installation

> Working directory: aideck-gap8-examples/examples/image_processing/FaceDetection
To make the face detection application

@@ -24,9 +56,8 @@ To flash the code fully on the ai deck:
make flash


- In the makefile you can uncomment the following lines if you would like to use the himax camera or the streamer:
+ In the makefile you can comment the following line if you would like to disable the streamer:

- # APP_CFLAGS += -DUSE_CAMERA
- # APP_CFLAGS += -DUSE_STREAMER
+ APP_CFLAGS += -DUSE_STREAMER

- After that, you can also use `viewer.py` to see the image stream. The rectangle generated around your face is implemented by the firmware.
+ After that, you can also use `opencv-viewer.py` to see the image stream. The rectangle generated around your face is implemented by the firmware.
7 changes: 5 additions & 2 deletions docs/test-functions/test-camera.md
@@ -3,7 +3,10 @@ title: Testing the Himax Camera
page_id: test-camera
---

- This concerns the example in folder *AIdeck_examples/GAP8/test_functionalities/test_camera/*. This was tested on **GAP_SDK version 3.8.1**.
+ # Testing the Himax camera on the AIdeck
+
+ This concerns the example in folder *AIdeck_examples/GAP8/test_functionalities/test_camera/*. This was tested on **GAP_SDK version 4.8.0.2**, which at the moment of writing was the newest we had a docker container for.

In the makefile enable `APP_CFLAGS += -DASYNC_CAPTURE` if you want to test the asynchronous camera capture and remove it if you want to test the normal one. To save a color image enable `APP_CFLAGS += -DCOLOR_IMAGE`. And, to capture a `324x324` image enable `APP_CFLAGS += -DQVGA_MODE`. *Please note though that capturing an image in non-QVGA mode might not always work correctly.*
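As a rough illustration of what these flags imply for memory, the raw frame-buffer size per mode can be computed as below. The 3-bytes-per-pixel figure for a demosaicked color image is an assumption for illustration; the actual buffer handling is in the example's C code.

```python
# Illustrative raw capture-buffer sizes for the test_camera modes.
def capture_buffer_size(width: int, height: int, color: bool) -> int:
    # One byte per pixel for grayscale; three bytes per pixel for a
    # demosaicked color image (assumed layout, for illustration only).
    return width * height * (3 if color else 1)

# QVGA_MODE captures a 324x324 image (per the flag description above):
print(capture_buffer_size(324, 324, color=False))  # 104976 bytes
print(capture_buffer_size(324, 324, color=True))   # 314928 bytes
```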

@@ -16,5 +19,5 @@ for directly running the code from L2 (second level internal memory) with your p
## Run in Docker
To build and execute in Docker, we need to place the `demosaicking` files in the `common` folder inside the Docker container.
```
- docker run --rm -it -v $PWD:/module/data/ --device /dev/ttyUSB0 --privileged -P gapsdk:3.7 /bin/bash -c 'export GAPY_OPENOCD_CABLE=interface/ftdi/olimex-arm-usb-tiny-h.cfg; source /gap_sdk/configs/ai_deck.sh; cd /module/data/; make clean all run'
+ docker run --rm -it -v $PWD:/module/data/ --device /dev/ttyUSB0 --privileged -P bitcraze/aideck:4.8.0.2 /bin/bash -c 'export GAPY_OPENOCD_CABLE=interface/ftdi/olimex-arm-usb-tiny-h.cfg; source /gap_sdk/configs/ai_deck.sh; cd /module/data/; make clean all run'
```
2 changes: 1 addition & 1 deletion examples/image_processing/FaceDetection/FaceDetModel.c
@@ -74,7 +74,7 @@ int main(int argc, char **argv)
GenerateCascadeClassifier("Cascade_3",Wout,Hout,24,24);


- GenerateResize("final_resize", W, H, 160, 120);
+ GenerateResize("final_resize", W, H, STREAM_W, STREAM_H);

// Now that we are done with model parsing we generate the code
GenerateTilingCode();
15 changes: 11 additions & 4 deletions examples/image_processing/FaceDetection/Makefile
@@ -18,25 +18,32 @@ APP = face_detection

NB_FRAMES ?= -1

STREAM_W = 160
STREAM_H = 120

FACE_DET_MODEL_SRC = FaceDetGenerator.c FaceDetModel.c
FACE_DET_MODEL_GEN = FaceDetKernels
FACE_DET_MODEL_GEN_C = $(addsuffix .c, $(FACE_DET_MODEL_GEN))
FACE_DET_MODEL_GEN_CLEAN = $(FACE_DET_MODEL_GEN_C) $(addsuffix .h, $(FACE_DET_MODEL_GEN))
FACE_DET_SRCS += main.c faceDet.c FaceDetBasicKernels.c ImageDraw.c $(FACE_DET_MODEL_GEN_C)

APP_SRCS += $(FACE_DET_SRCS)
APP_SRCS += ../../../lib/cpx/src/com.c ../../../lib/cpx/src/cpx.c
APP_INC += $(TILER_INC)
APP_CFLAGS += -O3 -g -D__PMSIS__ -DNB_FRAMES=$(NB_FRAMES)
# APP_CFLAGS += -DUSE_CAMERA
# APP_CFLAGS += -DUSE_STREAMER
APP_CFLAGS += -DUSE_STREAMER
APP_CFLAGS += -DSTREAM_W=$(STREAM_W)
APP_CFLAGS += -DSTREAM_H=$(STREAM_H)
APP_INC += ../../../lib/cpx/inc
APP_CFLAGS += -DconfigUSE_TIMERS=1 -DINCLUDE_xTimerPendFunctionCall=1

BOARD_NAME ?= ai_deck
PMSIS_OS ?= pulp_os
USE_PMSIS_BSP = 1
APP_LDFLAGS += -lgaptools -lgaplib

export GAP_USE_OPENOCD=1
io=host
# io=uart
PMSIS_OS = freertos

# This needs to be defined or else some other IO functionalities or cluster things will break...
APP_CFLAGS += -DHIMAX
