How to run multiple detection models in single pipeline? #789
Comments
Yes, that should be possible - will provide an example. Question: should the results for the detections be combined and sent together (one set per frame), or separated?
@nnshah1, yeah, actually I am able to see that the person model is sending data and only its bounding boxes are visible. Ideally I want the person to be detected first, then the face detected, and the face detection matrix given to the recognition models. So yes, can the results be combined and sent together on each frame? Please help with it.
To clarify - do you want to do face detection only within the person detection region, or to do them independently? That is, do you want faces and people detected separately, or to do people detection -> face detection (within detected people) -> recognition (within faces)?
Actually I want both. 1. people detection -> face detection (within detected people) -> recognition (within faces) -> mqtt.
In the second one - is it sufficient to have live stream --> person detection --> face detection --> age-gender-recognition --> mqtt (i.e. both branches combining into a single mqtt endpoint)?
Yes, but will it send data if a person is standing with their back visible and not their face? In that case, will data be sent to mqtt?
yes |
Yeah, then it's great for me.
Please find a template below for each use case.

Person_detect -> face_detect (roi list) -> age_gender_recog -> metaconvert -> metapublish

Gst-launch pipeline:

Template adjusted based on your example:

Person_detect -> queue -> face_detect -> queue -> metaconvert -> metapublish

Gst-launch pipeline:

Template adjusted based on your example:
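For reference, a minimal gst-launch sketch of the first chain (person detection, then face detection restricted to the detected-person ROIs via `inference-region=roi-list`, then age/gender classification, then MQTT). This is not the original template from the thread; the RTSP URL, model paths, and MQTT address below are placeholders you would replace with your own:

```
gst-launch-1.0 rtspsrc location=<RTSP_URL> ! rtph264depay ! h264parse ! decodebin ! videoconvert ! \
  gvadetect model=person-detection.xml model-proc=person-detection.json ! queue ! \
  gvadetect model=face-detection.xml model-proc=face-detection.json inference-region=roi-list ! queue ! \
  gvaclassify model=age-gender-recognition.xml model-proc=age-gender-recognition.json ! queue ! \
  gvametaconvert ! gvametapublish method=mqtt address=localhost:1883 topic=inference ! fakesink
```

With `inference-region=roi-list`, the second `gvadetect` runs only inside regions produced by the first detector, so a person seen from behind still publishes a person detection even when no face is found.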
@tthakkal @nnshah1 I want to run 2 detection models together: one is a head detection model, and the other should run only on the specific ROI passed to it by the first detection model. Based on your previous suggestion of using roi-list I have tried that, but it is not working for me. Please see the pipeline that I am trying to run.
It gets stuck with the error below:
@divdaisymuffin if it is head detection, please set the right
@tthakkal
Try with gst-launch by exec'ing into the container and see if it works.
For any further debugging, set up a meeting.
@divdaisymuffin Which version are you using? If the element doesn't support the property, it's probably a DL Streamer version mismatch.
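One way to check whether your installed DL Streamer build supports a given property is to inspect the element from inside the container (assuming the standard GStreamer tooling is available there):

```
# List gvadetect's properties and check whether inference-region is among them
gst-inspect-1.0 gvadetect | grep -A2 inference-region

# Show version information for the plugin providing gvadetect
gst-inspect-1.0 gvadetect | grep -i version
```

If `inference-region` does not appear in the output, the pipeline error above is expected and the fix is to upgrade DL Streamer rather than to change the pipeline.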
Can I use two or more gvadetect elements? I actually want to use person detection along with face detection. I tried something like the below, but it didn't work.
{
  "name": "object_detection",
  "version": 2,
  "type": "GStreamer",
  "template": "rtspsrc udp-buffer-size=212992 name=source ! queue ! rtph264depay ! h264parse ! video/x-h264 ! tee name=t ! queue ! decodebin ! videoconvert name=\"videoconvert\" ! video/x-raw,format=BGRx ! queue leaky=upstream ! t. ! gvadetect ie-config=CPU_BIND_THREAD=NO model=\"{models[person_detection_2020R2][1][network]}\" model-proc=\"{models[person_detection_2020R2][1][proc]}\" name=\"detection1\" threshold=0.50 ! t. ! gvadetect ie-config=CPU_BIND_THREAD=NO model=\"{models[face_detection_adas][1][network]}\" model-proc=\"{models[face_detection_adas][1][proc]}\" name=\"detection\" threshold=0.50 ! gvaclassify model=\"{models[age-gender-recognition-retail-0013][1][network]}\" model-proc=\"{models[age-gender-recognition-retail-0013][1][proc]}\" name=\"recognition\" model-instance-id=recognition ! gvametaconvert name=\"metaconvert\" ! queue ! gvametapublish name=\"destination\" ! appsink name=appsink t. ! splitmuxsink max-size-time=60000000000 name=\"splitmuxsink\"",
  "description": "Object Detection Pipeline",
  "parameters": {
    "type": "object",
    "properties": {
      "inference-interval": { "element": "detection", "type": "integer", "minimum": 0, "maximum": 4294967295 },
      "cpu-throughput-streams": { "element": "detection", "type": "string" },
      "n-threads": { "element": "videoconvert", "type": "integer" },
      "nireq": { "element": "detection", "type": "integer", "minimum": 1, "maximum": 64 },
      "recording_prefix": { "type": "string", "default": "recording" }
    }
  }
}
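Since pipeline definitions like the one above are hand-edited single-line JSON strings, a quick sanity check before registering them can catch typos early. The sketch below is a hypothetical helper (the key list and the `'!!'` check are assumptions about common mistakes, not part of any framework API):

```python
import json

# Keys a pipeline definition like the one above is expected to carry.
# This list is an assumption for illustration, not an official schema.
REQUIRED_KEYS = {"name", "version", "type", "template"}

def validate_pipeline_definition(text: str) -> list:
    """Return a list of problems found in a pipeline-definition JSON string."""
    problems = []
    try:
        definition = json.loads(text)
    except json.JSONDecodeError as err:
        return [f"invalid JSON: {err}"]
    missing = REQUIRED_KEYS - definition.keys()
    if missing:
        problems.append(f"missing keys: {sorted(missing)}")
    # Every '!'-separated element in the template should be non-empty;
    # this catches an accidental '!!' from hand-editing the one-line string.
    template = definition.get("template", "")
    if template and any(not part.strip() for part in template.split("!")):
        problems.append("empty element in template (check for '!!')")
    return problems

example = '{"name": "object_detection", "version": 2, "type": "GStreamer", "template": "videotestsrc ! fakesink"}'
print(validate_pipeline_definition(example))  # []
```

A check like this does not validate the GStreamer syntax itself (gst-launch does that), but it flags malformed JSON before the pipeline server rejects it with a less helpful error.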