
Export detections (bounding-box coordinates, classification, ...) for detection/tracking #62

Open
careyer opened this issue Sep 22, 2021 · 0 comments

Comments


careyer commented Sep 22, 2021

  • rpi-deep-pantilt version: 1.2.1
  • Python version: 3.7.3
  • TensorFlow version: 2.4.0
  • Operating System: RaspiOS lite (latest version)

Description

I'm looking for a way to export the bounding-box coordinates and classifications produced during detection/tracking. I want to evaluate where in the video frame objects are detected. Is there a way to export that information so it can be processed elsewhere, e.g. by piping the output of rpi-deep-pantilt to another program/process? A sketch of the kind of output I have in mind follows below.
It would be awesome if this could work both with and without the Edge TPU. (P.S.: I noticed that the console output with the Edge TPU is much less informative than without it.)
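
To make this more concrete, here is a rough sketch of what I mean by exporting detections: one JSON line per frame written to stdout, so another process can read it from a pipe. The `emit_detections` helper and its field names are placeholders I made up, not rpi-deep-pantilt's actual API.

```python
import json
import sys

def emit_detections(frame_id, boxes, labels, scores):
    """Write one JSON line per frame to stdout so another process can read it
    from a pipe, e.g.: rpi-deep-pantilt detect | python my_consumer.py"""
    record = {
        "frame": frame_id,
        "detections": [
            {
                "label": str(label),
                "score": float(score),
                # box as [ymin, xmin, ymax, xmax] in normalized image coordinates
                "box": [float(c) for c in box],
            }
            for box, label, score in zip(boxes, labels, scores)
        ],
    }
    sys.stdout.write(json.dumps(record) + "\n")
    sys.stdout.flush()  # flush per frame so the consumer sees detections immediately

# Example with made-up values, just to show the output format:
emit_detections(0, [[0.1, 0.2, 0.5, 0.6]], ["bird"], [0.87])
```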

Other than that, I'd like to trigger some action if a specific object is detected with a probability above a threshold (>xx%); for example, if a bird is detected, trigger a deterrent system.
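
For the trigger part, something along these lines is what I imagine; the GPIO pin number, the score threshold, and the idea of switching a relay via RPi.GPIO are my own assumptions for illustration, not anything the project currently provides.

```python
import RPi.GPIO as GPIO

DETERRENT_PIN = 17       # hypothetical BCM pin wired to a relay driving the deterrent
SCORE_THRESHOLD = 0.70   # only react to sufficiently confident detections

GPIO.setmode(GPIO.BCM)
GPIO.setup(DETERRENT_PIN, GPIO.OUT, initial=GPIO.LOW)

def maybe_trigger(label, score):
    """Drive the deterrent output high when a bird is detected above the threshold."""
    if label == "bird" and score >= SCORE_THRESHOLD:
        GPIO.output(DETERRENT_PIN, GPIO.HIGH)
    else:
        GPIO.output(DETERRENT_PIN, GPIO.LOW)
```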

Thank you very much for the great project!
