This repository has been archived by the owner on Jan 3, 2023. It is now read-only.
FFmpeg video analytics release v0.4
Pre-release
This release contains the FFmpeg* Video Analytics plugins, which bring deep learning inference capabilities to the open-source FFmpeg* framework and help developers build highly efficient, scalable video analytics applications.
This release targets the following platforms:
- Server platforms with Intel® Xeon™ CPUs and Linux* OS.
- Desktop platforms with Intel® Core™ CPUs with integrated graphics and Linux* OS.
- Linux* OS platforms with the Intel® Movidius™ Neural Compute Stick and Intel® Movidius™ Neural Compute Stick 2.
- VCAC-A accelerator card.
New in This Release
- Migrate to the FFmpeg v4.2 release.
- Support the OpenVINO™ 2019 R3 release and above.
- Adopt the official Inference Engine (IE) C API and use the inference-request callback mechanism.
- Support the person re-identification model.
- Refine the "metaconvert" filter to convert the inference results in the AVFrame's side data to a consolidated metadata format.
- Replace the "iemetadata" muxer with "metapublish", which muxes JSON metadata to a file or to Kafka® streams.
- Add an option to pre-initialize filters before the pipeline starts, avoiding the "first frame waits for initialization" delay in scenarios such as RTSP streaming; this lowers pipeline latency and avoids possible real-time streaming corruption.
- Support drawing more output information as an overlay on the original video through OpenCV.
- Add a C++ sample demonstrating how to use the FFmpeg API together with the video analytics filter plugins.
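As a hedged sketch of how the filter plugins above might be combined on an ffmpeg command line: the "metaconvert" filter and "metapublish" muxer are named in these notes, while the "detect" filter name, its option syntax, and the model/file paths are assumptions for illustration only — adjust them to match your installation.

```shell
# Illustrative pipeline (assumed filter/option names and paths):
# run a detection model on each frame, convert the inference side data
# to the consolidated metadata format, and publish it as JSON to a file.
ffmpeg -i input.mp4 \
       -vf "detect=model=face-detection.xml,metaconvert" \
       -an -f metapublish output.json
```

The same "metapublish" output could instead target a Kafka® stream, per the muxer description above; the exact URL syntax for that is not specified in these notes.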
Known Issues/Limitations
This release is subject to the following limitations:
- When running a pipeline with GPU-accelerated (Gen) hardware decoding and inference on the GPU, a segmentation fault may occur. (This issue does not exist with OpenVINO™ 2019 R1.)
- Some models from the Open Model Zoo (for example, vehicle-license-plate-detection-barrier-0106.xml and license-plate-recognition-barrier-0001.xml) do not support the batch-size setting. This can be worked around by setting the batch size to 1.
- When GPU-accelerated hardware decoding is enabled on the ffmpeg command line, an error may be reported that no hardware surface is available. This can be worked around by:
  - Setting appropriate "-extra_hw_frames" and "nireq" values for each inference filter.
  - Setting "-threads 1" to disable multi-threaded decoding.
- The inference output supports a limited number of pre-defined metadata formats for use cases including object detection, emotion, age, gender, and license plates; the format may not be identical to that of earlier versions.
- There is a memory leak when running inference on the GPU with the OpenVINO™ 2019 R3 release.
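The hardware-surface workaround above might look like the following on the command line — a sketch only: "-threads 1", "-extra_hw_frames", and "nireq" are named in these notes, but the "detect" filter name, its option syntax, the VAAPI accelerator choice, and the specific values are assumptions.

```shell
# Illustrative workaround (assumed filter syntax and values):
# decode with a single thread, reserve extra hardware surfaces for the
# decoder, and match the inference filter's request count ("nireq").
ffmpeg -threads 1 -hwaccel vaapi -extra_hw_frames 24 -i input.mp4 \
       -vf "detect=model=model.xml:nireq=4" -f null -
```

Tuning "-extra_hw_frames" together with "nireq" matters because each in-flight inference request can hold a decoded surface, so too small a surface pool exhausts before requests complete.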
Release details are available in the attached release notes. Getting-started instructions are available on the Wiki or in the attached user guide.