
FFmpeg video analytics release v0.4

Pre-release
linxie47 released this 10 Jan 05:09 · 24 commits to ffmpeg4.2_va since this release

This release contains FFmpeg* Video Analytics plugins that bring deep learning inference capabilities to the open-source FFmpeg* framework and help developers build highly efficient and scalable video analytics applications.

This release is targeting the following platforms:

Server platforms with Intel® Xeon® CPU and Linux* OS.
Desktop platforms with Intel® Core™ CPU, integrated graphics, and Linux* OS.
Linux* OS platforms with Intel® Movidius™ Neural Compute Stick and Intel® Movidius™ Neural Compute Stick 2.
VCAC-A accelerator card.

New in This Release

  1. Migrate to the FFmpeg v4.2 release.
  2. Support the OpenVINO™ 2019 R3 release and above.
  3. Adopt the official Inference Engine (IE) C API and use the inference request callback mechanism.
  4. Support the person re-identification model.
  5. Refine the “metaconvert” filter to convert the inference results in AVFrame side data to a consolidated
    metadata format.
  6. Replace the “iemetadata” muxer with “metapublish” to mux JSON metadata to a file or Kafka* streams
    (see the sketch after this list).
  7. Add an option to pre-initialize filters before the pipeline starts, avoiding the “first frame waits for
    initialization” delay in scenarios such as RTSP streaming; this lowers pipeline latency and avoids
    possible real-time streaming corruption.
  8. Support drawing more inference output information as an overlay on the original video through OpenCV.
  9. Add a C++ sample to demonstrate how to use the FFmpeg API in addition to the video analytics filter plugins.
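The command line below is a minimal, hypothetical sketch of the new metadata flow: an inference filter writes results into AVFrame side data, “metaconvert” consolidates them into the JSON metadata format, and the “metapublish” muxer publishes that JSON to a file. The “detect” filter name and its options (model=, nireq=) are illustrative assumptions rather than confirmed syntax; consult the attached user guide for the exact filter and option names supported by this release.

    # Hypothetical sketch: inference filter -> metaconvert -> metapublish (JSON to file)
    ffmpeg -i input.mp4 \
        -vf "detect=model=face-detection-adas-0001.xml:nireq=2,metaconvert" \
        -an -f metapublish output.json

The same muxer is described as supporting Kafka* streams as an alternative output target; the exact Kafka* addressing syntax is documented in the user guide.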

Known Issues/limitations

This release is subject to the following limitations:

  1. When running a pipeline with Gen (GPU) hardware-accelerated decoding and inference on the GPU, a
    segmentation fault may occur (this issue does not exist with OpenVINO™ 2019 R1).
  2. Some models from the Open Model Zoo (for example, vehicle-license-plate-detection-barrier-0106.xml
    and license-plate-recognition-barrier-0001.xml) do not support the batch-size setting. This can be
    worked around by setting batch-size to 1.
  3. When GPU-accelerated hardware decoding is enabled on the ffmpeg command line, an error may be
    reported that no hardware surface is available. This can be worked around by (see the example
    command after this list):
    1. Setting appropriate “-extra_hw_frames” and “nireq” values for each inference filter.
    2. Setting “-threads 1” to disable multi-threaded decoding.
  4. The inference output supports a limited number of pre-defined metadata formats for use cases
    including object detection, emotion, age, gender, and license plate recognition; the format may not be
    identical to that of earlier versions.
  5. There is a memory leak when running inference on the GPU with the OpenVINO™ 2019 R3 release.
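The following is a hedged sketch of the workaround in item 3, applying both suggestions to a VAAPI-decoded pipeline. The -hwaccel, -extra_hw_frames, -threads, hwdownload, and format options are standard ffmpeg features; the “detect” filter name, its options, and the chosen values are illustrative assumptions and should be adjusted per the attached user guide.

    # Hypothetical sketch: reserve extra hardware surfaces and use single-threaded decoding
    ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 \
        -threads 1 -extra_hw_frames 16 -i input.mp4 \
        -vf "hwdownload,format=nv12,detect=model=face-detection-adas-0001.xml:nireq=4" \
        -f null -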

Release details are available in the attached release notes. Getting-started information is available on the Wiki or in the attached user guide.