-
Hi @briveira,

This is a really interesting question, and I can't come up with a really good solution to it. One idea would be to use a GStreamer pipeline in the camera to produce an RTSP stream. That stream would, however, be separate from the camera's standard streams. The standard streams do use GStreamer in all cameras, so the result would be relatively similar. If you need metadata etc. exactly as it comes from the camera, you can have a GStreamer pipeline that reads video, audio and metadata from the camera's RTSP streams, applies any graphics overlay and audio modifications needed, and then re-streams the result on a new RTSP stream.

What kind of synchronization are you trying to achieve? Is it within a single camera, or are you trying to sync multiple cameras with each other? If it is synchronization within a single stream, do you really need to run those tests against a camera? It might be easier to produce similar streams locally.

For inter-stream synchronization, I think there are two primary things you need to consider:

- Timestamps are set from the camera's local time; even with NTP sync this can differ quite a lot, especially if the network is unpredictable. I'm not sure at which point the timestamps are set in the stream: if they are set in the kernel module they should be relatively accurate, but if they are set in userspace they might jitter depending on the system load of the camera, etc.
- If you are writing your own data to the buffers, e.g. with graphics overlays, you need to consider that the buffers might already be a bit late by the time you get them, so using the system time when you write to the buffer will likely not help you much.
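To make the "pull, overlay, re-stream" idea above concrete, here is a minimal sketch that assembles a gst-launch-style pipeline description: it overlays a wall-clock timestamp on the video and mixes a periodic tick/beep (from `audiotestsrc`) into the audio before pushing everything to a new RTSP endpoint. The URLs are placeholders, and whether elements like `avdec_h264`, `x264enc` and `rtspclientsink` are actually present on a given camera's firmware is an assumption — you would need to check against the camera's GStreamer installation.

```python
# Hedged sketch: build a GStreamer pipeline description for re-streaming a
# camera's RTSP feed with a clock overlay on the video and audible "ticks"
# mixed into the audio. The URLs below are hypothetical placeholders.

CAMERA_URL = "rtsp://camera.local/axis-media/media.amp"  # placeholder source
TARGET_URL = "rtsp://localhost:8554/synctest"            # placeholder sink

def build_pipeline(camera_url: str, target_url: str) -> str:
    """Assemble a gst-launch-1.0 style description. Assumes an H.264 video
    track and an AAC audio track in the source stream; adjust depayloaders
    and codecs to match the actual camera configuration."""
    video = (
        f"rtspsrc location={camera_url} name=src "
        # Decode, stamp the current wall-clock time onto the frames,
        # then re-encode for streaming.
        "src. ! rtph264depay ! h264parse ! avdec_h264 "
        '! clockoverlay time-format="%H:%M:%S" '
        "! x264enc tune=zerolatency ! sink."
    )
    audio = (
        # Decode the camera audio and mix in a periodic tick tone that can
        # be lined up against the on-screen clock for sync measurements.
        "src. ! rtpmp4gdepay ! aacparse ! avdec_aac ! audioconvert ! mix. "
        "audiotestsrc wave=ticks ! audioconvert ! mix. "
        "audiomixer name=mix ! audioconvert ! avenc_aac ! sink."
    )
    return f"{video} {audio} rtspclientsink name=sink location={target_url}"

if __name__ == "__main__":
    # The printed description could be handed to gst-launch-1.0 or
    # Gst.parse_launch(), assuming the named elements are available.
    print(build_pipeline(CAMERA_URL, TARGET_URL))
```

The key design point is that both the overlay and the beep are inserted in one pipeline, so they share the same clock and stay aligned in the re-streamed output.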
-
Hi there!
We are planning to implement a small ACAP app that lets us debug/test our existing remote video and audio synchronization system.
We are trying to implement something like https://www.youtube.com/watch?v=ucZl6vQ_8Uo running in-camera.
We know we can use a simple date+time overlay (using just VAPIX) or the ACAP Cairo support to implement graphics overlays in the camera, but we would also need to insert the sync beeps (or something similar) into the audio output stream, so that both changes (the graphics overlay and the "audio overlay") are present in the RTSP stream.
Does anyone know if this can actually be done? The AxAudio API seems of no use here: playing audio on the camera's output and picking it back up through the microphone input would do the trick, but that would be quite invasive to do on remote cameras and would only work with some camera/hardware setups.
Of course, playing a pre-rendered video with audio (like the one on YouTube above) from the SD card over the main RTSP streams would also do the trick (it would just need an SD card installed). Maybe something like VAPIX's /axis-cgi/videooutput/addsequenceelement.cgi?= would allow that for video, but we would also need audio :(
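If serving a pre-rendered clip over RTSP turns out to be an option, one possible route is gst-rtsp-server, where a media factory is configured with a launch string that demuxes the file and payloads both tracks. This is only a sketch under assumptions: the clip path is a placeholder, the clip is assumed to contain H.264 video and AAC audio, and whether gst-rtsp-server can be bundled into an ACAP app at all is an open question.

```python
# Hedged sketch: the launch string a GstRTSPMediaFactory would need to
# serve a pre-rendered sync clip (video + beeps) with both tracks.
# The file path is a hypothetical placeholder.

def build_launch_description(clip_path: str) -> str:
    """Factory launch string: demux the MP4 clip, then payload the H.264
    video as pay0 and the AAC audio as pay1 (the pay%d naming is what
    gst-rtsp-server uses to map elements to RTP streams)."""
    return (
        f"( filesrc location={clip_path} ! qtdemux name=d "
        "d.video_0 ! h264parse ! rtph264pay name=pay0 pt=96 "
        "d.audio_0 ! aacparse ! rtpmp4gpay name=pay1 pt=97 )"
    )

if __name__ == "__main__":
    # With PyGObject and gst-rtsp-server available, this string would be
    # passed to GstRTSPMediaFactory.set_launch() on a GstRTSPServer.
    print(build_launch_description("/var/spool/storage/SD_DISK/sync.mp4"))
```

Because video and audio come from the same container file, their timestamps are already aligned, which sidesteps the "audio overlay" problem entirely.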
thanks in advance