Selection of camera for Phase 2 #1
Comments
As well as the webcam utility, GoPro has an open-source library for communicating with a GoPro, including some demos: https://github.com/gopro/OpenGoPro
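For reference, controlling a recent GoPro from a script is mostly plain HTTP. A minimal sketch, assuming the PC has joined the camera's WiFi access point (where the camera answers on 10.5.5.9:8080 per the Open GoPro documentation) and that the shutter endpoints are unchanged in the firmware in use:

```python
# Sketch only: start/stop a GoPro recording via the Open GoPro HTTP API.
# The base address and endpoint paths below follow the Open GoPro docs for
# WiFi AP mode; verify against the API version your camera firmware supports.
import urllib.request

BASE = "http://10.5.5.9:8080"  # default address when joined to the camera's WiFi AP

def shutter_url(start: bool) -> str:
    """Build the Open GoPro shutter-control URL."""
    return f"{BASE}/gopro/camera/shutter/{'start' if start else 'stop'}"

def set_shutter(start: bool, timeout: float = 5.0) -> int:
    """Send the command to the camera; returns the HTTP status (200 on success)."""
    with urllib.request.urlopen(shutter_url(start), timeout=timeout) as resp:
        return resp.status

if __name__ == "__main__":
    print(shutter_url(True))
```

The same API family exposes media-list and file-download endpoints, so a test rig could in principle script the whole record/retrieve cycle, subject to the streaming and framerate limitations discussed below.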
This is also relevant: https://www.youtube.com/watch?v=PCgB2Y5girY There's a lengthy discussion in the comments about a special mode for indoor use on a tripod, where all sorts of processing is turned off to avoid overheating.
This is also relevant: https://gopro.com/en/rs/news/guide-to-hero10-black-frame-rates
And another interesting article: https://camerajabber.com/buyersguides/which-cameras-shoot-4k-at-120fps/ The ones I consider affordable in a WAVE context are:
- GoPro Hero10 Black
- Kandao QooCam 8K ("Vloggers will also appreciate the QooCam 8K's 3.5mm mic port, making it one of the few 360 cameras to offer one.")
Here are some questions for people to consider ....
GoPro does not support high-frame-rate streaming; it only supports 30fps and 60fps.
I don't think a consumer product will cut it. GoPro can only manage high frame rates when recording to internal storage.
@pmaness That's a really great reference. Having read the article, if you search for "USB 3.0 Vision" you can find lots of industrial cameras.
When we looked into this a while ago, our concern with the separate audio and video capture paths approach was this: if the sync signalling is output by the device under test, and the device is out of sync from the start, then any automatic alignment of the video and audio inputs will effectively "fix" the synchronisation, leading to a false result. I guess you could introduce a manual step at the start of any session that includes AV sync tests, and require the user to manually confirm the AV sync of the test signal before the actual tests are run.

Our other concern was that if the AV was in sync at the start but then drifted out during a test session, could we be sure that this was due to the device and not to the test rig set-up, the input processing of the audio and video signals on the PC, and so on?

The aim of using a camera was to ensure that the recorded audio and video were synchronised at the point of capture. Part of the camera selection process would be to record a known "good" test media file and manually verify that the candidate camera model successfully synchronised the AV over a lengthy recording. However, if there is a reliable way of using industrial cameras, then they would be an option.
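The manual verification step described above could be semi-automated. A minimal sketch, assuming a hypothetical "flash plus beep" test clip and deliberately naive event detection (brightest frame, loudest sample); a real rig would need robust detection and repeated measurements over a long recording to catch drift:

```python
# Sketch only: estimate AV offset from a recorded test clip containing a
# single flash (video) and beep (audio). Event detection here is naive
# (brightest frame / loudest sample); it is illustrative, not the actual
# verification procedure used by the project.

def av_offset_ms(frame_brightness, fps, audio_samples, sample_rate):
    """Return audio-minus-video event offset in milliseconds.

    frame_brightness: per-frame mean luminance of the recorded clip
    audio_samples:    absolute amplitude per audio sample
    """
    flash_frame = max(range(len(frame_brightness)), key=frame_brightness.__getitem__)
    beep_sample = max(range(len(audio_samples)), key=audio_samples.__getitem__)
    video_t = flash_frame / fps
    audio_t = beep_sample / sample_rate
    return (audio_t - video_t) * 1000.0
```

Running this at the start and end of a lengthy recording would show both the initial offset and any drift introduced by the capture chain.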
Choosing a camera depends on the requirements; details can be found at https://github.com/cta-wave/dpctf-deploy/wiki/Choosing-a-Camera.
Here are a few comments for your consideration ...
Thanks @jpiesing for the comments. |
The row for streaming or local recording is IMHO still confusing. I was expecting something like this:
The intention is for Phase 2 to be a more automated approach and to include audio and AV sync testing. This places more stringent requirements on the camera used for the AV capture. The options today seem very limited.
Minimum criteria used are:
1. The camera can be controlled (recording started/stopped, files retrieved) from a PC via a publicly documented API.
2. The camera can record at 100/120fps.
3. The camera can focus sharply at short distances, including on small, low-resolution displays such as phone screens.
4. The camera captures audio, synchronised with the video.
Point (1) rules out a number of manufacturers altogether, as they only support camera control from their own proprietary applications.
Point (2) narrows the range of possible models.
Experimentation with a GoPro camera failed on point (3). Recordings of large-screen TVs were acceptable, but low-resolution content presented on phones produced blurred images, and QR code identification failed. We believe this is because the lenses and sensors in such "action" cameras do not allow sufficiently sharp focussing at the required short distances.
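One way to screen a candidate camera for point (3) before attempting full QR decoding is a simple objective focus metric on captures of a phone screen. A sketch using the variance-of-Laplacian sharpness measure; the measure itself is standard, but any pass/fail threshold would have to be calibrated per rig, and this is illustrative rather than part of the actual selection process:

```python
# Sketch only: variance-of-Laplacian focus measure. Defocused captures of a
# small screen score much lower than sharp ones, which correlates with the
# QR decode failures described above.
import numpy as np

def focus_measure(gray: np.ndarray) -> float:
    """Variance of the 4-neighbour Laplacian of a grayscale image."""
    g = gray.astype(float)
    lap = (-4 * g[1:-1, 1:-1]
           + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    return float(lap.var())
```

Comparing scores for the same test pattern captured at the working distance would quickly show whether a candidate camera can resolve phone-sized QR codes at all.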
Point (4) appears to rule out manufacturing/surveillance-type cameras.
Also, a number of candidate cameras from e.g. Canon and Blackmagic turned out either to not capture audio at all when recording at 100/120fps, or to not capture synchronised AV (the typical use case for high-framerate capture is playback at lower framerates to give smooth slow motion).
An exception to this is Sony, which classes 200+fps as high framerate and 100/120fps as standard recording, i.e. with synchronised AV.
However, there is currently a limitation with Sony. They used to have a camera control API (which is still supported by some models), but this has now been archived and is receiving no further support (https://developer.sony.com/develop/cameras/#overview-content).
In 2020 Sony announced a new Camera Remote SDK (https://support.d-imaging.sony.co.jp/app/sdk/en/index.html). This is available for both Linux and Windows and is free to use. However, because it is so new, currently only four camera models support it. The cheapest of these, which on paper appears to meet all our criteria, is the Sony ILCE-7C at around $2,100 including lens (https://www.sony.com/electronics/interchangeable-lens-cameras/ilce-7c).
We believe this situation will resolve itself in time. It seems reasonable to expect that as manufacturers release more advanced models there will be wider support for 100/120fps AV recording, and that Sony will release newer models supporting their new SDK.
We will be monitoring the situation to assess what is being released. We would also welcome any suggestions/recommendations from members of the group.