It's great to find a repository that reproduces that paper, and to see such a detailed report and presentation. Thanks a lot!
May I ask whether you have tried replacing the SVM with a deep learning network? I would also like to ask whether detection on a single frame really takes about 3 minutes. On what hardware was this measured? It feels a bit too slow.
Hi!
I can confirm that the time required for the full analysis of a frame, after the eye-blinking phase, is around 3 minutes.
The project was running on a laptop with the following hardware:
- Intel i7 (5th generation)
- NVIDIA GTX 960M
- 8 GB RAM
Please note that the majority of the time is spent computing the descriptors. That computation is based mostly on code written from scratch, without relying on external libraries, so significant speed-ups could probably be obtained by switching to more efficient library implementations.
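For illustration only, here is a minimal sketch of what moving descriptor computation to an optimized library could look like. It assumes HOG-like descriptors and uses scikit-image; the descriptor type, parameters, and function names are assumptions, not the code actually used in this repository.

```python
# Hypothetical sketch: computing a per-frame descriptor with an optimized
# library (scikit-image) instead of a from-scratch implementation.
# The descriptor type (HOG) and its parameters are illustrative assumptions.
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import hog

def compute_frame_descriptor(frame_rgb: np.ndarray) -> np.ndarray:
    """Return a single feature vector for one video frame."""
    gray = rgb2gray(frame_rgb)          # library-optimized grayscale conversion
    return hog(
        gray,
        orientations=9,                 # assumed parameter values
        pixels_per_cell=(8, 8),
        cells_per_block=(2, 2),
        feature_vector=True,
    )

# Usage on a dummy 480x640 RGB frame:
frame = np.random.rand(480, 640, 3)
features = compute_frame_descriptor(frame)
print(features.shape)
```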
As the report states, we only used an SVM; unfortunately, we did not pursue the neural-network approach.
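If someone wanted to experiment with that, one low-effort option would be to keep the existing descriptors and simply swap the classifier. The sketch below shows the idea with scikit-learn; the feature dimensionality, labels, and hyperparameters are made-up placeholders, not values from the project.

```python
# Hypothetical sketch: replacing the SVM with a small neural network (MLP)
# trained on the same descriptor vectors. All sizes and labels below are
# illustrative assumptions, not the project's actual data.
import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1764))    # 200 frames x assumed descriptor length
y = rng.integers(0, 2, size=200)    # binary labels (e.g. eyes open / closed)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

svm = SVC(kernel="rbf").fit(X_train, y_train)             # current approach
mlp = MLPClassifier(hidden_layer_sizes=(128, 32),
                    max_iter=500).fit(X_train, y_train)   # possible replacement

print("SVM accuracy:", svm.score(X_test, y_test))
print("MLP accuracy:", mlp.score(X_test, y_test))
```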