Using the pre-trained models #11
Comments
Bump. I know there is a photo detection demo, but I also want to know how to feed in videos.
@akiratsuraii would you point me towards the demo, please? I can't seem to find it either.
ME-GraphAU/OpenGraphAU/demo |
I'm interested in videos too.
You might want to check out my repo — I have implemented ME-GraphAU on video in my project. No changes were made to the model itself, only minor refactoring; it uses their model to predict each frame while reading the video.
https://github.com/Andreas-UI/ME-GraphAU-Video

Appreciate it, brother. I'm going to test it soon.
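The per-frame approach described above (read the video, run the pretrained model on each frame) can be sketched as follows. This is a minimal sketch, not ME-GraphAU's actual API: `predict` is a hypothetical stand-in for whatever call runs the pretrained AU model on one frame, and the commented OpenCV lines show one common way to turn a video file into a frame iterator.

```python
def predict_per_frame(frames, predict, every_n=1):
    """Run `predict` on every n-th frame of an iterable of frames.

    `frames`  - any iterable yielding frames (e.g. numpy arrays from OpenCV)
    `predict` - hypothetical callable mapping one frame to AU predictions
    `every_n` - sample rate: 1 = every frame, 2 = every other frame, etc.
    Returns a list of (frame_index, prediction) pairs.
    """
    results = []
    for idx, frame in enumerate(frames):
        if idx % every_n == 0:
            results.append((idx, predict(frame)))
    return results

# One way to feed a video file in (requires `pip install opencv-python`):
#
#   import cv2
#   def video_frames(path):
#       cap = cv2.VideoCapture(path)
#       while True:
#           ok, frame = cap.read()
#           if not ok:
#               break
#           yield frame
#       cap.release()
#
#   aus = predict_per_frame(video_frames("input.mp4"), model_predict)
```

Sampling every n-th frame (`every_n > 1`) is often enough for AU analysis and cuts inference cost proportionally, since facial expressions change slowly relative to typical frame rates.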
Hello,
I just wanted to ask if you could provide some instructions on how to use the pre-trained models on new videos to extract the action units.
I would like to feed in videos and output the action units. Do I need to retrain the model? What format should the input be in? Is there already a script you used to take a video as input and output the action units?