about bvh #50
Thanks for asking.
There is another solution called VIBE that has a script to convert PKL to FBX files: https://github.com/mkocabas/VIBE#fbx-and-gltf-output-new-feature I was planning to try to adapt their solution to work with FrankMocap. Anyway, if somebody has more time and courage than me... feel free to try 😊
Yesterday I was able to make it work. I could use the PKL files created by FrankMocap (body only) and, using an altered version of the VIBE FBX output script, create an FBX file with the movement: eXVKp6DnN6.mp4 To do that, I followed the instructions and then created another file with some alterations that make it possible to use the FrankMocap information. The file I created with these changes is here.
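To make the comment above concrete, here is a minimal sketch of the first half of that pipeline: gathering per-frame SMPL parameters from FrankMocap's body-only PKL output so they can be fed into a VIBE-style fbx_output script. The file pattern and key names (`pred_output_list`, `pred_body_pose`, `pred_betas`) follow the FrankMocap demo output and may differ between versions:

```python
# Hedged sketch: collect per-frame SMPL pose parameters from FrankMocap's
# body-only PKL output. Key names follow the FrankMocap demo output and
# may differ between versions.
import glob
import pickle

import numpy as np

def load_frankmocap_poses(pkl_dir):
    poses, betas = [], []
    for path in sorted(glob.glob(f"{pkl_dir}/*_prediction_result.pkl")):
        with open(path, "rb") as f:
            data = pickle.load(f)
        person = data["pred_output_list"][0]                 # first detected person
        poses.append(person["pred_body_pose"].reshape(72))   # 24 joints x 3 (axis-angle)
        betas.append(person["pred_betas"].reshape(10))       # SMPL shape coefficients
    return np.stack(poses), np.stack(betas)

# The stacked poses can then stand in for VIBE's 'pose' array inside an
# adapted copy of its fbx_output.py script.
```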
@carlosedubarreto Hello! I would like to try to develop the converter for SMPL-X: do you know of any project that uses some kind of transformation from PKL to FBX that I can base myself on? Any idea where to start? I appreciate the help in advance.
Hello @italosalgado14 I used their code as a reference for a Blender addon, and it's the one that works best. If you'd like to see the code I converted, you can see it here. If you use Blender, you can get the ready-to-install version here (and the most up-to-date one, as I don't remember whether the GitHub version is current). If you want to see some results, my Patreon has some videos of my tests (I also have them on YouTube, but Patreon is easier to search because of the tags :)
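For anyone reading along, the core of such a Blender conversion is just keyframing the axis-angle rotations onto an imported SMPL armature. A rough sketch under stated assumptions: the armature name and the bone-name list are placeholders you would adjust to match the FBX you actually import:

```python
# Hedged sketch: keyframe one frame of SMPL axis-angle pose data onto an
# imported SMPL armature inside Blender. Armature and bone names are
# assumptions; match them to your own rig.
import bpy
from mathutils import Quaternion, Vector

SMPL_BONES = ["Pelvis", "L_Hip", "R_Hip", "Spine1"]  # ... 24 joints in SMPL order

def apply_pose(armature_name, pose_72, frame):
    arm = bpy.data.objects[armature_name]
    for i, bone_name in enumerate(SMPL_BONES):
        aa = Vector(pose_72[3 * i: 3 * i + 3])        # axis-angle for joint i
        angle = aa.length
        axis = aa.normalized() if angle > 1e-8 else Vector((1.0, 0.0, 0.0))
        bone = arm.pose.bones[bone_name]
        bone.rotation_mode = "QUATERNION"
        bone.rotation_quaternion = Quaternion(axis, angle)
        bone.keyframe_insert("rotation_quaternion", frame=frame)
```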
Hi @carlosedubarreto man... Do you have any ideas or tips for doing this with the hand mocap output, i.e. for the hands?
I started to try last year, but I stopped, because I was not going to keep using FrankMocap. EasyMocap takes longer to process and you have to do intrinsic and extrinsic calibration, but in the end I liked its results more. So I did the finger tracking for EasyMocap 😊
Here are some examples where I retargeted the mocap data without tweaking it (that is, raw):
- another raw animation example
- finger tracking using 2 low-quality cameras (probably the reason the finger tracking is not so good)
- another test with raw data
- a test with 5 low-quality cameras
And if you are interested, I made a pack to use with Blender. It's free. Here are some tutorials on how to use it:
- the fast tutorial (15 minutes)
- a tip on how to quickly fix the extrinsic calibration data
- a more detailed video on how to use the Blender addon with EasyMocap
Hope it's useful 😊
Hello @carlosedubarreto, looking at the links you provided, as you mentioned, those were using EasyMocap. But EasyMocap requires a multi-camera setup and much more robust camera calibration. What do you think about this tradeoff?
Hello @lakpa-tamang9 My main problem ATM is that EasyMocap needs lots of time (it takes long to process the data) and HD space. I don't remember how much time FrankMocap needs to process, but I'm almost sure it's much faster than EasyMocap. That is the biggest tradeoff, IMHO.
@carlosedubarreto What about the multi-camera setup? In real practice, for a real-time application, there might not be a scenario where you can create a multiple-camera setup.
I don't see a multi-camera setup as a problem; it can use as many or as few cameras as you'd like. The big problems that I see are the ones I stated before. Multi-camera is actually a good thing for getting a precise 3D pose estimation: with a single camera the algorithm has to guess, and with multiple cameras it doesn't. So I prefer the certainty of multi-cam over the guessing of a single cam (I'm not sure the single-cam algorithms literally "guess", but that is the impression I've got).
@carlosedubarreto Yes, of course, multi-camera leads to precise pose estimation. However, several 3D mocap tools that work from 2D RGB videos have recently become popular.
Hello @lakpa-tamang9 From everything you've said, the only ones I haven't tested are Radical and move.ai. But move.ai needs multiple cameras, like EasyMocap. As for how they achieve this, I think it's because the human joint constraints are embedded in the algorithm. That's just something I thought, because I don't see another way to do it.
If you try MediaPipe, it has amazing 2D pose estimation, but when you try the 3D pose estimation, things get... very unstable (IMHO), and it seemed to me that this happens in the attempt to guess that third dimension. That's why I think that if a person is going to use an image-to-animation tool, there is no way to get an amazing result with a single-camera solution. I think you can get a very good result... but not a great one. For a great one, you would need at least 2 cameras so the computer can know for sure the position of points in real 3D space.
I say all of this based on my impressions of the tools I've come in contact with since I started this journey of trying to create animation from videos. I could be totally wrong in my thinking... and I would be glad to be wrong: things would be much simpler with single-camera solutions. I bought 4 cameras just for this work, and setting them up is so much work that I'm not actually using them, because I'd spend hours setting everything up to use the results of maybe 5 minutes of footage... 😓
But one more thing: even spending hours setting up and converting video to movement would be an advantage, because if I were to animate by hand, I wouldn't be able to create the animation even in dozens of hours. I tried it, and the result of my animation was not good 😊
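A small illustration of the "no guessing" point above: once the intrinsic/extrinsic calibration is done, a joint seen by two cameras can be triangulated to a unique 3D point, which a single view cannot do. A minimal sketch using OpenCV, where P1 and P2 are the 3x4 projection matrices obtained from calibration:

```python
# Hedged sketch: two-view triangulation of one joint. P1/P2 come from the
# intrinsic/extrinsic calibration step mentioned above; uv1/uv2 are the
# joint's pixel coordinates in each camera's image.
import cv2
import numpy as np

def triangulate_joint(P1, P2, uv1, uv2):
    pts1 = np.asarray(uv1, dtype=float).reshape(2, 1)
    pts2 = np.asarray(uv2, dtype=float).reshape(2, 1)
    X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)  # 4x1 homogeneous point
    return (X_h[:3] / X_h[3]).ravel()                # Euclidean 3D position
```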
@carlosedubarreto Hi Carl. For the previous video that outputs FBX from FrankMocap, can you tell me how to make the pose more neutral, rather than having the body's hip fixed at the center? (Just like a person on the ground.)
@jinfagang For example, take a look at this video using EasyMocap, and this one using MPP2SOS. I think they are much better than other offline solutions. And if you want an easier way, you can use these 2 solutions inside Blender. CEB Easymocap (it's free): https://carlosedubarreto.gumroad.com/l/ceb_easymocap_blender
Thanks for your excellent work. Can you provide the code for converting the training dataset to .bvh files?