Inochi2D Support #14

Open
LovelyA72 opened this issue Sep 1, 2024 · 6 comments

Comments

@LovelyA72

Inochi2D is a free (as in freedom) and open-source alternative to Live2D. It is currently under development but is already very capable of displaying 2D puppets.

Everything they do is open source here:
https://github.com/Inochi2D/

@t41372
Owner

t41372 commented Sep 2, 2024

The project looks quite interesting, but I noticed that there isn't an easy way to integrate it with the web, which is the frontend implementation used in Open-LLM-VTuber.

@ShadowMarker789

If you can emit blendshapes via VMC, Inochi-Session can ingest those to animate a puppet.

E.g.: [screenshot: blendshape values received via VMC]
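
To make that concrete, here is a minimal sketch (not from the original comment) of emitting blendshapes over VMC, which is just OSC over UDP, using the python-osc package. The host, port, and blendshape names are assumptions and have to match whatever the receiving application (e.g. Inochi-Session's VMC receiver) is configured to use:

```python
# Minimal sketch: send blendshape key/value pairs over VMC (OSC over UDP).
# Assumes the receiver (e.g. Inochi-Session) listens on 127.0.0.1:39539;
# adjust host/port to match its VMC receiver settings.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 39539)

def send_blendshapes(shapes: dict[str, float]) -> None:
    # One /VMC/Ext/Blend/Val message per blendshape: (name, float value) ...
    for name, value in shapes.items():
        client.send_message("/VMC/Ext/Blend/Val", [name, float(value)])
    # ... followed by /VMC/Ext/Blend/Apply to commit the accumulated frame.
    client.send_message("/VMC/Ext/Blend/Apply", [])

send_blendshapes({"jawOpen": 0.4, "mouthSmileLeft": 0.8, "mouthSmileRight": 0.8})
```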

@t41372
Owner

t41372 commented Sep 7, 2024

The current implementation of facial expressions with Live2D calls the pre-defined facial expressions shipped with the Live2D model. I might need to do some research on how to emit blendshapes.

@ShadowMarker789

> The current implementation of facial expressions with Live2D calls the pre-defined facial expressions shipped with the Live2D model. I might need to do some research on how to emit blendshapes.

Blendshapes emitted via VMC protocol are arbitrary key-value pairs, with most values being scalar floats.
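
On the ingest side, this is roughly what receiving those key/value pairs looks like; a sketch using python-osc, where the port is an assumption and the addresses are the standard VMC blendshape messages:

```python
# Sketch of a VMC blendshape receiver: accumulate (name, float) pairs from
# /VMC/Ext/Blend/Val and treat /VMC/Ext/Blend/Apply as the end of a frame.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

blendshapes: dict[str, float] = {}

def on_blend_val(address, name, value):
    blendshapes[name] = value

def on_blend_apply(address, *args):
    print(dict(blendshapes))  # one complete frame of key/value pairs

dispatcher = Dispatcher()
dispatcher.map("/VMC/Ext/Blend/Val", on_blend_val)
dispatcher.map("/VMC/Ext/Blend/Apply", on_blend_apply)

# Port is an assumption; it must match what the tracker sends to.
BlockingOSCUDPServer(("127.0.0.1", 39539), dispatcher).serve_forever()
```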

@t41372
Owner

t41372 commented Sep 9, 2024

Yeah, I can see that in your screenshot, but I don't have pre-defined values for facial expressions such as happy, sad, and so on. How do I get those values? Can I record them with some face-tracking software, or get them from somewhere else? I'm really new to these things...

Also, I don't know how to display them on my web frontend. This is actually the biggest problem for me. I think blendshapes wouldn't be too hard, but I have no idea how to display Inochi2D on the web.

@ShadowMarker789

> Yeah, I can see that in your screenshot, but I don't have pre-defined values for facial expressions such as happy, sad, and so on. How do I get those values? Can I record them with some face-tracking software, or get them from somewhere else? I'm really new to these things...
>
> Also, I don't know how to display them on my web frontend. This is actually the biggest problem for me. I think blendshapes wouldn't be too hard, but I have no idea how to display Inochi2D on the web.

Inochi2D is a bit more modular than Live2D in this way...

But Inochi2D itself is more akin to a standard (including a file format) that has example implementations. There is a Rust implementation in the works that could run on the web, but it's still in progress and isn't complete.

Examples of blendshapes can be found in Apple's ARKit documentation, with many folks using iPhones specifically for face tracking.

https://developer.apple.com/documentation/arkit/arblendshapelocationjawopen?language=objc
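
For reference, a partial list of the ARKit blendshape names (from Apple's ARBlendShapeLocation catalog; the full set is in the documentation above) that trackers following this convention typically emit:

```python
# Partial list of ARKit blendshape location names; trackers that follow the
# ARKit convention emit scalar floats keyed by these names.
ARKIT_BLENDSHAPES = (
    "eyeBlinkLeft", "eyeBlinkRight",
    "eyeLookUpLeft", "eyeLookDownLeft",
    "jawOpen",
    "mouthSmileLeft", "mouthSmileRight",
    "mouthFrownLeft", "mouthFrownRight",
    "browInnerUp", "browDownLeft", "browDownRight",
    "tongueOut",
)
```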

Now, the typical workflow is as follows:

  1. The face tracker tracks the face.
  2. The face tracker emits blendshapes via a protocol (e.g. VMC).
  3. These blendshapes are ingested by the animating program (e.g. Inochi2D-Session).
  4. Blendshapes are mapped to Parameters in the model, with various transformations applied.
    4.a. E.g. HeadLeftRight and HeadRoll can be combined into a BodyLean Parameter.
  5. The Puppet is animated by those Parameters and how they are configured to transform the puppet.

That said, you can simply push out values for Happiness (-1 .. 0 .. 1) and say it's Inochi2D's job to map this shape into puppet animation parameters.
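
As a hypothetical sketch of that last idea: the application could collapse a high-level emotion label into a single custom "Happiness" value in -1 .. 1 and push it over VMC, leaving the binding from that blendshape to puppet Parameters to the Inochi-Session side. The name "Happiness" and the emotion-to-value table below are made up for illustration:

```python
# Hypothetical sketch: map an emotion label to one custom "Happiness"
# blendshape (-1.0 .. 1.0) and push it over VMC; the mapping from this
# blendshape to puppet Parameters is configured in the animating program.
from pythonosc.udp_client import SimpleUDPClient

EMOTION_TO_HAPPINESS = {  # made-up example values
    "happy": 1.0,
    "neutral": 0.0,
    "sad": -1.0,
}

client = SimpleUDPClient("127.0.0.1", 39539)  # must match the VMC receiver

def push_emotion(emotion: str) -> None:
    value = EMOTION_TO_HAPPINESS.get(emotion, 0.0)
    client.send_message("/VMC/Ext/Blend/Val", ["Happiness", value])
    client.send_message("/VMC/Ext/Blend/Apply", [])

push_emotion("happy")
```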
