FaceMesh - Avatar puppeteering using blendshapes #1678
Hey @estiez-tho! As you correctly observed, our face blendshape prediction technology used for the GIFs is not open-sourced in MediaPipe yet. I'll defer the question of whether it'll be OSSed, and the timeline, to @chuoling and @mgyong
@estiez-tho Sorry, currently there are no plans to open source the blendshape tech.
@mgyong Is the output of the face_geometry pipeline only a rigid transformation of the canonical face mesh? Or does it contain nonlinear deformations such as mouth open-close or eye-blink motion?
Hey @Zju-George, at this point it's only a rigid transformation of the canonical face mesh. It is designed not to react to facial expression changes (like mouth opening/closing or eye blinking), only to head pose changes.
@kostyaby I see. Thank you for your reply!
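To make the "rigid transformation only" point concrete: the face geometry module outputs a 4x4 pose transform matrix, which can be split into a rotation and a translation and applied to the canonical mesh vertices. A minimal pure-Python sketch (the example matrix values are made up for illustration):

```python
def decompose_pose(m):
    """Split a 4x4 row-major pose transform matrix (like the one output
    by the face geometry module) into a 3x3 rotation and a translation."""
    rotation = [row[:3] for row in m[:3]]
    translation = [row[3] for row in m[:3]]
    return rotation, translation

def apply_rigid_transform(vertex, m):
    """Apply the rigid head-pose transform to one canonical-mesh vertex.
    Note there is no per-vertex deformation here, which is why this
    pipeline alone cannot express mouth-open or eye-blink motion."""
    rotation, translation = decompose_pose(m)
    return [
        sum(rotation[i][j] * vertex[j] for j in range(3)) + translation[i]
        for i in range(3)
    ]

# Example: identity rotation plus a translation of (0, 0, -10).
pose = [
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, -10.0],
    [0.0, 0.0, 0.0, 1.0],
]
moved = apply_rigid_transform([1.0, 2.0, 3.0], pose)  # → [1.0, 2.0, -7.0]
```

Every vertex moves by the same rotation and translation, so expressions never change, only the head pose.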
Could you give me a general idea of how to set the morph weight values using the facial landmarks? I know the basic method is to compute the difference between certain landmark positions, but too many landmarks change at once. Is there an algorithm to calculate this?
I am also having a similar problem, and this one looks promising (I haven't tried it yet though) |
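For reference, the "difference between certain landmarks" approach mentioned above can be sketched for a single morph target, e.g. an eye blink driven by the eye aspect ratio. The landmark indices below are commonly cited FaceMesh indices for the left eye, and the open/closed ratio thresholds are tuning assumptions — verify both against your own data:

```python
import math

# Assumed FaceMesh landmark indices for the left eye (commonly used
# values; verify against the canonical mesh before relying on them).
LEFT_EYE_TOP, LEFT_EYE_BOTTOM = 159, 145
LEFT_EYE_OUTER, LEFT_EYE_INNER = 33, 133

def eye_blink_weight(landmarks, open_ratio=0.28, closed_ratio=0.10):
    """Map the eye aspect ratio (lid gap / eye width) to a 0..1
    'eyeBlink' morph weight. The ratio thresholds are assumptions
    to be calibrated per face/camera setup."""
    gap = math.dist(landmarks[LEFT_EYE_TOP], landmarks[LEFT_EYE_BOTTOM])
    width = math.dist(landmarks[LEFT_EYE_OUTER], landmarks[LEFT_EYE_INNER])
    ratio = gap / width
    # Linearly interpolate between "fully open" and "fully closed",
    # then clamp to [0, 1].
    t = (open_ratio - ratio) / (open_ratio - closed_ratio)
    return min(1.0, max(0.0, t))

# Synthetic examples: an open eye and a nearly closed eye.
open_eye = {159: (0.0, 0.14), 145: (0.0, -0.14), 33: (-0.5, 0.0), 133: (0.5, 0.0)}
closed_eye = {159: (0.0, 0.05), 145: (0.0, -0.05), 33: (-0.5, 0.0), 133: (0.5, 0.0)}
w_open = eye_blink_weight(open_eye)      # ≈ 0.0
w_closed = eye_blink_weight(closed_eye)  # ≈ 1.0
```

Using a ratio rather than a raw distance makes the weight roughly invariant to head scale; each of the 52-style morph targets would need its own landmark pair(s) and calibration.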
For anyone looking for a plug-and-play blendshape SDK, you can get it here: https://joinhallway.com/
@tu-nv @wingdi |
I've implemented that solution and it outputs much less data than something like Hallway. I also found it not too reliable, but I could be wrong about that. Since I haven't been given access to the Hallway SDK yet, I went with mocap4face and it seems to be the best so far. |
@brunodeangelis mocap4face is shutting down its SDK.
Could you help me and share the mocap4face code? mocap4face is shutting down now.
It's been a few months, and I don't remember why but I didn't use mocap4face in the end. I used Hallway's desktop app which allows for OSC data streaming. That was the solution to my intended use case. |
Any other solution for the 52 blendshapes?
Did anyone get an update on this issue?
|
NOTE: the new MediaPipe Tasks vision API supports blendshapes natively.
I'll continue to track this issue here.
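For completeness, the MediaPipe Tasks FaceLandmarker exposes the blendshape scores directly when `output_face_blendshapes` is enabled. Below is a small helper plus a commented usage sketch; the model-bundle path and image filename are assumptions, and running the real pipeline requires `pip install mediapipe` and the `face_landmarker.task` model bundle:

```python
def blendshapes_to_dict(categories):
    """Flatten a list of blendshape Category objects (each carrying a
    .category_name and a .score in [0, 1]) into a name -> score dict
    that can drive an avatar's morph targets directly."""
    return {c.category_name: c.score for c in categories}

# Typical usage with the MediaPipe Tasks FaceLandmarker (sketch only):
#
#   import mediapipe as mp
#   from mediapipe.tasks import python as mp_python
#   from mediapipe.tasks.python import vision
#
#   options = vision.FaceLandmarkerOptions(
#       base_options=mp_python.BaseOptions(model_asset_path="face_landmarker.task"),
#       output_face_blendshapes=True,
#   )
#   landmarker = vision.FaceLandmarker.create_from_options(options)
#   result = landmarker.detect(mp.Image.create_from_file("face.jpg"))
#   scores = blendshapes_to_dict(result.face_blendshapes[0])
#   # scores["jawOpen"], scores["eyeBlinkLeft"], ... are floats in [0, 1]
```

The category names follow the familiar ARKit-style blendshape naming, so the dict can be fed to most avatar rigs with little remapping.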
Hi,
I've recently experimented with the FaceMesh and face geometry modules. I'm trying to implement a blendshape-based model in order to control virtual avatars (like the Animojis on the iPhone X, for instance).
I've come across this Google AI blog article presenting the MediaPipe Iris detection module, in which such an avatar is presented.
I have also found this paper (written by Google engineers in June 2020), which describes the model used in the FaceMesh module and mentions that a blendshape model is used to control said avatar (page 3, in the puppeteering section).
I was wondering if this blendshape model will be released any time soon, and if there are any resources to understand the model used. Also, what are the blendshapes used for this model?
Thanks in advance.