
FaceMesh - Avatar puppeteering using blendshapes #1678

Closed
estiez-tho opened this issue Feb 28, 2021 · 18 comments
Labels
legacy:face mesh Issues related to Face Mesh type:feature Enhancement in the New Functionality or Request for a New Solution type:research Model specific questions

Comments

estiez-tho commented Feb 28, 2021

Hi,
I've recently been experimenting with the FaceMesh and Face Geometry modules. I'm trying to implement a blendshape-based model in order to control virtual avatars (like the Animojis on the iPhone X, for instance).
I've come across this Google AI blog article presenting the MediaPipe Iris detection module, in which such an avatar is presented.

I have also found this paper (written by Google engineers in June 2020), which describes the model used in the FaceMesh module and mentions that a blendshape model is used to control said avatar (page 3, in the puppeteering section).

I was wondering if this blendshape model will be released any time soon, and whether there are any resources for understanding the model used. Also, what blendshapes are used for this model?

Thanks in advance.

@sgowroji sgowroji self-assigned this Mar 1, 2021
@sgowroji sgowroji added type:research Model specific questions type:support General questions labels Mar 1, 2021
@rmothukuru rmothukuru added the legacy:face mesh Issues related to Face Mesh label Mar 1, 2021
@sgowroji sgowroji added stat:awaiting response Waiting for user response type:feature Enhancement in the New Functionality or Request for a New Solution and removed type:support General questions stat:awaiting response Waiting for user response labels Mar 3, 2021
@sgowroji sgowroji assigned kostyaby and unassigned sgowroji Mar 4, 2021
@sgowroji sgowroji added the stat:awaiting googler Waiting for Google Engineer's Response label Mar 4, 2021
kostyaby commented Mar 4, 2021

Hey @estiez-tho!

As you correctly observed, our face blendshape prediction technology used for the GIFs is not open-sourced in MediaPipe yet. I'll defer the question of whether it'll be open-sourced, and the timeline, to @chuoling and @mgyong.

@kostyaby kostyaby assigned mgyong and chuoling and unassigned kostyaby Mar 4, 2021
mgyong commented Mar 10, 2021

@estiez-tho Sorry, there are currently no plans to open-source the blendshape tech.

@mgyong mgyong closed this as completed Mar 10, 2021
@sgowroji sgowroji removed the stat:awaiting googler Waiting for Google Engineer's Response label Mar 15, 2021
@Zju-George

@mgyong Is the output of the face_geometry pipeline only a rigid transformation of the canonical face mesh? Or does it contain nonlinear deformations such as mouth open/close or eye-blink motion?

@kostyaby

Hey @Zju-George,

At this point, it's only a rigid transformation of the canonical face mesh. It is designed not to react to facial expression changes (like mouth opening/closing or eye blinking), only to head pose changes.
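
For readers wiring this output into a renderer, here is a minimal sketch of how such a rigid transform is typically consumed; the (N, 3) canonical-vertex array and the 4x4 row-major pose matrix are assumptions for illustration, not the exact MediaPipe Face Geometry API.

```python
import numpy as np

def apply_pose(canonical_vertices: np.ndarray, pose_matrix: np.ndarray) -> np.ndarray:
    """Apply a rigid 4x4 pose transform to canonical face mesh vertices.

    canonical_vertices: (N, 3) array of canonical face mesh positions (assumed layout).
    pose_matrix: 4x4 row-major transform (rotation + translation, possibly
        uniform scale) of the kind a face-geometry style pipeline estimates.
    """
    n = canonical_vertices.shape[0]
    homogeneous = np.hstack([canonical_vertices, np.ones((n, 1))])  # (N, 4)
    transformed = homogeneous @ pose_matrix.T                       # (N, 4)
    return transformed[:, :3]  # every vertex moves rigidly; no expression change
```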

@Zju-George

@kostyaby I see. Thank you for your reply!

wingdi commented Nov 9, 2021

@mgyong

Hi, these are my 3D face model's morph targets:
"targetNames" : [
                    "Face.M_F00_000_00_Fcl_ALL_Neutral",
                    "Face.M_F00_000_00_Fcl_ALL_Angry",
                    "Face.M_F00_000_00_Fcl_ALL_Fun",
                    "Face.M_F00_000_00_Fcl_ALL_Joy",
                    "Face.M_F00_000_00_Fcl_ALL_Sorrow",
                    "Face.M_F00_000_00_Fcl_ALL_Surprised",
                    "Face.M_F00_000_00_Fcl_BRW_Angry",
                    "Face.M_F00_000_00_Fcl_BRW_Fun",
                    "Face.M_F00_000_00_Fcl_BRW_Joy",
                    "Face.M_F00_000_00_Fcl_BRW_Sorrow",
                    "Face.M_F00_000_00_Fcl_BRW_Surprised",
                    "Face.M_F00_000_00_Fcl_EYE_Angry",
                    "Face.M_F00_000_00_Fcl_EYE_Close",
                    "Face.M_F00_000_00_Fcl_EYE_Close_R",
                    "Face.M_F00_000_00_Fcl_EYE_Close_L",
                    "Face.M_F00_000_00_Fcl_Eye_Fun",
                    "Face.M_F00_000_00_Fcl_EYE_Joy",
                    "Face.M_F00_000_00_Fcl_EYE_Joy_R",
                    "Face.M_F00_000_00_Fcl_EYE_Joy_L",
                    "Face.M_F00_000_00_Fcl_EYE_Sorrow",
                    "Face.M_F00_000_00_Fcl_EYE_Surprised",
                    "Face.M_F00_000_00_Fcl_EYE_Spread",
                    "Face.M_F00_000_00_Fcl_EYE_Iris_Hide",
                    "Face.M_F00_000_00_Fcl_EYE_Highlight_Hide",
                    "Face.M_F00_000_00_Fcl_EYE_Extra",
                    "Face.M_F00_000_00_Fcl_MTH_Up",
                    "Face.M_F00_000_00_Fcl_MTH_Down",
                    "Face.M_F00_000_00_Fcl_MTH_Angry",
                    "Face.M_F00_000_00_Fcl_MTH_Neutral",
                    "Face.M_F00_000_00_Fcl_MTH_Fun",
                    "Face.M_F00_000_00_Fcl_MTH_Joy",
                    "Face.M_F00_000_00_Fcl_MTH_Sorrow",
                    "Face.M_F00_000_00_Fcl_MTH_Surprised",
                    "Face.M_F00_000_00_Fcl_MTH_SkinFung",
                    "Face.M_F00_000_00_Fcl_MTH_SkinFung_R",
                    "Face.M_F00_000_00_Fcl_MTH_SkinFung_L",
                    "Face.M_F00_000_00_Fcl_MTH_A",
                    "Face.M_F00_000_00_Fcl_MTH_I",
                    "Face.M_F00_000_00_Fcl_MTH_U",
                    "Face.M_F00_000_00_Fcl_MTH_E",
                    "Face.M_F00_000_00_Fcl_MTH_O",
                    "Face.M_F00_000_00_Fcl_HA_Hide",
                    "Face.M_F00_000_00_Fcl_HA_Fung1",
                    "Face.M_F00_000_00_Fcl_HA_Fung1_Low",
                    "Face.M_F00_000_00_Fcl_HA_Fung1_Up",
                    "Face.M_F00_000_00_Fcl_HA_Fung2",
                    "Face.M_F00_000_00_Fcl_HA_Fung2_Low",
                    "Face.M_F00_000_00_Fcl_HA_Fung2_Up",
                    "Face.M_F00_000_00_Fcl_HA_Fung3",
                    "Face.M_F00_000_00_Fcl_HA_Fung3_Up",
                    "Face.M_F00_000_00_Fcl_HA_Fung3_Low",
                    "Face.M_F00_000_00_Fcl_HA_Short",
                    "Face.M_F00_000_00_Fcl_HA_Short_Up",
                    "Face.M_F00_000_00_Fcl_HA_Short_Low",
                    "EyeExtra_01.M_F00_000_00_EyeExtra_On"
                ]

Could you give me a general idea of how to set the morph weight values using facial landmarks? I know the basic method is to compute the difference between certain landmark positions, but too many landmarks change at once. Is there an algorithm for computing the weights?
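
One common heuristic (not an official MediaPipe method) is to turn a few distance ratios into normalized weights rather than tracking every landmark. A minimal sketch, assuming the 468-point MediaPipe Face Mesh topology; the landmark indices below are the commonly cited ones, so verify them against the official mesh map, and the lo/hi calibration constants are illustrative guesses:

```python
import numpy as np

# Commonly cited MediaPipe Face Mesh (468-point) landmark indices.
# Verify against the official mesh map for your model version.
UPPER_LIP, LOWER_LIP = 13, 14
MOUTH_LEFT, MOUTH_RIGHT = 61, 291
LEFT_EYE_TOP, LEFT_EYE_BOTTOM = 159, 145
LEFT_EYE_OUTER, LEFT_EYE_INNER = 33, 133

def dist(landmarks: np.ndarray, a: int, b: int) -> float:
    """Euclidean distance between two landmarks in an (N, 3) array."""
    return float(np.linalg.norm(landmarks[a] - landmarks[b]))

def mouth_open_weight(landmarks: np.ndarray, lo: float = 0.02, hi: float = 0.55) -> float:
    """Lip gap normalized by mouth width, remapped to a 0..1 morph weight."""
    ratio = dist(landmarks, UPPER_LIP, LOWER_LIP) / dist(landmarks, MOUTH_LEFT, MOUTH_RIGHT)
    return float(np.clip((ratio - lo) / (hi - lo), 0.0, 1.0))

def left_eye_close_weight(landmarks: np.ndarray,
                          open_ratio: float = 0.28,
                          closed_ratio: float = 0.10) -> float:
    """Eye aspect ratio: a small lid gap relative to eye width means closed."""
    ratio = dist(landmarks, LEFT_EYE_TOP, LEFT_EYE_BOTTOM) / dist(landmarks, LEFT_EYE_OUTER, LEFT_EYE_INNER)
    return float(np.clip((open_ratio - ratio) / (open_ratio - closed_ratio), 0.0, 1.0))
```

Normalizing by an in-face distance (mouth width, eye width) keeps the weights stable as the head scales or moves toward the camera; driving, e.g., Face.M_F00_000_00_Fcl_EYE_Close_L from left_eye_close_weight follows the same pattern, one ratio per target.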

tu-nv commented Jan 24, 2022

I am also having a similar problem, and this one looks promising (I haven't tried it yet, though): https://github.com/yeemachine/kalidokit

@opchronatron

For anyone looking for a plug-and-play blendshape SDK, you can get one here: https://joinhallway.com/. It uses the ARKit 52 standard.

@GeorgeS2019

@tu-nv @wingdi
Please support this feature request for ARKit 52 blendshapes

@brunodeangelis

> I am also having a similar problem, and this one looks promising (I haven't tried it yet, though): https://github.com/yeemachine/kalidokit

I've implemented that solution and it outputs much less data than something like Hallway. I also found it not too reliable, but I could be wrong about that.

Since I haven't been given access to the Hallway SDK yet, I went with mocap4face and it seems to be the best so far.

@emphaticaditya

mocap4face is shutting down its SDK, @brunodeangelis.

xuyixun21 commented Dec 12, 2022


> I am also having a similar problem, and this one looks promising (I haven't tried it yet, though): https://github.com/yeemachine/kalidokit
>
> I've implemented that solution and it outputs much less data than something like Hallway. I also found it not too reliable, but I could be wrong about that.
>
> Since I haven't been given access to the Hallway SDK yet, I went with mocap4face and it seems to be the best so far.

Could you help me by sharing the mocap4face code, since mocap4face is shutting down now?

@brunodeangelis

> Could you help me by sharing the mocap4face code, since mocap4face is shutting down now?

It's been a few months, and I don't remember why, but I didn't use mocap4face in the end. I used Hallway's desktop app, which allows for OSC data streaming. That was the solution for my intended use case.
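
For anyone replicating that setup, here is a minimal sketch of receiving streamed blendshape values over OSC with the python-osc package; the /blendshape/* address pattern and port 9000 are hypothetical placeholders, so check what your streaming app actually emits:

```python
# pip install python-osc
# Minimal OSC receiver sketch. The "/blendshape/*" address pattern and
# port 9000 are hypothetical placeholders; check the addresses and port
# your streaming app actually emits.
from pythonosc import dispatcher, osc_server

def on_blendshape(address: str, value: float) -> None:
    # e.g. address == "/blendshape/jawOpen", value in [0, 1]
    name = address.rsplit("/", 1)[-1]
    print(f"{name}: {value:.3f}")

disp = dispatcher.Dispatcher()
disp.map("/blendshape/*", on_blendshape)  # OSC pattern match on the last segment

server = osc_server.ThreadingOSCUDPServer(("127.0.0.1", 9000), disp)
print("Listening for OSC on udp://127.0.0.1:9000 ...")
server.serve_forever()
```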

baronha commented Feb 27, 2023

Is there any other solution for 52-blendshape support? I have a problem here.

@metamultiverse

Did anyone get an update on this issue? I'm interested in the same solution. Hope they make it open source soon.

@huhai463127310

> Did anyone get an update on this issue? I'm interested in the same solution. Hope they make it open source soon.

see keijiro/FaceMeshBarracuda#24 (comment)

@AlexisTM

NOTE: the new MediaPipe Tasks vision API supports blendshapes natively.
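
A minimal sketch of querying those scores with the MediaPipe Tasks Python API; "face_landmarker.task" stands in for whichever model bundle you download, and the option names should be checked against the current documentation:

```python
# pip install mediapipe
import mediapipe as mp
from mediapipe.tasks import python
from mediapipe.tasks.python import vision

# "face_landmarker.task" stands in for whichever model bundle you download.
options = vision.FaceLandmarkerOptions(
    base_options=python.BaseOptions(model_asset_path="face_landmarker.task"),
    output_face_blendshapes=True,  # enables the 52 blendshape scores
    num_faces=1,
)
landmarker = vision.FaceLandmarker.create_from_options(options)

image = mp.Image.create_from_file("face.jpg")
result = landmarker.detect(image)

# One list of scored categories per detected face.
for blendshape in result.face_blendshapes[0]:
    print(f"{blendshape.category_name}: {blendshape.score:.3f}")
```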

@GeorgeS2019

Will continue to track this issue here.
