
ARKit 52 blendshapes support request #3421

Closed

GeorgeS2019 opened this issue Jun 12, 2022 · 47 comments
Labels: task::all (All tasks of MediaPipe), type:feature (Enhancement in the New Functionality or Request for a New Solution)

Comments

@GeorgeS2019

GeorgeS2019 commented Jun 12, 2022

Please make sure that this is a feature request.

Describe the feature and the current behavior/state:

Is there a plan to output the facial blendshapes that are compatible with the ARKit 52 blendshapes?

Will this change the current api? How?

This is an addition to the existing API.

Who will benefit with this feature?

This provides facial mocap for avatars that use the ARKit 52 blendshapes.

Please specify the use cases for this feature:

Currently, users of industry-standard tools such as Character Creator 3 are limited to facial mocap apps from the Apple App Store.

This feature request would democratize facial mocap apps, bringing them to Android and making them available through the Google Play Store.

Related discussion

Related community effort to address this unmet need for Android phone users

@GeorgeS2019 GeorgeS2019 added the type:feature Enhancement in the New Functionality or Request for a New Solution label Jun 12, 2022
@GeorgeS2019
Author

Relevant attempt by VRM communities to adopt the ARKit 52 blendshapes

@sureshdagooglecom sureshdagooglecom added the task::all All tasks of MediaPipe label Jun 13, 2022
@sureshdagooglecom sureshdagooglecom added the stat:awaiting googler Waiting for Google Engineer's Response label Jun 13, 2022
@sureshdagooglecom

Hi @GeorgeS2019 ,
Blendshapes come from Xeno (https://drive.google.com/open?id=1f030M8gbXgJN-5-JhcPkDnK5ehIwl3grNhRXSZBXvjw&resourcekey=0-q6Z33dZKrau_ngUh0gJDeA).
The basic implementation is ready, but we were waiting for a stable MediaPipe version of the app before submitting. Pushing to M2 sounds good.

@GeorgeS2019
Author

GeorgeS2019 commented Jun 17, 2022

@sureshdagooglecom and @jiuqiant

Hi Suresh and Jiuqiang, could one of you clarify what you mean by M2?

MediaPipe has become an industry standard for developers. I believe many would not need to wait for the app you refer to but would use the basic implementation immediately as part of the MediaPipe SDK.

Please open a PR so that the community can build it, or use the nightly build once the PR is merged, and start testing immediately so we can give iterative feedback on the basic implementation.

@GeorgeS2019
Author

GeorgeS2019 commented Jun 17, 2022

For reference:

VSBuild discussion and source codes

MediaPipe Unity Plugin

@zk2ly

zk2ly commented Jun 28, 2022

@sureshdagooglecom Please support this feature request for #3421

@zk2ly

zk2ly commented Jun 30, 2022

@sureshdagooglecom

No access rights

@huhai463127310

@sureshdagooglecom

No access rights

+1

@GeorgeS2019
Author

GeorgeS2019 commented Jul 6, 2022

@kuaashish could you help address some of the questions here? Assuming that both @jiuqiant and @sureshdagooglecom are on a long holiday

@brunodeangelis

+1 for this. It would really democratize the virtual character puppeteering market, allowing developers to build things we've not imagined yet

@GeorgeS2019
Author

@brunodeangelis I am about to close this issue as our requests are being ignored.

@lucasjinreal

I think the 52 blendshapes are not easily integrated into MediaPipe directly, but one could train a new model to regress these outputs from the 3D face pose as input.
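For readers wondering what such a regression could look like, here is a minimal, purely illustrative sketch (not MediaPipe's actual model): a small MLP that maps MediaPipe's 3D face landmarks to 52 ARKit-style coefficients. The landmark count, architecture, and training data are all assumptions.

```python
# Hypothetical sketch: regress 52 ARKit-style blendshape coefficients
# from MediaPipe face landmarks (478 points with iris refinement assumed).
import torch
import torch.nn as nn

NUM_LANDMARKS = 478   # assumed MediaPipe Face Mesh landmark count
NUM_BLENDSHAPES = 52  # ARKit blendshape coefficients

class BlendshapeRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NUM_LANDMARKS * 3, 512),
            nn.ReLU(),
            nn.Linear(512, 256),
            nn.ReLU(),
            nn.Linear(256, NUM_BLENDSHAPES),
            nn.Sigmoid(),  # coefficients are conventionally in [0, 1]
        )

    def forward(self, landmarks):
        # landmarks: (batch, NUM_LANDMARKS, 3) tensor of 3D positions
        return self.net(landmarks.flatten(start_dim=1))

model = BlendshapeRegressor()
dummy = torch.randn(1, NUM_LANDMARKS, 3)
print(model(dummy).shape)  # torch.Size([1, 52])
```

Such a model would still need paired training data (landmarks plus ground-truth coefficients), which is the hard part being discussed in this thread.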

@brunodeangelis

I think the 52 blendshapes are not easily integrated into MediaPipe directly, but one could train a new model to regress these outputs from the 3D face pose as input.

Could you expand on this please? What type of model? Do you have an example?

@emphaticaditya

emphaticaditya commented Aug 27, 2022

Any update on this, guys? Is it in progress or dropped? Really looking forward to it.

@noeykan

noeykan commented Sep 30, 2022

I'm also really looking forward to this feature being released... Please update this issue!

@kuaashish kuaashish reopened this Nov 8, 2022
@GeorgeS2019
Author

Hope to get an update soon on the support of this technology :-)

@dandingol03

Hi @GeorgeS2019 , Blendshapes come from Xeno (https://drive.google.com/open?id=1f030M8gbXgJN-5-JhcPkDnK5ehIwl3grNhRXSZBXvjw&resourcekey=0-q6Z33dZKrau_ngUh0gJDeA) . The basic implementation is ready, but was waiting for a stable MediaPipe version of the app to submit. Pushing to M2 sounds good.

The Google Drive link is invalid.

@xuyixun21

Hi @GeorgeS2019, Blendshapes come from Xeno (https://drive.google.com/open?id=1f030M8gbXgJN-5-JhcPkDnK5ehIwl3grNhRXSZBXvjw&resourcekey=0-q6Z33dZKrau_ngUh0gJDeA). The basic implementation is ready, but was waiting for a stable MediaPipe version of the app to submit. Pushing to M2 sounds good.

The Google Drive link is invalid.

+1

@puhuajiang

Any update so far?

@srcnalt

srcnalt commented Feb 16, 2023

This feature can be revolutionary. Really wondering why it's being ignored.

@GeorgeS2019
Author

Python example using blendshape

@GeorgeS2019
Author

I need your support on this issue so that more users interested in testing the blendshapes in a game engine can iterate faster.

@GeorgeS2019
Author

After close to 9 months of requests, and with many here supporting the idea, I am closing this issue now!
https://www.phizmocap.dev/


@baronha

baronha commented Mar 25, 2023

After close to 9 months of requests, and with many here supporting the idea, I am closing this issue now! https://www.phizmocap.dev/

Are you using MediaPipe for this project? I found that it uses the archived mocap4face package from facemoji.

@GeorgeS2019
Author

@baronha

#4200 (comment)

@GeorgeS2019
Author

Consider supporting this issue

@GeorgeS2019
Author

#4210

@kuaashish kuaashish removed the stat:awaiting googler Waiting for Google Engineer's Response label Mar 31, 2023
@weeeeigen

I have installed mediapipe==0.9.2.1 via pip to use this blendshape estimation. If anyone knows how to run it, it would be helpful if you could tell me.

@FishWoWater

@GeorgeS2019 @fadiaburaid Thank you for your efforts to make this amazing feature possible! Do you have any Python / JavaScript example of how to use this tflite model? It seems that the official guide is still in progress and the Python example can't run.
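For anyone looking for a starting point, below is a minimal sketch of reading the blendshape scores through the MediaPipe Tasks FaceLandmarker Python API; the model file path and input image are assumptions, and details may differ between mediapipe releases.

```python
# Sketch: print the 52 blendshape scores for a single face image.
# Assumes a downloaded face_landmarker.task model and a local face.jpg.
import mediapipe as mp
from mediapipe.tasks import python
from mediapipe.tasks.python import vision

options = vision.FaceLandmarkerOptions(
    base_options=python.BaseOptions(model_asset_path="face_landmarker.task"),
    output_face_blendshapes=True,  # enables the ARKit-style coefficients
    num_faces=1,
)
landmarker = vision.FaceLandmarker.create_from_options(options)

image = mp.Image.create_from_file("face.jpg")
result = landmarker.detect(image)

# Each entry carries a category_name (e.g. "jawOpen") and a score in [0, 1].
for blendshape in result.face_blendshapes[0]:
    print(f"{blendshape.category_name}: {blendshape.score:.3f}")
```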

@GeorgeS2019
Author

GeorgeS2019 commented May 18, 2023

To everyone: this is Part 2, Text-to-Speech to Facial Blendshapes. Please support it!

@kuaashish
Collaborator

@GeorgeS2019,

Could you let us know if we can mark this issue as resolved and close it? We are following up on the other issue raised, #4428. Thank you!

@kuaashish kuaashish assigned kuaashish and unassigned jiuqiant Jun 19, 2023
@kuaashish kuaashish added the stat:awaiting response Waiting for user response label Jun 19, 2023
@GeorgeS2019
Author

@kuaashish
Thank you for your support. I need it to present global crisis

@kuaashish kuaashish removed the stat:awaiting response Waiting for user response label Jun 20, 2023
@FishWoWater

FishWoWater commented Jul 6, 2023

@schmidt-sebastian @kuaashish Hi, researchers! Could you please give me some hints on the training details of this blendshape estimator? I have referred to your solution page but still couldn't figure it out. How did you generate the ground-truth blendshapes (52 coefficients) from the 3D mesh reconstruction? Is it an optimization-based approach, where a rigged face's blendshape coefficients are optimized to deform the canonical mesh towards the captured 3D mesh?

Or does it follow the framework in this paper, except that body parts are removed and the skinning functions are specified by artists instead of being learnable?
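For concreteness, the optimization-based route described above would look roughly like the bounded least-squares fit sketched below; the mesh sizes and data are placeholders, and this illustrates the general idea rather than Google's actual pipeline.

```python
# Illustrative sketch: fit blendshape coefficients w in [0, 1] so that
# neutral + sum_i(w_i * basis_i) best matches a captured target mesh.
import numpy as np
from scipy.optimize import lsq_linear

num_vertices, num_blendshapes = 468, 52                    # assumed sizes
neutral = np.random.rand(num_vertices, 3)                  # placeholder neutral mesh
basis = np.random.rand(num_blendshapes, num_vertices, 3)   # per-shape vertex offsets
target = np.random.rand(num_vertices, 3)                   # placeholder captured mesh

# Flatten to a linear system: basis_matrix @ w ~= target - neutral
basis_matrix = basis.reshape(num_blendshapes, -1).T        # shape (3 * V, 52)
residual = (target - neutral).reshape(-1)                  # shape (3 * V,)

solution = lsq_linear(basis_matrix, residual, bounds=(0.0, 1.0))
weights = solution.x  # 52 coefficients in [0, 1]
print(weights.round(3))
```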


@PooyaDeperson

Hello, just a heads-up! There's a new page with high-quality references and a detailed guide on creating ARKit's 52 facial blendshapes:
https://pooyadeperson.com/the-ultimate-guide-to-creating-arkits-52-facial-blendshapes/
