ARKit 52 blendshapes support request #3421
Relevant attempt by VRM communities to adopt the ARKit 52 blendshapes
Hi @GeorgeS2019,
@sureshdagooglecom and @jiuqiant Hi Suresh and Jiuqiang, could one of you clarify what you mean by M2? MediaPipe has become an industry standard for developers. I believe many would not need to wait for the app you refer to, but would use the basic implementation immediately as part of the MediaPipe SDK. Please open a PR so the community can build it, or use the nightly build once the PR is merged, and start testing right away so we can give iterative feedback on the basic implementation.
For reference:
@sureshdagooglecom Please support this feature request for #3421
No access rights
+1
@kuaashish could you help address some of the questions here? I assume that both @jiuqiant and @sureshdagooglecom are on a long holiday.
+1 for this. It would really democratize the virtual character puppeteering market, allowing developers to build things we've not imagined yet.
@brunodeangelis I am about to close this issue, as our requests are being ignored.
I think the 52 blendshapes are not easily integrated into MediaPipe, but one could train a new model to regress these outputs from the 3D face pose as input.
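To illustrate the regression idea above, here is a minimal sketch that fits a linear map from flattened 3D landmark coordinates to 52 blendshape coefficients. Everything here is synthetic and illustrative (the landmark count, the data, the ridge regularizer); a real system would train a small neural network on annotated captures, and real coefficients lie in [0, 1].

```python
import numpy as np

# Illustrative only: ridge regression from flattened 3D landmarks to
# 52 blendshape coefficients. The data are synthetic; a real pipeline
# would use a small network trained on annotated face captures.
rng = np.random.default_rng(0)

n_samples, n_landmarks, n_blendshapes = 200, 20, 52  # toy sizes, not MediaPipe's 478 landmarks
X = rng.normal(size=(n_samples, n_landmarks * 3))    # flattened (x, y, z) landmark inputs
W_true = rng.normal(size=(n_landmarks * 3, n_blendshapes))
Y = X @ W_true                                       # synthetic "ground-truth" coefficients

# Ridge regression: W = (X^T X + lam * I)^-1 X^T Y
lam = 1e-3
W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

pred = X @ W
print(np.allclose(pred, Y, atol=1e-2))
```

With enough samples and a tiny regularizer, the linear map is recovered almost exactly on this synthetic data; the point is only that "3D face pose in, 52 coefficients out" is an ordinary regression problem.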
Could you expand on this, please? What type of model? Do you have an example?
Any update on this, guys? Is it in progress or dropped? Really looking forward to it.
I'm also really looking forward to this feature being released... Please update this issue!
Hope to get an update soon on the support of this technology :-)
The link to the Google Drive is invalid.
+1
Any update so far?
This feature can be revolutionary. Really wondering why it gets ignored.
I need your support on this issue to allow more users interested in testing the blendshapes in a game engine to iterate faster.
This comes after close to nine months of requests, with many here supporting the idea.
Are you using MediaPipe for this project? I found it used in an archived package.
Consider supporting this issue |
I have installed mediapipe==0.9.2.1 via pip to use this blendshape estimation. If anyone knows how to run it, it would be helpful if you could tell me.
@GeorgeS2019 @fadiaburaid Thank you for your efforts to make this amazing feature possible! Do you have any Python/JavaScript example of using this tflite model? It seems that the official guide is still in progress and the Python example can't run.
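While waiting for an official example, one common post-processing step is easy to sketch: MediaPipe's Face Landmarker task reports blendshapes as (category_name, score) pairs, and driving an avatar usually starts by turning those into a name-to-score mapping. The `Category` type below is a stand-in defined here only for illustration, not MediaPipe's actual class.

```python
from dataclasses import dataclass

# Stand-in for the category objects a face-landmark task returns;
# defined here only so the example is self-contained.
@dataclass
class Category:
    category_name: str
    score: float

def blendshapes_to_dict(categories, threshold=0.0):
    """Convert blendshape categories to a name -> score dict,
    keeping only scores above the threshold."""
    return {c.category_name: c.score for c in categories if c.score > threshold}

# A few ARKit-style blendshape names as example input.
detected = [Category("jawOpen", 0.62),
            Category("eyeBlinkLeft", 0.03),
            Category("mouthSmileLeft", 0.41)]
active = blendshapes_to_dict(detected, threshold=0.1)
print(active)  # {'jawOpen': 0.62, 'mouthSmileLeft': 0.41}
```

The thresholding simply drops near-zero coefficients before they are forwarded to a rig, which keeps an idle face from jittering.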
To everyone, this is PART 2: Text To Speech to Facial BlendShapes, please support!
Could you let us know if we can mark this issue as resolved and close it? We are following up on the other issue raised, #4428. Thank you!
@kuaashish
@schmidt-sebastian @kuaashish Hi, researchers! Could you please give me some hints on the training details of this blendshape estimator? I have referred to your solution page but still couldn't get the point. How did you generate the ground-truth blendshapes (52 coefficients) from the 3D mesh reconstruction? Is it an optimization-based logic, where a rigged face's blendshapes are optimized to deform the canonical mesh towards the captured 3D mesh? Or does it follow the framework in this paper, except that body parts are removed and skinning functions are specified by the artists instead of being learnable?
Hello, just a heads-up! There's a new page with high-quality references and a detailed guide on creating ARKit's 52 facial blendshapes.
Please make sure that this is a feature request.
Describe the feature and the current behavior/state:
Is there a plan to output the facial blendshapes that are compatible with the ARKit 52 blendshapes?
Will this change the current api? How?
This is an addition to the existing API.
Who will benefit with this feature?
This provides facial mocap for avatars that use the ARKit 52 blendshapes.
Please specify the use cases for this feature:
Currently, users of industry standards, e.g. Character Creator 3, limit themselves to facial mocap apps from the Apple App Store only.
This feature request would democratize facial mocap apps on Android and make them available through the Google Play Store.
Related discussion
Related community effort to address this unmet need for android phone users