face blendshapes support #935

Open

newnon wants to merge 1 commit into master from featrure/face_blendshapes_support

Conversation

newnon

@newnon newnon commented Jun 27, 2023

support for face blendshapes

@newnon newnon force-pushed the featrure/face_blendshapes_support branch from 3de96ca to 2c1fd5e Compare June 27, 2023 07:20
@homuler homuler self-requested a review June 29, 2023 00:41
@boostcat

boostcat commented Jul 5, 2023

@newnon Hello, I tried this PR's code in Unity and Unity reported the following error. What could be the reason?
Thank you very much for your answer!

@homuler Could you integrate this contributed blendshape code? Thank you so much!

[libprotobuf ERROR external/com_google_protobuf/src/google/protobuf/text_format.cc:335] Error parsing text-format mediapipe.CalculatorGraphConfig: 199:60: Extension "mediapipe.LandmarksToTensorCalculatorOptions.ext" is not defined or is not an extension of "mediapipe.CalculatorOptions".
UnityEngine.Debug:LogError (object)
Mediapipe.Protobuf:LogProtobufMessage (int,string,int,string) (at Packages/com.github.homuler.mediapipe/Runtime/Scripts/External/Protobuf.cs:40)
Mediapipe.CalculatorGraphConfigExtension:ParseFromTextFormat (Google.Protobuf.MessageParser`1<Mediapipe.CalculatorGraphConfig>,string) (at Packages/com.github.homuler.mediapipe/Runtime/Scripts/Framework/CalculatorGraphConfigExtension.cs:15)
Mediapipe.Unity.GraphRunner:InitializeCalculatorGraph () (at Assets/MediaPipeUnity/Samples/Common/Scripts/GraphRunner.cs:234)
Mediapipe.Unity.GraphRunner/<Initialize>d__39:MoveNext () (at Assets/MediaPipeUnity/Samples/Common/Scripts/GraphRunner.cs:119)
Mediapipe.Unity.WaitForResult/<Run>d__20:MoveNext () (at Assets/MediaPipeUnity/Samples/Common/Scripts/WaitForResult.cs:51)
UnityEngine.MonoBehaviour:StartCoroutine (System.Collections.IEnumerator)
Mediapipe.Unity.WaitForResult:.ctor (UnityEngine.MonoBehaviour,System.Collections.IEnumerator,long) (at Assets/MediaPipeUnity/Samples/Common/Scripts/WaitForResult.cs:34)
Mediapipe.Unity.GraphRunner:WaitForInit (Mediapipe.Unity.RunningMode) (at Assets/MediaPipeUnity/Samples/Common/Scripts/GraphRunner.cs:109)
Mediapipe.Unity.ImageSourceSolution`1/<Run>d__12<Mediapipe.Unity.FaceMesh.FaceMeshGraph>:MoveNext () (at Assets/MediaPipeUnity/Samples/Common/Scripts/ImageSourceSolution.cs:60)
UnityEngine.MonoBehaviour:StartCoroutine (System.Collections.IEnumerator)
Mediapipe.Unity.ImageSourceSolution`1<Mediapipe.Unity.FaceMesh.FaceMeshGraph>:Play () (at Assets/MediaPipeUnity/Samples/Common/Scripts/ImageSourceSolution.cs:35)
Mediapipe.Unity.Solution/<Start>d__4:MoveNext () (at Assets/MediaPipeUnity/Samples/Common/Scripts/Solution.cs:27)
UnityEngine.SetupCoroutine:InvokeMoveNext (System.Collections.IEnumerator,intptr)
FaceMeshSolution: Mediapipe.MediaPipeException: FAILED_PRECONDITION: Mediapipe.MediaPipeException: Failed to parse config text. See error logs for more details
  at Mediapipe.CalculatorGraphConfigExtension.ParseFromTextFormat (Google.Protobuf.MessageParser`1[T] _, System.String configText) [0x0001e] in E:\Unity\Project\test\mediapipe\Packages\com.github.homuler.mediapipe\Runtime\Scripts\Framework\CalculatorGraphConfigExtension.cs:21 
  at Mediapipe.Unity.GraphRunner.InitializeCalculatorGraph () [0x00026] in E:\Unity\Project\test\mediapipe\Assets\MediaPipeUnity\Samples\Common\Scripts\GraphRunner.cs:234 
  at Mediapipe.Status.AssertOk () [0x00014] in E:\Unity\Project\test\mediapipe\Packages\com.github.homuler.mediapipe\Runtime\Scripts\Framework\Port\Status.cs:149 
  at Mediapipe.Unity.GraphRunner+<Initialize>d__39.MoveNext () [0x00078] in E:\Unity\Project\test\mediapipe\Assets\MediaPipeUnity\Samples\Common\Scripts\GraphRunner.cs:119 
  at Mediapipe.Unity.WaitForResult+<Run>d__20.MoveNext () [0x0007c] in E:\Unity\Project\test\mediapipe\Assets\MediaPipeUnity\Samples\Common\Scripts\WaitForResult.cs:51 
UnityEngine.Logger:LogError (string,object)
Mediapipe.Unity.MemoizedLogger:LogError (string,object) (at Assets/MediaPipeUnity/Samples/Common/Scripts/MemoizedLogger.cs:199)
Mediapipe.Unity.Logger:LogError (string,object) (at Packages/com.github.homuler.mediapipe/Runtime/Scripts/Unity/Logger.cs:90)
Mediapipe.Unity.ImageSourceSolution`1/<Run>d__12<Mediapipe.Unity.FaceMesh.FaceMeshGraph>:MoveNext () (at Assets/MediaPipeUnity/Samples/Common/Scripts/ImageSourceSolution.cs:88)
UnityEngine.SetupCoroutine:InvokeMoveNext (System.Collections.IEnumerator,intptr)

@homuler
Owner

homuler commented Jul 5, 2023

I'm planning to port the FaceLandmarker Task, but since this PR is implemented in a different way, I cannot merge it as it is.

@newnon
Author

newnon commented Jul 5, 2023

(quoting @boostcat's error report above)

You need to recompile the MediaPipe binary (the plugin's native library) to use this.

@boostcat

boostcat commented Jul 6, 2023

@newnon Sorry, I don't quite understand what you mean, could you explain in detail? Thank you so much.
Also, since homuler can't merge your PR as it is, could you reply to him to help get blendshape support merged?

@newnon
Author

newnon commented Jul 6, 2023

@newnon Sorry, I don't quite understand what you mean, could you explain in detail? Thank you so much. Also, since homuler can't merge your PR as it is, could you reply to him to help get blendshape support merged?

https://github.com/homuler/MediaPipeUnityPlugin/wiki/Installation-Guide#prerequisites

@arimintzu

Thanks @newnon! It's working perfectly after rebuilding the package myself.

@rudrjain

Hi @newnon @homuler, thanks for this solution, it's really helpful.

Now I want to get the weight of each of the 52 blendshapes. Could you let me know which function provides these and how I can read those values?

Thanks in advance

@panpawel88

@rudrjain, I hope it helps:

private void OnFaceClassificationsFromBlendShapesOutput(object stream, OutputEventArgs<ClassificationList> eventArgs)
{
  Debug.Log($"Blendshapes count {eventArgs.value?.Classification?.Count}");
  if (eventArgs.value?.Classification != null)
  {
    foreach (var classification in eventArgs.value?.Classification)
    {
      Debug.Log($"{classification.Label} = {classification.Score}");
    }
  }
}
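
To actually drive a mesh with those values, here is a minimal sketch (not from this PR), assuming your SkinnedMeshRenderer's blendshape names match the labels MediaPipe reports and that the ClassificationList is handed over to the main thread first; the class and field names are illustrative:

using Mediapipe;
using UnityEngine;

// Illustrative helper, not part of the PR. Assumes the mesh's blendshape
// names match the labels MediaPipe reports (e.g. "jawOpen") and that Apply
// is called from the main thread (buffer the ClassificationList in the
// output callback and consume it in Update).
public class BlendshapeApplier : MonoBehaviour
{
  [SerializeField] private SkinnedMeshRenderer _renderer;

  public void Apply(ClassificationList classifications)
  {
    if (classifications?.Classification == null) { return; }

    foreach (var classification in classifications.Classification)
    {
      var index = _renderer.sharedMesh.GetBlendShapeIndex(classification.Label);
      if (index >= 0)
      {
        // MediaPipe scores are 0..1, Unity blendshape weights are usually 0..100.
        _renderer.SetBlendShapeWeight(index, classification.Score * 100f);
      }
    }
  }
}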

@panpawel88

Hi @newnon, I've checked your PR and it works perfectly.

I'm quite new to MediaPipe, but the FaceLandmarker API also supports facial transformation matrices (see: https://github.com/google/mediapipe/blob/085840388bbebd322889026fae235b3120a4bce7/mediapipe/tasks/web/vision/face_landmarker/face_landmarker_result.d.ts#L35). Is it possible to add this feature to your solution? I can try to implement it myself, but if you have any hints on how to do that, it would be easier for me to start.

Thanks in advance!

@newnon
Author

newnon commented Jul 19, 2023

@panpawel88

You can implement it yourself:

const int LeftEarIdx = 356;
const int RightEarIdx = 127;
const int NoseTipIdx = 4;

private static void CalculateLandmarks(Vector3[] landmarks)
{
    var leftEar = landmarks[LeftEarIdx];
    var rightEar = landmarks[RightEarIdx];
    var noseTip = landmarks[NoseTipIdx];

    // Head position: midpoint between the two ear landmarks.
    Vector3 headPos = (leftEar + rightEar) / 2;

    // Head rotation: LookRotation with forward = vector from the nose tip to
    // the head centre and up = the negated normal of the ear/nose plane, then
    // remap the Euler angles into the target coordinate convention.
    var headRot = Quaternion.LookRotation(headPos - noseTip, -Vector3.Cross(noseTip - rightEar, noseTip - leftEar));
    var euler = headRot.eulerAngles;
    headRot = Quaternion.Euler(euler.y, -euler.x, euler.z);
}

headPos - head position
headRot - head rotation quaternion
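
A small usage sketch to go with that (an addition for illustration, not newnon's code): the method above computes headPos and headRot but discards them, so return or cache them first, then drive a Transform on the main thread; avatarHead and the 0.5f smoothing factor are illustrative.

// Illustrative follow-up: drive an avatar's head bone with the values
// computed above. Return or cache headPos/headRot from CalculateLandmarks
// first; 0.5f is just an example smoothing factor against landmark jitter.
private void ApplyHeadPose(Transform avatarHead, Vector3 headPos, Quaternion headRot)
{
  avatarHead.localRotation = Quaternion.Slerp(avatarHead.localRotation, headRot, 0.5f);
  avatarHead.localPosition = Vector3.Lerp(avatarHead.localPosition, headPos, 0.5f);
}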

@emmasteimann

@homuler Any updates on when you'll get the FaceLandmarker Task implemented or this PR merged? Having the landmarks/blendshapes is going to be soooo amazing to have in this plugin!

@cdiddy77

@homuler Any updates on when you'll get the FaceLandmarker Task implemented or this PR merged? Having the landmarks/blendshapes is going to be soooo amazing to have in this plugin!

+infinity!

@Heroizme

@homuler Any updates on when you'll get the FaceLandmarker Task implemented or this PR merged? Having the landmarks/blendshapes is going to be soooo amazing to have in this plugin!

+1

@emmasteimann

emmasteimann commented Jul 30, 2023

@newnon in your PR, you forgot to also add face_blendshapes.bytes to StreamingAssets.

@emmasteimann

emmasteimann commented Aug 1, 2023

Interestingly, I got this running on Android, and the jawOpen blendshape doesn't seem to return any score. On Mac and Android, cheekPuff and a couple of others also return no score. I'm not sure this is so much an implementation problem as a native MediaPipe problem, but I'll continue to investigate. Just wanted to share with anybody curious about my successes/failures implementing this PR on multiple platforms.

Update: I just checked, and there's nothing especially unique about the implementation; it's using the default MediaPipe tflite model (newnon/mediapipe@997ef9c).
So I'm kinda clueless as to what's causing the incorrect tflite output, and why it works on Mac for some shapes but not on Android, but as far as I can see other people are experiencing incorrect scores too (though not the same ones I am: google-ai-edge/mediapipe#4210).

Sloth sure dun look too happy on Android.
(screenshot: Screenshot_20230801-014704_TempleBar)

Update again: the values are apparently being returned, they're just super small. So I multiplied jawOpen by 10,000 and clamped it to 0-100, and that seems to move it. It's likely the same for cheekPuff, etc. I still don't know why it's different between Mac and Android, or whether the result will differ on other devices. It also runs quite slowly and is rarely in sync, usually a couple of seconds delayed on my Samsung Galaxy S20.

Update yet again: I added a calibration step to my Android app, and other than the extreme lag of a couple of seconds (which is another problem to solve), as long as you calibrate by recording min and max values and use a remap function to scale that to 0-100, you can fix the outliers (a minimal sketch of such a remap is included after this comment). Perhaps coming up with a gallery of a few "regularization" expressions the user needs to make in order to calibrate would be an intuitive way to handle this for the time being, instead of having them set all 52 shapes or identifying and fixing each problematic blendshape.
(screenshot: Screenshot_20230801-145612_TempleBar)

Actually, thinking about it... since this is image based, there's no reason I couldn't just funnel the 52 blendshape face images from somewhere like https://arkit-face-blendshapes.com/ and use them to calibrate on-device before it ever gets to the user.
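
A minimal sketch of the calibration/remap idea described above (an addition for illustration, not part of the PR): record per-blendshape min/max values while the user holds a few calibration expressions, then remap raw scores into 0-100. The class and method names are illustrative.

using System.Collections.Generic;
using UnityEngine;

// Illustrative calibration helper. Feed Observe() with (label, rawScore)
// pairs during the calibration step, then use Remap() at runtime.
public class BlendshapeCalibration
{
  private readonly Dictionary<string, (float min, float max)> _bounds = new Dictionary<string, (float, float)>();

  public void Observe(string label, float rawScore)
  {
    if (_bounds.TryGetValue(label, out var b))
    {
      _bounds[label] = (Mathf.Min(b.min, rawScore), Mathf.Max(b.max, rawScore));
    }
    else
    {
      _bounds[label] = (rawScore, rawScore);
    }
  }

  public float Remap(string label, float rawScore)
  {
    // Fall back to a plain clamp when no calibration data exists for this label.
    if (!_bounds.TryGetValue(label, out var b) || Mathf.Approximately(b.max, b.min))
    {
      return Mathf.Clamp01(rawScore) * 100f;
    }
    return Mathf.Clamp01((rawScore - b.min) / (b.max - b.min)) * 100f;
  }
}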

@emphaticaditya

emphaticaditya commented Aug 3, 2023

(quoting @emmasteimann's Android/calibration report above)

Any idea how to reduce the lag?
This runs pretty flawlessly on the web on an M1 Pro: https://mediapipe-studio.webapps.google.com/studio/demo/face_landmarker

@rudrjain

rudrjain commented Aug 3, 2023

(quoting @panpawel88's OnFaceClassificationsFromBlendShapesOutput snippet above)

Thanks @panpawel88, it helped me in my project.

@emmasteimann

Looks like homuler is integrating things: https://github.com/homuler/MediaPipeUnityPlugin/tree/feature/tasks/face-landmarker !! YAY! IT'S ON ITS WAY, HEY HEEEEY!! :-) So excited! So my guess is this PR will go poof soon, in favor of a fully integrated face landmarker task with blendshapes!

@jefflim69

How can I call this function (the OnFaceClassificationsFromBlendShapesOutput handler from @panpawel88's snippet above)? Please let me know. Thank you.
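
A heavily hedged pointer: given the (object stream, OutputEventArgs&lt;ClassificationList&gt; eventArgs) signature, the handler is meant to be subscribed to an event on the graph, roughly as below; the event name is hypothetical and depends on how this PR's graph class exposes the blendshapes output stream.

// Hypothetical wiring. The actual event name depends on how this PR's
// FaceMeshGraph exposes the blendshapes stream; check the PR's graph class.
faceMeshGraph.OnFaceBlendshapesOutput += OnFaceClassificationsFromBlendShapesOutput;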

@jefflim69

(quoting @panpawel88's snippet above)

How can I use this? Could you explain this code with a usage example? Thanks.

@emmasteimann

emmasteimann commented Aug 24, 2023

@jefflim69 you should check out the latest on master. Blendshapes are in master now, so there's technically no reason for this PR anymore. And honestly, whatever homuler did is working really, really well, without any of the weird latency or need for smoothing of this PR. You will need to rebuild the repo (as a new tagged release has not been made) to get it included in the dylib/dll/so/framework/aar; at least I did. I just used the GitHub Action to rebuild it.

On this line: https://github.com/homuler/MediaPipeUnityPlugin/blob/2f3786687a4852d3a2fdd9888eb968b895c55dca/Assets/MediaPipeUnity/Samples/Scenes/Tasks/Face%20Landmark%20Detection/FaceLandmarkerRunner.cs#L113C28-L113C28

From OnFaceLandmarkDetectionOutput you can retrieve the blendshapes. Here's approximately what I did:

private void OnFaceLandmarkDetectionOutput(FaceLandmarkerResult result, Image image, int timestamp)
{
  _faceLandmarkerResultAnnotationController.DrawLater(result);
  foreach (Tasks.Components.Containers.Classifications classifications in result.faceBlendshapes)
  {
    foreach (Category cat in classifications.categories)
    {
      Debug.Log($"{cat.categoryName} = {cat.score}");
    }
  }

  foreach (var normalizedLandmarks in result.faceLandmarks)
  {
    CalculateLandmarks(normalizedLandmarks.landmarks);
  }
}

const int LeftEarIdx = 356;
const int RightEarIdx = 127;
const int NoseTipIdx = 4;

private void CalculateLandmarks(IReadOnlyList<Tasks.Components.Containers.NormalizedLandmark> landmarks)
{
  var leftEar = new Vector3(landmarks[LeftEarIdx].x, landmarks[LeftEarIdx].y, landmarks[LeftEarIdx].z);
  var rightEar = new Vector3(landmarks[RightEarIdx].x, landmarks[RightEarIdx].y, landmarks[RightEarIdx].z);
  var noseTip = new Vector3(landmarks[NoseTipIdx].x, landmarks[NoseTipIdx].y, landmarks[NoseTipIdx].z);

  Vector3 headPos = (leftEar + rightEar) / 2;

  var headRot = Quaternion.LookRotation(headPos - noseTip, -Vector3.Cross(noseTip - rightEar, noseTip - leftEar));
  var euler = headRot.eulerAngles;
  headRot = Quaternion.Euler(euler.y, -euler.x, euler.z);
}

That gives you the head rotation and blendshapes. For eye rotation you can approximate it from the blendshapes the way SpookyCorgi did with Phiz:
https://github.com/SpookyCorgi/phiz/blob/0c09ae2117c677663da588cada4f8c477c682369/site/src/routes/capture/%2Bpage.svelte#L140

or you can approximate from eye/iris detection.

Hope that helps.
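
For reference, here is a rough sketch of that blendshape-based eye rotation approach (an addition for illustration, similar in spirit to the Phiz code linked above rather than a port of it), assuming ARKit-style eyeLook* category names and an illustrative 30-degree range; build the dictionary from the categoryName/score pairs logged above, and flip signs to match your rig or mirroring.

// Rough approximation, not from this thread: eye pitch/yaw from the
// eyeLook* blendshape scores. The 30-degree range is illustrative and the
// sign conventions depend on your rig/mirroring.
private static Quaternion EstimateLeftEyeRotation(IReadOnlyDictionary<string, float> scores)
{
  const float maxAngle = 30f;
  float Get(string name) => scores.TryGetValue(name, out var v) ? v : 0f;

  var pitch = (Get("eyeLookDownLeft") - Get("eyeLookUpLeft")) * maxAngle;
  var yaw = (Get("eyeLookOutLeft") - Get("eyeLookInLeft")) * maxAngle;

  return Quaternion.Euler(pitch, yaw, 0f);
}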

@baig97

baig97 commented Aug 24, 2023

(quoting @emmasteimann's rebuild and FaceLandmarker walkthrough above)

Hey, I have been looking to implement blendshapes in my project but didn't know they weren't included in the latest release (I assumed they were, since they were introduced in MediaPipe 0.9.3, I guess).

I am going with GitHub Actions too, but I am not too familiar with how arguments to workflows work. Looking at the build-package.yml file, I am assuming the workflow takes default values if I leave a field empty, right? Also, how do I turn off building certain plugins?

Thanks.

(Pic for reference)
