This repository was archived by the owner on Sep 1, 2021. It is now read-only.

Commit 93c3abb

Update README.md
1 parent 2aa39fd commit 93c3abb

File tree

1 file changed (+2, -2 lines)


README.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -65,14 +65,14 @@ Enjoy your VTuber life!
 # Functionalities details
 In this section, I will describe the implemented functionalities and a little about the technology behind them.
 
-## 1. Head pose estimation
+## Head pose estimation
 Using [head-pose-estimation](https://github.com/yinguobing/head-pose-estimation) and [face-alignment](https://github.com/1adrianb/face-alignment), deep learning methods are applied to perform face detection and facial landmark detection. A face bounding box and the 68-point facial landmarks are detected, then a PnP algorithm is used to obtain the head pose (the rotation of the face). Finally, Kalman filters are applied to the pose to make it smoother.
 
 The character's head pose is synchronized.
 
 As for the visualization, the white bounding box is the detected face, on top of which the 68 green facial landmarks are plotted. The head pose is represented by the green frustum and the axes in front of the nose.
 
-## 2. Gaze estimation
+## Gaze estimation
 Using [GazeTracking](https://github.com/antoinelame/GazeTracking), the eyes are first extracted using the landmarks enclosing them. The eye images are then converted to grayscale, and a pixel-intensity threshold is applied to detect the iris (the dark part of the eye). Finally, the center of the iris is computed as the centroid of the dark area.
 
 The character's gaze is not synchronized, since I didn't find a way to move unity-chan's eyes.
```
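The head-pose section of the README ends with Kalman filtering of the estimated pose. As a rough illustration of that smoothing step, here is a minimal one-dimensional Kalman filter that could smooth one pose angle (e.g. yaw) per frame; the class name and noise parameters are illustrative and not taken from the repository.

```python
# Minimal 1-D Kalman filter sketch for smoothing one pose angle over
# time. q and r are illustrative noise parameters, not project values.
class Kalman1D:
    def __init__(self, q=1e-3, r=1e-2):
        self.q = q    # process-noise variance
        self.r = r    # measurement-noise variance
        self.x = 0.0  # current state estimate (the smoothed angle)
        self.p = 1.0  # variance of the estimate

    def update(self, z):
        # Predict: the angle is assumed roughly constant between frames,
        # so only the uncertainty grows.
        self.p += self.q
        # Correct: blend the prediction with the new measurement z.
        k = self.p / (self.p + self.r)  # Kalman gain
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
        return self.x

# One filter per angle: feed in the raw angle each frame, render the output.
yaw_filter = Kalman1D()
smoothed = [yaw_filter.update(z) for z in [10.0] * 50]
```

With a steady input the estimate converges to the measured angle while the gain settles, which is what damps frame-to-frame jitter in the rendered head pose.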
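The threshold-and-centroid idea behind the gaze step can be sketched in a few lines of NumPy; the function name and threshold value here are illustrative, and GazeTracking's actual implementation differs in detail.

```python
import numpy as np

def iris_center(eye_gray, threshold=60):
    """Estimate the iris center in a grayscale eye crop.

    Pixels darker than `threshold` are treated as iris (the dark part
    of the eye); the center is the centroid of that dark region.
    Returns (x, y) in crop coordinates, or None if nothing is dark enough.
    """
    ys, xs = np.nonzero(eye_gray < threshold)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())

# Synthetic eye crop: bright background with a dark 5x5 "iris" block.
eye = np.full((20, 30), 200, dtype=np.uint8)
eye[5:10, 10:15] = 20
print(iris_center(eye))  # -> (12.0, 7.0)
```

In practice the eye crop would come from the landmarks enclosing the eye, and the threshold would need tuning per lighting condition.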
