Embedding
To see a demo of all face embedding features, see `/demo/embedding.js`.
It highlights functionality such as:
- Loading images
- Extracting faces from images
- Calculating face embedding descriptors
- Finding face similarity and sorting them by similarity
- Finding best face match based on a known list of faces and printing matches
To use the face similarity compare feature, you must first enable the `face.embedding` module and calculate embedding vectors for both images you want to compare.
To achieve quality results, it is also highly recommended to have `face.mesh` and `face.detector.rotation` enabled, as calculating feature vectors on low-quality inputs can lead to false results.
A similarity match above 65% is considered a good match, while a similarity match above 55% is considered a best-guess.
For example:

```js
const myConfig = {
  face: {
    enabled: true,
    detector: { rotation: true, return: true },
    mesh: { enabled: true },
    description: { enabled: true },
    // embedding: { enabled: true }, // alternatively, use the embedding module instead of description
  },
};

const human = new Human(myConfig);
const firstResult = await human.detect(firstImage);
const secondResult = await human.detect(secondImage);
const similarity = human.similarity(firstResult.face[0].embedding, secondResult.face[0].embedding);
console.log(`faces are ${100 * similarity}% similar`);
```
If the image or video frame has multiple faces and you want to match all of them, simply loop through all `result.face` entries:

```js
for (let i = 0; i < secondResult.face.length; i++) {
  const secondEmbedding = secondResult.face[i].embedding;
  const similarity = human.similarity(firstEmbedding, secondEmbedding);
  console.log(`face ${i} is ${100 * similarity}% similar`);
}
```
An additional helper function is `human.enhance(face)`, which returns an enhanced tensor of a face image that can be further visualized with:

```js
const enhanced = human.enhance(face);
const canvas = document.getElementById('orig');
human.tf.browser.toPixels(enhanced.squeeze(), canvas);
```
Face descriptors, or embedding vectors, are calculated feature vector values uniquely identifying a given face, presented as an array of 128 float values.
They can be stored as normal arrays and reused as needed.
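Since a descriptor is just an array of floats, it survives a JSON round-trip unchanged; a minimal sketch (the descriptor values here are made up for illustration):

```js
// a face descriptor is a plain array of floats; values here are illustrative
const descriptor = [0.12, -0.53, 0.08, 0.91];

// serialize for storage (localStorage, file, database, ...)
const stored = JSON.stringify(descriptor);

// restore later and reuse directly with human.similarity() or human.match()
const restored = JSON.parse(stored);
console.log(restored.length === descriptor.length); // same length after round-trip
```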
The similarity function is based on the general Minkowski distance between all points in the vector.
The Minkowski distance is the n-th root of the sum of n-th powers of distances between each point (each value in the descriptor array).
The default is Euclidean distance, which is a special case of Minkowski distance with order 2.
Changing the order can make similarity matching more or less sensitive (the default is 2nd order).
For example, these will produce slightly different results:

```js
const similarity2ndOrder = human.similarity(firstEmbedding, secondEmbedding, 2);
const similarity3rdOrder = human.similarity(firstEmbedding, secondEmbedding, 3);
```
How similarity is calculated:

```js
const distance = firstEmbedding
  .map((val, i) => Math.abs(val - secondEmbedding[i]))
  .reduce((dist, diff) => dist + (diff ** order), 0) ** (1 / order);
```
Once embedding values are calculated and stored, you can use the above formula to calculate similarity on the fly, without requiring the Human library.
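That formula can be packaged as a standalone function usable on stored vectors without loading Human; note that mapping the distance into a 0..1 similarity score via `1 / (1 + distance)` is an illustrative choice here, not necessarily the library's exact mapping:

```js
// Minkowski distance of a given order between two equal-length descriptors
function minkowskiDistance(a, b, order = 2) {
  const sum = a.reduce((acc, val, i) => acc + (Math.abs(val - b[i]) ** order), 0);
  return sum ** (1 / order);
}

// map distance to a 0..1 similarity score; this mapping is illustrative,
// not necessarily identical to human.similarity()
function similarity(a, b, order = 2) {
  return 1 / (1 + minkowskiDistance(a, b, order));
}

const first = [0.1, 0.2, 0.3];
const second = [0.1, 0.2, 0.3];
console.log(similarity(first, second)); // identical vectors: distance 0, similarity 1
```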
Once you run face embedding analysis, you can store results in an annotated form to be used at a later time to find the best match for any given face.
The format of the annotated database is:

```js
const db = [
  { name: 'person a', source: 'optional-tag', embedding: [...] },
  { name: 'person b', source: 'optional-tag', embedding: [...] },
  ...
];
```
where `embedding` is the result received in `face.embedding` after running detection.
Note that you can have multiple entries for the same person; the best match will be used.
To find the best match, simply call `human.match()` with the embedding descriptor to compare and the pre-prepared database.
The last parameter is optional and denotes a minimal threshold for a match:

```js
const best = human.match(current.embedding, db, 0);
// returns an object: { name: 'person a', similarity: 0.99, source: 'some-image-file' }
```
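The same matching loop can be sketched standalone for stored databases; the 2nd-order (Euclidean) distance and its mapping to a similarity score are illustrative here, not the library's exact formula:

```js
// find the best match for a descriptor in an annotated database
function findBestMatch(embedding, db, threshold = 0) {
  let best = { name: '', similarity: 0, source: '' };
  for (const entry of db) {
    // Euclidean (2nd-order Minkowski) distance between descriptors
    const dist = Math.sqrt(entry.embedding.reduce((acc, val, i) => acc + ((val - embedding[i]) ** 2), 0));
    const sim = 1 / (1 + dist); // illustrative distance-to-similarity mapping
    if (sim > best.similarity && sim >= threshold) best = { name: entry.name, similarity: sim, source: entry.source };
  }
  return best;
}

// tiny illustrative database with short made-up embeddings
const db = [
  { name: 'person a', source: 'img-a', embedding: [0.1, 0.9] },
  { name: 'person b', source: 'img-b', embedding: [0.8, 0.2] },
];
console.log(findBestMatch([0.15, 0.85], db).name); // closest to person a
```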
The database can be further stored in a JS or JSON file and retrieved when needed, giving you a permanent database of faces that can be expanded over time to cover any number of known faces.
For example, see /demo/embedding.js and the example database /demo/faces.json:

```js
// download db with known faces
let res = await fetch('/demo/faces.json');
db = (res && res.ok) ? await res.json() : [];
```
To achieve optimal results, Human performs the following operations on an image before calculating the feature vector (embedding):
- Crop to the face
- Find the rough face angle and straighten the face
- Detect the mesh
- Find the precise face angle and straighten the face again
- Crop again with narrower margins
- Convert the image to grayscale to avoid the impact of different colorizations
- Normalize brightness to the full range for all images
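The last two steps can be sketched in plain JavaScript on raw RGB values; the crop and rotation steps depend on the detector and mesh, so they are omitted here:

```js
// convert interleaved RGB pixels to grayscale, then stretch brightness to the full 0..1 range
function grayscaleNormalize(rgb) {
  // grayscale: average of the three channels per pixel
  const gray = [];
  for (let i = 0; i < rgb.length; i += 3) gray.push((rgb[i] + rgb[i + 1] + rgb[i + 2]) / 3);
  // normalize: rescale so the darkest pixel becomes 0 and the brightest becomes 1
  const min = Math.min(...gray);
  const max = Math.max(...gray);
  const range = (max - min) || 1; // guard against flat (single-brightness) images
  return gray.map((v) => (v - min) / range);
}

const pixels = [0.2, 0.2, 0.2, 0.6, 0.6, 0.6]; // two gray pixels
console.log(grayscaleNormalize(pixels)); // darkest maps to 0, brightest to 1
```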
Human contains a demo that enumerates a number of images, extracts all faces from them, processes them, and then allows selecting any face, which sorts all faces by similarity.
The demo is available in demo/embedding.html, which uses demo/embedding.js as a JavaScript module.