
Usage

The Human library does not require any special initialization.
All configuration is done through a single JSON object, and all model weights are loaded dynamically upon first use
(and only then; Human will not load weights for models it does not need according to the configuration).
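
For example, a configuration override is a plain nested JSON object where individual modules can be switched on or off (a minimal sketch; the property names below are assumptions based on the default configuration, so inspect human.defaults for the exact schema of your version):

  // a sketch of a partial configuration override; property names are assumptions,
  // check 'human.defaults' for the exact schema in your version
  const myConfig = {
    backend: 'webgl',           // which tfjs backend to run on
    face: { enabled: true },    // load and run the face modules
    body: { enabled: false },   // body pose weights are never loaded
    hand: { enabled: false },   // hand detection weights are never loaded
  };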


There is only ONE method you need:

  // 'image': can be any supported image object: HTMLImage, HTMLVideo, HTMLMedia, Canvas, Tensor4D
  // 'config': optional parameter used to override any options present in default configuration  
  // configuration is fully dynamic and can change between different calls to 'detect()'  
  const result = await human.detect(image, config?)

or if you want to use promises

  human.detect(image, config?).then((result) => {
    // your code
  })
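
Since configuration is evaluated on every call, a different partial override can be passed each time, for example to enable an additional module only when needed (a sketch reusing the myConfig object from above; any newly enabled models are loaded at that point):

  // first call: run with the modules enabled in myConfig
  const first = await human.detect(image, myConfig);
  // later call: additionally enable hand detection; its weights are loaded only now
  const second = await human.detect(image, { ...myConfig, hand: { enabled: true } });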

Additionally, the Human library exposes several objects and methods:

  human.config        // access to configuration object, normally set as parameter to detect()
  human.defaults      // read-only view of default configuration object
  human.models        // dynamically maintained object of all loaded models
  human.tf            // instance of tfjs used by human
  human.state         // <string> describing current operation in progress
                      // progresses through: 'config', 'check', 'backend', 'load', 'run:<model>', 'idle'
  human.load(config)  // explicitly call load method that loads configured models
                      // if you want to pre-load them instead of on-demand loading during 'human.detect()'
  human.image(image, config?) // runs image processing without detection and returns canvas
  human.warmup(config, image?) // warms up human library for faster initial execution after loading
                               // if image is not provided, an internal sample is generated
  human.simmilarity(embedding1, embedding2) // runs similarity calculation between two provided embedding vectors
                                            // vectors for source and target must have been previously generated
                                            // using the face.embedding module
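
For example, to pay the model download and warm-up cost once at startup instead of inside the first call to detect() (a sketch; error handling omitted, and myConfig is the override object from the earlier example):

  // pre-load all models enabled in the configuration instead of loading them on demand
  await human.load(myConfig);
  // run a warm-up pass; with no image argument an internal sample is generated
  await human.warmup(myConfig);
  console.log(human.state);   // should read 'idle' once warm-up completes
  console.log(human.models);  // dynamically maintained object of the models now loaded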

Note that when using the Human library in NodeJS, you must load and decode the image into a tensor before passing it for detection, and dispose of it afterwards.
The expected input format is a Tensor4D of shape [1, height, width, 3] with dtype float32.

For example:

  // assumes '@tensorflow/tfjs-node' and '@vladmandic/human' are installed
  // and that this code runs inside an async function
  const fs = require('fs');
  const tf = require('@tensorflow/tfjs-node');
  const Human = require('@vladmandic/human');

  const imageFile = '../assets/sample1.jpg';
  const buffer = fs.readFileSync(imageFile);    // read image file into a buffer
  const decoded = tf.node.decodeImage(buffer);  // decode into a tensor of shape [height, width, 3]
  const casted = decoded.toFloat();             // cast to float32
  const image = casted.expandDims(0);           // add batch dimension -> [1, height, width, 3]
  decoded.dispose();                            // release intermediate tensors
  casted.dispose();
  console.log('Processing:', image.shape);
  const human = new Human.Human();
  const result = await human.detect(image, config);
  image.dispose();                              // release the input tensor once detection is done
