AvatarWebKit is an SDK developed by Hallway and optimized for the web that provides real-time blend shapes from a camera feed, video, or image. The SDK also gives head X/Y position, depth (Z), and rotation (pitch, roll, yaw) for each frame. AvatarWebKit runs at 60 FPS and provides the 52 ARKit-compatible blend shapes.
In the future, the SDK will be able to provide rigid body frame and hand positions as well.
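For reference, a single frame of output might be shaped roughly like the TypeScript sketch below. The names (`blendShapes`, `rotation`, `position`) are illustrative assumptions, not the SDK's published types; consult the package's own type definitions for the real shape.

```ts
// Hypothetical sketch of one frame of predictor output.
// Field names here are illustrative; see the SDK's TypeScript
// definitions for the actual types.
interface FramePrediction {
  // 52 ARKit-style coefficients, each typically normalized to [0, 1],
  // e.g. { jawOpen: 0.3, eyeBlinkLeft: 0.9, ... }
  blendShapes: Record<string, number>

  // Head rotation as pitch/roll/yaw
  rotation: { pitch: number; roll: number; yaw: number }

  // Head X/Y position plus depth (Z)
  position: { x: number; y: number; z: number }
}
```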
Hallway's avatar technology is driven by machine learning models that predict highly accurate blend shapes from images and video feeds in real time. The ML pipeline is optimized for real-time video to achieve both high frame rates and lifelike animations.
Our vision for the future is an "open metaverse" where you can take your character with you anywhere. We believe tools like AvatarWebKit will help pave that road. The models we've provided here are available to use in your applications for free. Get in touch with us about making your characters compatible with Hallway!
```bash
# yarn
yarn add @quarkworks-inc/avatar-webkit

# npm
npm install @quarkworks-inc/avatar-webkit
```
Start your predictor:
```js
import { AUPredictor } from '@quarkworks-inc/avatar-webkit'

// ...

let predictor = new AUPredictor({
  apiToken: '<YOUR_API_TOKEN>', // replace with your API token
  shouldMirrorOutput: true,
})

// Request a camera feed to run predictions on
let stream = await navigator.mediaDevices.getUserMedia({
  audio: false,
  video: {
    width: { ideal: 640 },
    height: { ideal: 360 },
    facingMode: 'user'
  }
})

// Receive per-frame results via a callback...
predictor.onPredict = (results) => {
  console.log(results)
}

// ...or, if you like RxJS, subscribe to the observable stream
predictor.dataStream.subscribe(results => {
  console.log(results)
})

predictor.start({ stream })
```
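A common next step is to drive a Three.js morph-target mesh from each prediction. The sketch below is an assumption-laden illustration: it presumes the results expose ARKit-style blend shape names as numeric weights (see the shape sketch above), and that `faceMesh` was loaded elsewhere (e.g. via `GLTFLoader`) with morph targets using the same names. Adjust the mapping for your own rig.

```ts
import * as THREE from 'three'

// Illustrative only: assumes `blendShapes` maps ARKit-style names
// (e.g. 'jawOpen') to weights in [0, 1], and that the mesh's morph
// targets use matching names.
function applyBlendShapes(faceMesh: THREE.Mesh, blendShapes: Record<string, number>) {
  for (const [name, weight] of Object.entries(blendShapes)) {
    const index = faceMesh.morphTargetDictionary?.[name]
    if (index !== undefined) {
      faceMesh.morphTargetInfluences![index] = weight
    }
  }
}

predictor.onPredict = (results) => {
  applyBlendShapes(faceMesh, results.blendShapes)
}
```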
- Basic example running the predictor without rendering
- Predictor + React + Three.js (basic)
- Video Call Style UI
- Using our rendering kit module
An API key is a unique identifier that allows your code to authenticate with our server when using the SDK. You can sign up for one here.
We recommend Chromium-based browsers for the best performance, but all other major browsers are supported. We are currently working on performance improvements for Safari, Firefox, and Edge.
The models currently run on mobile, but they still need to be optimized. We are working on configuration options that will let you choose to run lighter models.
We do not have an official SDK yet, but our ML pipeline is native-first and the models are used in our macOS app Hallway Tile. We have the capability to create SDKs for most common platforms (e.g. macOS/Windows/Linux, iOS/Android). Each SDK will follow the same data standard for blend shapes/predictions and will include encoders for portability between environments. This means you can do some creative things between native, web, and more!
If you are interested in native SDKs, we'd love to hear from you!
Yes, depending on your needs. There may be a couple of rough edges at the moment, but the SDK has been in use internally at our company for over a year and is in production with several pilot companies.
We currently make no SLAs for the SDK, but we are happy to work with you on any improvements you need to get it running in production.
YES!!! We are currently in open beta and would love to hear your feedback. Contact us on Discord or by email.
We are active daily on our Discord and can help with any problems you may have! If Discord doesn't work for you, reach out to us by email.
Our team is primarily in U.S. timezones, but we are pretty active on Discord and over email! We'd love to hear your thoughts, feedback, and ideas, or provide any support you need.
If you are using Three.js, we've released this open source tooling module you can import freely. It pairs especially well with video-call style apps, as we provide a Three.js world setup that works well for rendering multiple avatars on screen at once, Zoom-style.
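To make the idea concrete, here is one way to tile several avatar views onto a single canvas using Three.js scissor regions. This illustrates the general approach only; it is not the rendering kit's actual API, and the `AvatarView` shape is a hypothetical stand-in for however you organize your scenes.

```ts
import * as THREE from 'three'

// Zoom-style grid rendering via scissor regions (general technique,
// not the rendering kit's API).
const renderer = new THREE.WebGLRenderer({ antialias: true })
renderer.setScissorTest(true)

// Hypothetical per-participant pairing of a scene and its camera
interface AvatarView {
  scene: THREE.Scene
  camera: THREE.PerspectiveCamera
}

function renderGrid(views: AvatarView[], width: number, height: number) {
  const cols = Math.ceil(Math.sqrt(views.length))
  const rows = Math.ceil(views.length / cols)
  const w = Math.floor(width / cols)
  const h = Math.floor(height / rows)

  views.forEach((view, i) => {
    // WebGL's origin is the bottom-left corner, so flip the row index
    const x = (i % cols) * w
    const y = height - h - Math.floor(i / cols) * h
    renderer.setViewport(x, y, w, h)
    renderer.setScissor(x, y, w, h)
    view.camera.aspect = w / h
    view.camera.updateProjectionMatrix()
    renderer.render(view.scene, view.camera)
  })
}
```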
More coming :)