The NeuroSync Player streams facial blendshapes into Unreal Engine 5 in real time via LiveLink, enabling facial animation driven by audio input.
- Real-time facial animation
- Integration with Unreal Engine 5 via LiveLink
- Supports blendshapes generated from audio input
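To make the streaming idea concrete, here is an illustrative sketch of packing blendshape weights into a frame and pushing it over UDP to a listening LiveLink source. The frame layout, blendshape subset, port, and JSON encoding are assumptions for the sketch, not the player's actual wire format.

```python
# Illustrative only: the frame layout, port, and encoding are assumptions,
# not NeuroSync's actual LiveLink wire format.
import json
import socket

# ARKit-style blendshape names (truncated; ARKit defines 52 in total)
BLENDSHAPE_NAMES = [
    "jawOpen", "mouthSmileLeft", "mouthSmileRight",
    "eyeBlinkLeft", "eyeBlinkRight",
]

def make_frame(values):
    """Pack blendshape weights (0.0-1.0) into a JSON-encoded frame."""
    return json.dumps(dict(zip(BLENDSHAPE_NAMES, values))).encode("utf-8")

def send_frame(frame, host="127.0.0.1", port=11111):
    """Fire one frame over UDP at a (hypothetical) listening LiveLink source."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(frame, (host, port))
```

In practice frames would be sent at the animation rate (e.g. 30 or 60 per second) so the character's face stays in sync with the audio.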
To generate facial blendshapes from audio, you'll need the NeuroSync audio-to-face blendshape transformer model. You can:
- Apply for Alpha API access to use the model without hosting it locally.
- Or, to host the model yourself, set up the NeuroSync Local API.
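Either way, the player ends up sending audio to an HTTP endpoint and receiving blendshape frames back. The sketch below shows what that round trip might look like against a locally hosted API; the endpoint path, default port, payload encoding, and JSON response shape are assumptions, not the documented interface.

```python
# Hedged sketch: endpoint path, port, payload encoding, and response shape
# are assumptions about the local API, not its documented interface.
import json
import urllib.request

LOCAL_API_URL = "http://127.0.0.1:5000/audio_to_blendshapes"  # assumed default

def build_request(audio_bytes: bytes, url: str = LOCAL_API_URL) -> urllib.request.Request:
    """Build a POST request carrying raw audio bytes."""
    return urllib.request.Request(
        url,
        data=audio_bytes,
        headers={"Content-Type": "application/octet-stream"},
        method="POST",
    )

def audio_to_blendshapes(audio_bytes: bytes, url: str = LOCAL_API_URL):
    """Send audio to the API and return the decoded blendshape frames."""
    with urllib.request.urlopen(build_request(audio_bytes, url)) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

The hosted alpha API would be called the same way with a different base URL (and, presumably, an API key).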
The player can connect to either the local API or the alpha API, depending on your needs. To switch between the two, change the boolean value in the utils/neurosync_api_connect.py file.
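The toggle might look something like the following. The flag name and endpoint URLs here are assumptions for illustration, not the file's actual contents; check utils/neurosync_api_connect.py for the real flag.

```python
# Hypothetical sketch of the toggle in utils/neurosync_api_connect.py.
# The flag name and URLs are assumptions, not the file's actual contents.

USE_LOCAL_API = True  # True -> locally hosted API, False -> hosted alpha API

LOCAL_API_URL = "http://127.0.0.1:5000/audio_to_blendshapes"       # assumed
ALPHA_API_URL = "https://api.neurosync.info/audio_to_blendshapes"  # assumed

def get_api_url(use_local: bool = USE_LOCAL_API) -> str:
    """Return the endpoint the player should send audio to."""
    return LOCAL_API_URL if use_local else ALPHA_API_URL
```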
Visit neurosync.info to sign up for alpha access.
