hmssg/NeuroSync_Player

NeuroSync Player

Overview

The NeuroSync Player streams facial blendshapes into Unreal Engine 5 in real time via LiveLink, enabling facial animation driven by audio input.

Features:

  • Real-time facial animation
  • Integration with Unreal Engine 5 via LiveLink
  • Supports blendshapes generated from audio inputs
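The data flow implied by these features can be sketched in a few lines. The curve names below are standard ARKit facial blendshape names, which LiveLink-driven faces in Unreal Engine 5 commonly consume; the frame structure and clamping are illustrative assumptions, not the player's actual internal data model.

```python
from dataclasses import dataclass
from typing import Dict, List

# A small subset of the standard ARKit facial blendshape curve names.
ARKIT_CURVES: List[str] = [
    "eyeBlinkLeft", "eyeBlinkRight", "jawOpen",
    "mouthSmileLeft", "mouthSmileRight",
]

@dataclass
class BlendshapeFrame:
    """One frame of facial animation: a timestamp plus curve weights in [0, 1]."""
    timestamp: float
    curves: Dict[str, float]

def frame_from_model_output(timestamp: float, raw_weights: List[float]) -> BlendshapeFrame:
    """Pair raw model outputs with curve names, clamping each weight to [0, 1]."""
    clamped = {
        name: min(1.0, max(0.0, w))
        for name, w in zip(ARKIT_CURVES, raw_weights)
    }
    return BlendshapeFrame(timestamp=timestamp, curves=clamped)
```

Frames like these would then be pushed to Unreal Engine at the animation frame rate over a LiveLink source.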

NeuroSync Model

To generate facial blendshapes from audio, you'll need the NeuroSync audio-to-face blendshape transformer model.
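In outline, the player sends raw audio to the model's API and receives blendshape weights back. The sketch below only builds such a request; the endpoint URL, path, and payload format are assumptions for illustration, defined in practice by the NeuroSync API rather than by this snippet.

```python
import urllib.request

# Hypothetical local endpoint; the real host, path, and content type are
# determined by the NeuroSync API, not by this sketch.
DEFAULT_URL = "http://127.0.0.1:5000/audio_to_blendshapes"

def build_blendshape_request(audio_bytes: bytes, url: str = DEFAULT_URL) -> urllib.request.Request:
    """Build (but do not send) a POST request carrying raw audio to the model API."""
    return urllib.request.Request(
        url,
        data=audio_bytes,
        headers={"Content-Type": "application/octet-stream"},
        method="POST",
    )
```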

Switching Between Local and Non-Local API

The player can connect to either the local API or the alpha API, depending on your needs. To switch between the two, change the boolean flag in utils/neurosync_api_connect.py.
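The toggle amounts to something like the following. The flag name and both URLs here are placeholders, not the repository's actual values; check utils/neurosync_api_connect.py for the real ones.

```python
# Hypothetical sketch of the boolean toggle in utils/neurosync_api_connect.py.
# Flag name and URLs are illustrative placeholders.
USE_LOCAL_API = True  # set to False to use the hosted alpha API

LOCAL_URL = "http://127.0.0.1:5000/audio_to_blendshapes"
ALPHA_URL = "https://api.neurosync.info/audio_to_blendshapes"

def get_api_url(use_local: bool = USE_LOCAL_API) -> str:
    """Return the endpoint the player should send audio to."""
    return LOCAL_URL if use_local else ALPHA_URL
```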

Visit neurosync.info to sign up for alpha access.
