Gesture-Generation

Gesture Generation and recognition for HCI.

Main module:
- Takes an audio sample, either recorded by the user through the recorder module or pre-recorded.
- Sets the mood from the list of possible moods/body languages in the dataset.
- Generates the output skeleton frames as a .bvh file, a format supported by software such as Autodesk MotionBuilder. A minimal sketch of this flow follows the list below.
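The sketch below only illustrates the audio-in, mood-select, BVH-out pipeline and the layout of a minimal .bvh file; the function names, mood list, and command-line arguments are assumptions, not taken from this repository.

```python
# Hypothetical sketch of the main-module flow; the real model, mood list,
# and argument names are assumptions, not taken from this repository.
import argparse

MOODS = ["neutral", "happy", "angry", "sad"]  # placeholder mood list

def generate_motion(audio_path: str, mood: str, n_frames: int = 120):
    """Placeholder for the gesture model: one row of joint channels per frame."""
    # Real code would map audio features plus the chosen mood to joint rotations.
    return [[0.0, 0.0, 0.0, 0.0, 0.0, 0.0] for _ in range(n_frames)]

def write_bvh(frames, out_path: str, frame_time: float = 1.0 / 30.0):
    """Write a minimal single-joint BVH file (position + rotation channels)."""
    with open(out_path, "w") as f:
        f.write("HIERARCHY\n")
        f.write("ROOT Hips\n{\n")
        f.write("  OFFSET 0.0 0.0 0.0\n")
        f.write("  CHANNELS 6 Xposition Yposition Zposition Zrotation Xrotation Yrotation\n")
        f.write("  End Site\n  {\n    OFFSET 0.0 10.0 0.0\n  }\n")
        f.write("}\n")
        f.write("MOTION\n")
        f.write(f"Frames: {len(frames)}\n")
        f.write(f"Frame Time: {frame_time}\n")
        for row in frames:
            f.write(" ".join(f"{v:.6f}" for v in row) + "\n")

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Generate a .bvh gesture from an audio sample")
    parser.add_argument("audio", help="path to a recorded or pre-recorded audio file")
    parser.add_argument("--mood", choices=MOODS, default="neutral")
    parser.add_argument("--out", default="output.bvh")
    args = parser.parse_args()

    frames = generate_motion(args.audio, args.mood)
    write_bvh(frames, args.out)
    print(f"Wrote {len(frames)} frames to {args.out}")
```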

RecorderPy:
- Records user audio from the default audio input device.
- Applies noise reduction to remove background and white noise.
- Generates a command combining the given audio sample and the motion capture file (see the sketch after this list).
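A minimal sketch of the recorder flow, assuming the third-party libraries sounddevice, noisereduce, and soundfile; the actual RecorderPy module may use different libraries, and the command format produced by build_command is hypothetical.

```python
# Hypothetical recorder sketch; sounddevice, noisereduce, and soundfile are
# assumptions -- the repository's RecorderPy may use different libraries.
import numpy as np
import sounddevice as sd
import noisereduce as nr
import soundfile as sf

SAMPLE_RATE = 16000      # Hz
DURATION = 5             # seconds to record

def record_audio() -> np.ndarray:
    """Record mono audio from the default input device."""
    audio = sd.rec(int(DURATION * SAMPLE_RATE), samplerate=SAMPLE_RATE, channels=1)
    sd.wait()  # block until recording finishes
    return audio[:, 0]

def denoise(audio: np.ndarray) -> np.ndarray:
    """Spectral-gating noise reduction to suppress background and white noise."""
    return nr.reduce_noise(y=audio, sr=SAMPLE_RATE)

def build_command(audio_path: str, mocap_path: str, mood: str = "neutral") -> str:
    """Assemble a command for the main module (argument names are assumptions)."""
    return f"python main.py {audio_path} --mocap {mocap_path} --mood {mood}"

if __name__ == "__main__":
    clean = denoise(record_audio())
    sf.write("recording.wav", clean, SAMPLE_RATE)
    print(build_command("recording.wav", "capture.bvh"))
```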

Link to folder containing sample outputs: https://drive.google.com/drive/folders/1mUO8o53v_g_P_f9gVf_ZL7BsND3Dijy8?usp=sharing

Link to video demo: https://drive.google.com/file/d/12Vm0ItBcsCewfZqgUqOZ6GXK4PlFFbTz/view?usp=sharing

