Gesture Generation and Recognition for HCI

Main module:
- Takes an audio sample, either recorded by the user via the recorder module or a pre-recorded audio file.
- Sets the mood from the list of possible moods/body languages in the dataset.
- Generates the output skeleton frames in the .bvh file format, which is supported by software such as Autodesk MotionBuilder.
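A minimal sketch of what the .bvh output looks like, assuming a single hypothetical root joint and made-up frame data (the project's actual skeleton hierarchy is not shown here):

```python
# Hypothetical sketch: write a minimal single-joint BVH file in the
# same format the main module outputs (HIERARCHY block + MOTION block).

def write_bvh(path, frames, frame_time=1 / 30):
    """Write a one-joint BVH file. `frames` is a list of
    (x, y, z, zrot, xrot, yrot) tuples matching the CHANNELS order."""
    lines = [
        "HIERARCHY",
        "ROOT Hips",  # joint name is an assumption for illustration
        "{",
        "  OFFSET 0.0 0.0 0.0",
        "  CHANNELS 6 Xposition Yposition Zposition Zrotation Xrotation Yrotation",
        "  End Site",
        "  {",
        "    OFFSET 0.0 10.0 0.0",
        "  }",
        "}",
        "MOTION",
        f"Frames: {len(frames)}",
        f"Frame Time: {frame_time:.6f}",
    ]
    for frame in frames:
        lines.append(" ".join(f"{v:.4f}" for v in frame))
    with open(path, "w") as fh:
        fh.write("\n".join(lines) + "\n")

# Two made-up motion frames
write_bvh("sample.bvh", [(0, 0, 0, 0, 0, 0), (0, 1, 0, 5, 0, 0)])
```

Tools like Autodesk MotionBuilder read the HIERARCHY block to build the skeleton, then play back one line of channel values per frame from the MOTION block.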
RecorderPy:
- Records user audio from the default audio input device.
- Applies noise reduction to remove background and white noise.
- Generates a command pairing the given audio sample with the motion capture file.
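The noise-reduction step could be sketched as a simple energy-based noise gate (an assumption; the project's actual denoising method is not specified here), which zeroes out frames whose energy falls below a threshold:

```python
# Sketch of a noise gate, assuming frame-wise RMS thresholding;
# frame_len and threshold values are illustrative, not the project's.
import numpy as np

def noise_gate(samples, frame_len=256, threshold=0.02):
    """Zero out frames whose RMS is below `threshold`,
    suppressing low-level background/white noise."""
    out = samples.astype(float).copy()
    for start in range(0, len(out), frame_len):
        frame = out[start:start + frame_len]
        rms = np.sqrt(np.mean(frame ** 2))
        if rms < threshold:
            out[start:start + frame_len] = 0.0
    return out

# Synthetic input: quiet noise followed by a louder 440 Hz tone
np.random.seed(0)
n = 4096
t = np.arange(n) / 16000.0
noise = 0.005 * np.random.randn(n)
tone = 0.5 * np.sin(2 * np.pi * 440 * t)
cleaned = noise_gate(np.concatenate([noise, tone]))
```

The quiet first half is gated to silence while the tone passes through unchanged; a real recorder would apply this before handing the sample to the main module.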
Link to folder containing sample outputs: (https://drive.google.com/drive/folders/1mUO8o53v_g_P_f9gVf_ZL7BsND3Dijy8?usp=sharing)
Link to video demo: (https://drive.google.com/file/d/12Vm0ItBcsCewfZqgUqOZ6GXK4PlFFbTz/view?usp=sharing)