
The Musical Gestures Toolbox for Python is a collection of high-level modules targeted at researchers working with video recordings. It includes visualization techniques such as motion videos, motion history images, and motiongrams, which allow for looking at video recordings from different temporal and spatial perspectives. It also includes basic computer vision analysis, such as extracting the quantity and centroid of motion and using such features in further analysis.

The toolbox was initially developed to analyze music-related body motion (of musicians, dancers, and perceivers) but is equally useful for other disciplines working with video recordings of humans, such as linguistics, pedagogy, psychology, and medicine.

## Functionality

The Musical Gestures Toolbox contains functions to analyze and visualize video, audio, and motion capture data. There are three categories of functions:

- Preprocessing (trimming, cropping, color adjustments, etc.)
- Visualization (video playback, image display, plotting)
- Processing (videograms, average images, motion images, etc.)
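
A minimal sketch of how the three categories fit together is shown below. The class and method names (`MgVideo`, `motion()`, `average()`, `videograms()`, `show()`) and the trimming parameters (`starttime`, `endtime`) follow the documented API, but please check the current API reference for exact names, defaults, and outputs.

```python
import musicalgestures

# Preprocessing: load a video and trim it to the segment of interest
# (starttime/endtime are in seconds; verify parameter names in the API reference).
mv = musicalgestures.MgVideo('dance.mp4', starttime=5, endtime=25)

# Processing: motion video with quantity/centroid of motion data,
# an average image, and horizontal/vertical videograms.
mv.motion()
mv.average()
mv.videograms()

# Visualization: play back the video.
mv.show()
```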

We have prepared an MGT example Jupyter notebook to showcase how to use the toolbox. The notebook might not display properly on GitHub because of its large size, so we suggest you run it locally. You can also run the notebook online [in Colab](https://colab.research.google.com/github/fourMs/MGT-python/blob/master/musicalgestures/MusicalGesturesToolbox.ipynb).

## Backend

The speed and efficiency of the MGT are possible thanks to the excellent FFmpeg project. Many of the toolbox functions are Python wrappers around FFmpeg commands run in a subprocess.
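
As an illustration of that pattern (a sketch of the general approach, not the toolbox's actual implementation), a trim operation can be expressed as an FFmpeg command run through Python's `subprocess` module:

```python
import subprocess

def trim_video(infile, outfile, start=0.0, duration=10.0):
    """Illustrative wrapper: trim a clip with FFmpeg via subprocess.
    This mirrors the general pattern the toolbox relies on, not its actual code."""
    cmd = [
        'ffmpeg', '-y',        # overwrite the output file without asking
        '-ss', str(start),     # seek to the start time (seconds)
        '-t', str(duration),   # keep this many seconds
        '-i', infile,
        '-c', 'copy',          # stream copy: no re-encoding
        outfile,
    ]
    subprocess.run(cmd, check=True)

trim_video('recording.mp4', 'recording_trim.mp4', start=5, duration=20)
```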

## Problems

Please help improve the toolbox by reporting bugs and requesting features in the [issues section](https://github.com/fourMs/MGT-python/issues).