# GIMLeT – Gestural Interaction Machine Learning Toolkit

GIMLeT is a set of tools for easy gesture analysis, interactive machine learning, and gesture-sound interaction design. It is written for Max, a visual programming environment popular among artists and researchers in interaction design, electronic music, and media arts. GIMLeT features a modular design that makes it easy to share meaningfully structured data between gesture tracking devices, wearable sensors, interactive machine learning modules, and sound synthesis modules. This makes it a useful resource for professional applications in the arts as well as for teaching the basics of interactive machine learning to students without a computer programming background.

The project was initiated as a collaboration between Federico Visi and the Hochschule für Musik und Theater Hamburg, Germany, within the framework of the KiSS: Kinetics in Sound and Space project. The design of the software was inspired by the work of Rebecca Fiebrink and her Wekinator software, and by the research of Atau Tanaka and Michael Zbyszyński, with whom Visi collaborated while at Goldsmiths, University of London. Further development was carried out by Federico Visi as part of a postdoctoral research position at GEMM))) Gesture Embodiment and Machines in Music – Piteå School of Music – Luleå University of Technology, Sweden. The package is being used and developed further in several projects, including:

GIMLeT is free and open source and is available on GitHub: https://github.com/federicoVisi/GIMLeT
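As a rough illustration of the kind of structured data exchange described above, gesture data from an external device can be sent to a Max patch as OSC messages. The minimal sketch below uses the `python-osc` library; the port number and address pattern are illustrative assumptions, not GIMLeT's documented interface, and should be adapted to the receiving patch.

```python
# Hypothetical example: streaming wearable sensor data into Max via OSC.
# Port and address pattern are assumptions for illustration only.
from pythonosc.udp_client import SimpleUDPClient

# Max patch assumed to be listening for UDP/OSC on this port
client = SimpleUDPClient("127.0.0.1", 7400)

# Send one frame of accelerometer data as a structured OSC message
accel = [0.02, -0.98, 0.11]  # x, y, z in g (example values)
client.send_message("/sensor/accel", accel)
```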

For more information on interactive machine learning of musical gesture, please refer to this book chapter:

Visi, F. G., & Tanaka, A. (2021). Interactive Machine Learning of Musical Gesture. In E. R. Miranda (Ed.), Handbook of Artificial Intelligence for Music: Foundations, Advanced Approaches, and Developments for Creativity. Springer. Preprint on arXiv (open access): http://arxiv.org/abs/2011.13487. Final version on SpringerLink (paywall): https://link.springer.com/chapter/10.1007/978-3-030-72116-9_27