Proposition for new version of MiM+LMA feature #6
musicinmotion-dev started this conversation in Ideas
- Recording video, sound, and sensor data from my bell, more like a whole piece or a movement. Then extracting objects using Schaeffer's/Godøy's concept of gestural-sonorous objects, picking parts between 0.5 and 5 seconds, and cutting out the sensor data for these objects. Then analysing them using LMA together with Anchen Froneman, setting the sliders in your GIMLeT model, and saving each object in a ”library”. Finally, using these prerecorded objects with their sensor data to train the model.
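A minimal sketch of what such an object ”library” could look like, assuming the sensor stream is available as a NumPy array with a known sample rate. The names `GestureObject`, `cut_object`, and `save_library`, the LMA slider dictionary, and the JSON layout are all assumptions for illustration, not GIMLeT's actual storage format.

```python
import json
from dataclasses import dataclass, asdict, field

import numpy as np


@dataclass
class GestureObject:
    """One gestural-sonorous object cut out of a longer recording."""
    label: str        # e.g. "strike", "swing"
    start_s: float    # start of the excerpt within the full take
    end_s: float      # end of the excerpt (0.5-5 s after start)
    lma_sliders: dict = field(default_factory=dict)   # LMA slider settings, e.g. {"Weight": 0.7}
    sensor_data: list = field(default_factory=list)   # the cut-out sensor samples


def cut_object(recording, rate, start_s, end_s, label, lma_sliders):
    """Cut one object (0.5-5 s) out of the full sensor recording."""
    assert 0.5 <= end_s - start_s <= 5.0, "objects should be 0.5-5 seconds long"
    excerpt = recording[int(start_s * rate):int(end_s * rate)]
    return GestureObject(label, start_s, end_s, lma_sliders, excerpt.tolist())


def save_library(objects, path):
    """Save the annotated objects so they can be reloaded later for training."""
    with open(path, "w") as f:
        json.dump([asdict(o) for o in objects], f, indent=2)


# Example usage with made-up numbers:
# library = [cut_object(recording, 100.0, 12.0, 14.5, "strike", {"Weight": 0.7, "Time": 0.3})]
# save_library(library, "bell_objects.json")
```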
In this way, we could evaluate the LMA level of each object after recording and then pick a set of ”objects” for training the model. Another benefit is that we could train the model on the same data but with different feature sets, and get a better understanding of how the model changes when different features are used.
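To give an idea of that second point, here is a rough sketch of training on the same object library with different feature sets and comparing the results. The hand-picked features, the k-NN classifier, and the cross-validation setup are assumptions for illustration, not how GIMLeT actually trains its models.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Two hypothetical feature sets computed from the same cut-out sensor data.
FEATURE_SETS = {
    "basic":  lambda x: [x.mean(), x.std()],
    "energy": lambda x: [x.mean(), x.std(),
                         np.sqrt((x ** 2).mean()),            # RMS energy
                         np.abs(np.diff(x, axis=0)).mean()],  # mean absolute change
}


def evaluate(objects, feature_set):
    """Cross-validate one small classifier per feature set on the same library."""
    extract = FEATURE_SETS[feature_set]
    X = np.array([extract(np.asarray(o.sensor_data, dtype=float)) for o in objects])
    y = [o.label for o in objects]
    return cross_val_score(KNeighborsClassifier(n_neighbors=3), X, y, cv=3).mean()


# Train on the same library with each feature set and compare the scores:
# for name in FEATURE_SETS:
#     print(name, evaluate(library, name))
```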