Allow for more control over interaction with system MIDI #59
Copied from #60 after deciding not to use MIDI, and instead contemplate a better Erlang solution:

The Erlang VM is accurate to the millisecond. In 4/4 time, a fast song at, say, 350 bpm would have the following note durations in any given measure:
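Back-of-the-envelope, assuming 4/4 time with the quarter note getting the beat (the `note_duration_ms/2` helper below is purely illustrative, not part of undermidi):

```erlang
%% Illustrative only: millisecond duration of a note value at a given tempo,
%% assuming the quarter note gets the beat.
%% note_duration_ms(350, 4)  -> ~171.4 ms (quarter note)
%% note_duration_ms(350, 16) -> ~42.9 ms  (16th note)
%% note_duration_ms(350, 64) -> ~10.7 ms  (64th note)
note_duration_ms(Bpm, NoteValue) ->
    QuarterMs = 60000 / Bpm,
    QuarterMs * (4 / NoteValue).
```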
A 64th note at 350 bpm is insane and probably not something that's going to happen very often, if at all. If it does, then Erlang is probably not the target platform for the developer who needs this capability. That being said, we should test this with actual notes + human ears over time. But we'll need to build something to test, first.

Instead of creating separate services for this, with code running in new Erlang processes in this or some other supervision tree, it would be nice to run a timer + queue in the same place where notes are being sent to the system MIDI device. However ...

As soon as we need to write synchronised timings to multiple devices simultaneously, we need a little separation from timer-and-sender in the same Erlang process. Two instruments on separate devices should be able to play completely in time with each other, so whatever is keeping time (and tracking beats, time signature, tempo, measures, etc.) should be able to provide clock info fast enough that the multiple devices can play in time. However ...

The Erlang NIF doesn't offer any native concurrency support for writing to system MIDI devices, so having a bunch of processes that request timing info from a central source and then send messages themselves doesn't really match up well with the lowest-level dependency. It might make more sense to have a single Erlang process that dispatches to system MIDI. If that's the case, then we probably want something like the following:
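Something along these lines, as a minimal sketch; the module name `midi_dispatch`, the `midi_nif:send/2` call, and the queue and message shapes are all assumptions rather than actual undermidi code:

```erlang
-module(midi_dispatch).
-behaviour(gen_server).

-export([start_link/0, send_at/3]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2]).

%% Illustrative only: one process owns all writes to the system MIDI NIF
%% and releases queued messages when their timestamps come due.

start_link() ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).

%% Queue an encoded MIDI message for a device at an absolute monotonic time (ms).
send_at(Device, Msg, AtMs) ->
    gen_server:cast(?MODULE, {queue, Device, Msg, AtMs}).

init([]) ->
    {ok, #{queue => []}, 1}.   %% tick right away

handle_cast({queue, Device, Msg, AtMs}, #{queue := Q} = State) ->
    Q1 = lists:keysort(1, [{AtMs, Device, Msg} | Q]),
    {noreply, State#{queue := Q1}, 1}.

handle_call(_Req, _From, State) ->
    {reply, ok, State, 1}.

%% On each tick, send everything that is due, then wait until the next
%% queued timestamp (or go idle if the queue is empty).
handle_info(timeout, #{queue := Q} = State) ->
    Now = erlang:monotonic_time(millisecond),
    {Due, Later} = lists:splitwith(fun({At, _, _}) -> At =< Now end, Q),
    [midi_nif:send(Device, Msg) || {_, Device, Msg} <- Due],  %% hypothetical NIF wrapper
    case Later of
        [] ->
            {noreply, State#{queue := []}};
        [{Next, _, _} | _] ->
            {noreply, State#{queue := Later}, max(1, Next - Now)}
    end.
```

Keeping the queue and the NIF writes in a single process sidesteps the NIF's lack of concurrency support while still letting any number of clients submit messages.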
To support this, we would need the following:

* …
---
Instead of creating a new gen_server, we might just be able to re-use the current …

So I guess the question is: "Do the gen_server's current responsibilities conflict with those of queuing MIDI messages on behalf of all device connections and making time calculations such that those messages get sent at the right time and in the right order?" Feels like a good fit. Don't think we'd even need to rename from the current …

---
Hrm, the architecture proposed in this ticket does call into question the current setup of the supervision tree being used to model MIDI device connections ... at the very least, I think we'll need to rename the worker gen_server (since it will be even less of a real "connection" than before). If the gen_server is going to take all messages, do time maths on them, encode them, queue them up, and ultimately send them to the OS' underlying MIDI system, does it still make sense for the worker processes (the …)?

---

Yeah, I think so. It's not only a good model, it's a good implementation (separation of concerns, etc.). I think we just need to rename …

---
Currently, when a sequence of notes is sent to undermidi to play, there is no way to interrupt that, except for:

* … `timer:apply_after` (for notes off) or making the (hacky) `timer:sleep` calls for pauses between notes, or
* …

That's not really tenable, long-term.
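For reference, the kind of scheduling being described looks roughly like this; `note_on/2` and `note_off/2` are hypothetical stand-ins for whatever undermidi actually calls:

```erlang
-module(play_hack).
-export([play_sequence/2, note_off/2]).

%% Hypothetical stand-ins for the real note-sending calls.
note_on(Device, Note)  -> io:format("note on  ~p ~p~n", [Device, Note]).
note_off(Device, Note) -> io:format("note off ~p ~p~n", [Device, Note]).

%% Illustrative only: once a sequence like this is started, nothing short of
%% cancelling the timers or killing the process can stop the remaining notes.
play_note(Device, Note, DurationMs) ->
    note_on(Device, Note),
    %% one-shot timer for the matching note-off
    {ok, _TRef} = timer:apply_after(DurationMs, ?MODULE, note_off, [Device, Note]).

play_sequence(Device, Notes) ->
    lists:foreach(
      fun({Note, DurationMs}) ->
              play_note(Device, Note, DurationMs),
              timer:sleep(DurationMs)   %% hacky pause between notes
      end,
      Notes).
```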
There are a couple of use cases that are driving this work:

* …
To support these use cases, we need to do a couple of significant refactors:

* Create a new `gen_server` (sibling to the device manager supervision tree) to take all `{midi, ...}` messages and determine whether to:
  * … (`uth.note:duration-fn`)
* Update `uth.note` with note and rest names
* Rename `undermidi.device.conn` to `undermidi.device.client`
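Assuming the `{midi, ...}` messages carry a note plus timing info (the module, function, and tuple shapes below are guesses, not the real undermidi protocol), the time maths for a single note might sketch out like this:

```erlang
-module(midi_timing).
-export([schedule_note/3]).

%% Illustrative only: turn one note event into the two timed MIDI messages
%% the new gen_server would queue. NoteValue is 4 for a quarter note,
%% 8 for an eighth, and so on; NowMs is the current monotonic time in ms.
schedule_note(Bpm, {NoteNumber, Velocity, NoteValue}, NowMs) ->
    DurationMs = round((60000 / Bpm) * (4 / NoteValue)),
    [{NowMs,              {note_on,  NoteNumber, Velocity}},
     {NowMs + DurationMs, {note_off, NoteNumber, 0}}].
```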
Related tasks: