I appear to have run into some issues with OSC, or, if this isn't a bug affecting just me, some concerns about how it's being handled in the backend.
I'm including a basic Python script as a gist here to serve as an MRE.
Problem 1
The first problem is that OSC data appears to exit the input node by being popped from a queue, rather than the node using the most recently received sample.
If I send a load of data to an OSC endpoint, i.e. faster than the sample rate of the OSC input node, and then the sending stream terminates, the node keeps producing values. This obviously wouldn't be normal behaviour when using a real device or upstream pipeline, but I uncovered it while preparing an MRE for Problem 2, and network issues do occur where data arrives in bursts. It would appear that the engine is buffering the messages, which I presume are then read off according to the stated sample rate of the OSC node. I will happily take a look at the code to see what is going on, but I was already in the middle of something today, so I'm just looking at it black-box for now.
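For reference, the burst test looks roughly like this (a minimal sketch using python-osc; the port 4545 and address /eeg/ch0 are just placeholders for whatever the OSC input node is configured to listen on):

```python
# Minimal sketch of the burst test using python-osc.
# 127.0.0.1:4545 and /eeg/ch0 are placeholders for the OSC input node's
# configured port and address.
import time
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 4545)

# Send ~5 seconds' worth of samples at roughly 1000 Hz, i.e. well above the
# input node's configured sample rate (e.g. 250 Hz), then stop entirely.
for i in range(5000):
    client.send_message("/eeg/ch0", float(i % 100))
    time.sleep(0.001)

# After this script exits, the OSC node in Studio keeps emitting values for
# several more seconds, which is what suggests the excess packets were queued.
```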
I should also mention that these messages aren't using bundling (IIRC Neuromore doesn't support this yet anyway), so there is no timestamp on the OSC packets. If there were, it might make sense to reconstruct them in the appropriate order and at the right sample rate, to better represent the real stream and smooth over latency disturbances in the network; but in this situation that shouldn't be the case, and our best timestamp on the data is the point at which it comes off the wire and into the engine.
Problem 2
The second problem is related and, together with the first, indicates that a buffer is probably being used. If I send some samples, then my sending stream sleeps for a while, and then continues sending data, the signal view plots this second set of values right after the first. The period when the stream was paused is not shown on the graph; instead it is as if the stream resumed at the timestamp when the next packet was expected before the stream was cut, which is a long way off from when that packet was actually received.
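The pause/resume case looks roughly like this (again a minimal sketch, assuming python-osc, with the same placeholder port and address):

```python
# Minimal sketch of the pause/resume test using python-osc.
# 127.0.0.1:4545 and /eeg/ch0 are placeholders as before.
import time
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 4545)

def send_block(value, seconds, rate_hz=250):
    """Stream a constant value at approximately rate_hz for the given duration."""
    for _ in range(int(seconds * rate_hz)):
        client.send_message("/eeg/ch0", value)
        time.sleep(1.0 / rate_hz)

send_block(1.0, seconds=5)   # first block of samples
time.sleep(10)               # sender goes quiet for 10 s
send_block(2.0, seconds=5)   # resume sending

# Expected: a 10 s gap in the signal view between the two blocks.
# Observed: the second block is plotted immediately after the first,
# as if the pause never happened.
```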
If these are reproducible for others, this seems a pretty critical problem to me. The accuracy of the classifier obviously relies on live, real-time data. Say I set the data rate at 250 Hz, but I have a device sending at 256 Hz, or even just dithering around 250.5 Hz on average: it wouldn't take long for considerable latency to accumulate in the pipeline and render the feedback useless.

I think OSC packets should more or less always be taken as fresh as possible, as historical data is of limited use. I can see an argument that if we have to recalculate slow means etc. then we'd want to recover that data, but at the same time, if my classifier is using the latest value then it needs exactly that to function properly. So perhaps some stale data could be kept around if there is a short gap, but only as a background buffer for nodes that use some sort of sliding window over historical data. Then again, maybe these bugs are specific to my setup, as I'd presume there is handling for this already.
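To put a rough number on that rate mismatch (just back-of-envelope arithmetic, assuming queued samples are drained at the node's configured rate rather than dropped):

```python
# How quickly latency accumulates if the incoming stream is queued and
# drained at the node's configured sample rate instead of being dropped.
device_rate = 256.0  # Hz, actual sending rate
node_rate = 250.0    # Hz, configured OSC node sample rate

backlog_per_second = device_rate - node_rate  # samples queued per second
lag_growth = backlog_per_second / node_rate   # seconds of lag added per second of streaming

print(f"{lag_growth * 60:.1f} s of extra latency after one minute")  # ~1.4 s
print(f"{lag_growth * 3600:.0f} s after one hour")                   # ~86 s
```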
Cheers
System
Studio 1.7.3 (from Debian package)
AMD64, 32 cores, 64 GB RAM
Ubuntu 22.04