TimescaleDB bridge #116
8 comments · 6 replies
-
Sounds awesome. Yes, documentation and whatnot is the weak point of PPG, and conceptually it's a bit difficult to explain. There's an image here (a bit hard to find) that shows a bit about the flows, reading/writing, and the mixing of active and passive protocols: https://github.com/HotNoob/PythonProtocolGateway/blob/v1.1.11/documentation/usage/configuration_examples/modbus_rtu_to_mqtt.md Overall, the inter-transport behaviour is that everything is broadcast to every other device on the same bridge... Yeah... I confuse myself just thinking about it as well :P As for the config, I think it's not too difficult to add a /config folder without breaking things (see PythonProtocolGateway/Dockerfile, line 11 in 29e5e21), and then /config could be added to the Dockerfile.
-
I think the PPG documentation is fine; it was more than enough to get me started. What I had trouble with was the sparse comments in the transport classes. If you could add some program-flow comments to the functions within the classes describing how the data moves within PPG, that would be a great help. For example, the write_enabled flag is a bit confusing: I took it to mean writing (or not writing) back to the inverter, but there is some program logic that seems to stop other routines if it is set to false. Or maybe not, as it was hard for me to tell what was in scope, and when, with the flag. Anyway, I do have TimescaleDB working; I'd just like to understand 100% of the program flow to track down the inevitable bugs as they pop up. Thx
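I can't speak for PPG's actual internals, but the confusion described above (one flag that both suppresses write-back and short-circuits other routines) is a common pattern. A hypothetical sketch, with all names invented here for illustration:

```python
class Transport:
    """Hypothetical sketch of how a single write_enabled flag can gate
    more than just write-back, which makes its scope hard to follow."""

    def __init__(self, write_enabled: bool):
        self.write_enabled = write_enabled
        self.log: list = []

    def write_to_inverter(self, register: int, value: int) -> None:
        # The obvious meaning: gate writes back to the inverter.
        if not self.write_enabled:
            self.log.append("write skipped")
            return
        self.log.append(f"wrote {value} to {register}")

    def validate_write_registers(self) -> None:
        # The surprising part: seemingly unrelated routines
        # short-circuit on the same flag.
        if not self.write_enabled:
            return
        self.log.append("validated write registers")

t = Transport(write_enabled=False)
t.write_to_inverter(40001, 5000)
t.validate_write_registers()
print(t.log)  # ['write skipped'] -- validation never ran either
```

If PPG works this way, splitting the flag (or documenting both effects at each check site) would resolve exactly the scoping ambiguity described.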
-
One other thing I hope you can help with. I currently batch together the various scrapes that populate "data" within the write_data method, for appending to a TimescaleDB table. For the MQTT bridge this isn't important, as it's data in and then data out. But for TimescaleDB it's important to bundle all the register reads in a given loop, so that all the read metrics can be inserted as a single row in a table. I do this now by copying the inverter transport's read_interval to a setting that controls the append of a given batch to the TimescaleDB table. However, this doesn't ensure that all the data within a particular inverter register-read loop is captured in the same batch, due to timing differences. What would help here is an attribute, perhaps called ReadComplete, that would initialize as false, go true at the end of the final register scrape in the inverter read loop, and then go false again after a couple of seconds. I could use a callback of sorts to monitor the flag, where the state change would trigger my batch insert. Going off the flag, I would capture all the data of a particular read loop without having to consider any timing discrepancies. My apologies if this functionality already exists, but I couldn't find anything in the code with a similar function.
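The ReadComplete idea above could be sketched as a small event hook. None of these names (ReadBatcher, on_read_complete) exist in PPG; this is only a sketch of the proposed behaviour:

```python
from typing import Any, Callable, Dict, List

class ReadBatcher:
    """Hypothetical sketch: collect per-register scrapes into one batch,
    then fire a callback once the read loop signals completion."""

    def __init__(self, on_read_complete: Callable[[Dict[str, Any]], None]):
        self._batch: Dict[str, Any] = {}
        self._on_read_complete = on_read_complete

    def write_data(self, data: Dict[str, Any]) -> None:
        # Merge each scrape from the current read loop into one row.
        self._batch.update(data)

    def read_complete(self) -> None:
        # Called once at the end of the inverter read loop
        # (the proposed "ReadComplete" state change).
        if self._batch:
            self._on_read_complete(self._batch)
            self._batch = {}

rows: List[Dict[str, Any]] = []
batcher = ReadBatcher(on_read_complete=rows.append)
batcher.write_data({"battery_voltage": 52.1})
batcher.write_data({"pv_power": 1430})
batcher.read_complete()
print(rows)  # one merged row per read loop
```

The callback replaces the timing-based read_interval copy described above: the insert fires on the state change, not on a timer, so timing drift can't split a loop across two batches.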
-
Oversimplified example: (code snippet not captured in this export)
-
Everything is dynamic, so there isn't exactly a "start of the loop" sort of thing. I'm not sure what sort of table structure you're using, so it's hard to say how to handle the data distortion. Maybe a summary table and a table for each variable? If each table holds one variable, you could check whether a value has changed before inserting, at least keeping the db smaller. Or a table with the variable name as a column. What about null values, if the info is stale? Can TimescaleDB handle partial rows with nullables? Could implement all three table structures.
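On the "partial rows with nullables" question: yes, a wide table simply stores NULL for variables a given loop didn't read. A runnable sketch using the stdlib sqlite3 as a stand-in (the column names are invented; with actual TimescaleDB you would additionally run `SELECT create_hypertable('metrics', 'time');` and could use real timestamp types):

```python
import sqlite3

# Wide-table layout: one nullable column per variable, one row per read loop.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE metrics (
        time TEXT NOT NULL,
        battery_voltage REAL,  -- nullable: not every loop reads every register
        pv_power REAL
    )
""")

def insert_partial(time: str, data: dict) -> None:
    # Variables missing from this read loop simply become NULL.
    conn.execute(
        "INSERT INTO metrics (time, battery_voltage, pv_power) VALUES (?, ?, ?)",
        (time, data.get("battery_voltage"), data.get("pv_power")),
    )

insert_partial("2024-01-01T00:00:00Z", {"battery_voltage": 52.1, "pv_power": 1430})
insert_partial("2024-01-01T00:00:10Z", {"pv_power": 1402})  # stale battery read -> NULL

rows = conn.execute(
    "SELECT battery_voltage, pv_power FROM metrics ORDER BY time"
).fetchall()
print(rows)  # [(52.1, 1430.0), (None, 1402.0)]
```

The narrow per-variable layout (time, variable_name, value) avoids NULLs entirely at the cost of wider joins at query time; either works with TimescaleDB compression.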
-
Here's a diagram of what I'm doing: (box-drawing diagram not captured in this export)
-
Somewhat correct. The modbus transport doesn't send any data to other transports until it's completely done, so you shouldn't have to worry about batch start/stop; that only affects the modbus transport itself. Yeah, it's a little messy: everything is returned outside of the batch read loop. If you're seeing "partial" data results, that'll be from the variable timing; depending on the protocol, not everything is read every time. If it's really problematic for TimescaleDB with wide mode, we could add a "variable_timing=off" sort of setting. As for accessing transport_settings from the protocol_settings class: just worry about data in/out within the TimescaleDB class. transport.protocolSettings.registry_map should contain the complete list; the variable masks and screens are applied when it's loaded. From within a transport, after transport_base has been initialized, it's a dict of registry_map_entry's, each with a variable_name attribute. If you want, you can add a get_variable_names helper function to the protocol_settings class.
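The suggested helper might look like the sketch below. The exact shape of registry_map is assumed from the description above (a dict of registry_map_entry's carrying a variable_name attribute); the stand-in classes here are simplified, not PPG's real ones:

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class RegistryMapEntry:
    # Minimal stand-in for PPG's registry_map_entry; only the attribute
    # mentioned in the thread (variable_name) is modeled here.
    variable_name: str
    register: int

class ProtocolSettings:
    def __init__(self, registry_map: Dict[int, RegistryMapEntry]):
        self.registry_map = registry_map

    def get_variable_names(self) -> List[str]:
        """Proposed helper: the complete list of variable names,
        e.g. for building a wide TimescaleDB table's columns."""
        return [entry.variable_name for entry in self.registry_map.values()]

settings = ProtocolSettings({
    0: RegistryMapEntry("battery_voltage", 0),
    1: RegistryMapEntry("pv_power", 1),
})
print(settings.get_variable_names())  # ['battery_voltage', 'pv_power']
```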


-
I'm writing a TimescaleDB bridge and have it working pretty well. I'm refining some of the compression and rollup features, and once I get those going, I'll test it for a couple of weeks before submitting a pull request. Due to the database's many features, it turned into something a little more complicated than I first expected, and with complexity comes fragility. With this in mind, I have three ideas that I hope you can help me with. First, I found the movement of data in the program somewhat difficult to follow, as there are not a lot of comments describing how metric data (and also device data) moves through the class structures. Given that the data is the focus of the app, I believe it would be helpful to see more verbose descriptions of data flow in the classes; in my case, just so I'm 100% sure what's what with the bridge I wrote.
The second idea is that I believe there needs to be more separation between transports and bridges in the structure of the program. For example, there are many references to "write", "read", "connect", etc. where it's difficult to tell, at first, whether you're referring to a transport connecting to an inverter, a bridge obtaining data produced by a transport, or a bridge connecting to an endpoint of some type, like a database. I realize that technically these are all transports, but I believe it would be a lot clearer if you refactored the naming to something like "transport_write", "bridge_write", "transport_connected", "bridge_connected", etc. As I said, I have a TDB bridge working, but it took some effort to understand what was what in the classes, and to be honest, I'm still not sure I understand the data flow.
Finally, for us Docker users, could you create a config folder where all the user-configurable files can sit? This would make creating volumes much simpler if the variable mask, config, maps, and any other configuration files were grouped together in one spot. I would imagine the config folder would hold example placeholder files that would in turn be referenced programmatically within PPG; currently, these files are referenced in their respective folders. The Docker volume would then map in whatever specific map, cfg, etc. is in the local folder.
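The /config idea could be as simple as resolving every user-editable file through one base directory. A sketch under stated assumptions: PPG_CONFIG_DIR, the helper, and the file names here are all hypothetical, not existing PPG settings:

```python
import os
from pathlib import Path

# Hypothetical sketch of the proposed /config folder: every user-editable
# file (variable mask, config, protocol maps) is looked up in one mounted
# directory first, falling back to the repo's current per-folder locations.

def resolve(name: str, config_dir: Path, default_dir: Path) -> Path:
    """Prefer <config_dir>/<name> if it exists, else the bundled default."""
    candidate = config_dir / name
    return candidate if candidate.exists() else default_dir / name

config_dir = Path(os.environ.get("PPG_CONFIG_DIR", "/config"))
print(resolve("example.cfg", config_dir, Path("protocols")))
```

A Docker user would then need only one bind mount (e.g. `-v ./my-config:/config`) instead of one per folder, and the bundled placeholder files keep working when nothing is mounted.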
Other than all of that, a really nice job on this gateway. I think it has a lot of potential as there are numerous endpoints and inverters out there that someone may wish to connect to. I may even take a crack at a Prometheus bridge. Thx again for this app.