Hi, I am currently implementing a black-box NSSM for a building system that estimates temperatures, CO2 levels, and power consumption for specific building components. During testing I found that an x0 state observer is necessary for good tracking of the current system's dynamics. I am having trouble implementing an MLP-based state observer that takes N previous samples and produces an estimate of the initial condition x0. The difficulty is integrating this estimator with the Node and System architecture that Neuromancer uses, since I would need to handle time delays and the corresponding data preprocessing and shaping. I saw an example of time-delay estimators in the "system_id_building" branch, but that implementation seems to use an older paradigm without the concepts of Node and System. Does anyone have a recommendation for implementing state observers that can easily be wrapped in a Node and handle a time delay, effectively increasing the number of samples in each batch? Is there a better approach? Thanks in advance for the help! Sebastian
Hi @SebsCubs, thank you for your interest in the library! In the new API, we dropped the dedicated Estimator class. Instead, we abstracted all possible building blocks of different model architectures via the Node and System classes - see example use here. Closest to your intention in our current example set is this Deep Koopman example, which uses a neural net as an observer that encodes observables into the latent space. In this example, we use two observer Nodes:
See this cell for the instantiation of these observer Nodes
and see this cell for the creation of the training dataset with different observables.
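If you want to build the delayed dataset yourself, one common pattern is to reshape the raw measurement series so that each training sample carries the N previous outputs. A minimal sketch in NumPy - the function name and the (time, features) layout are my own illustrative choices, not Neuromancer API:

```python
import numpy as np

def time_delay_embed(Y, N):
    """Stack the N previous measurements in front of each time step.

    Y : array of shape (T, ny) -- raw output trajectory
    N : number of past samples to include in each window
    Returns an array of shape (T - N, N * ny), where row t holds the
    flattened window Y[t], ..., Y[t + N - 1].
    """
    T, ny = Y.shape
    return np.stack([Y[t - N:t].ravel() for t in range(N, T)])

# Example: 6 time steps of a 2-dimensional output
Y = np.arange(12, dtype=float).reshape(6, 2)
windows = time_delay_embed(Y, N=2)
print(windows.shape)  # (4, 4)
```

Each row of `windows` can then be fed to the observer as a single flattened input, which is exactly the "increasing the number of samples in each batch" effect you describe.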
Be aware of the construction of the overall computational graph here. Specifically, the encoder and decoder Nodes are not part of the System class; we only wrap the Koopman operator (the latent state dynamics model) to do the rollout. Encoders generate initial conditions, and decoders then decode the latent trajectories. You can think of your neural state-space model as a generalization of this deep Koopman operator architecture, so you can employ a similar architecture in your case. If you want to encode time delays explicitly, you can use indexing on the trajectories from the training data or from the encoder directly. Hope this helps :) All the best,