MPC implementation #336
Replies: 5 comments
-
Hi @DarioSlaifsteinSk, I am glad you've been enjoying InfiniteOpt.jl! Getting closed-loop stability with general MINLPs for MPC is often tricky, since it takes more than just putting the problem in a for loop, even when the open-loop problem (e.g., day-ahead planning) works just fine. I myself have spent much time adjusting formulations and tuning stability parameters to achieve stable closed-loop behavior for nonlinear MPC problems. It is difficult for me to give specific advice without a model, but this thesis gives a really good tutorial on how to achieve closed-loop stability with mixed-integer MPC problems and discusses the common pitfalls: https://sites.engineering.ucsb.edu/~jbraw/jbrweb-archives/theses/risbeck.pdf. Chapter 2 should be particularly helpful. I suspect your formulation will need some tweaks for closed-loop stability. The good news is that InfiniteOpt.jl can definitely support MPC problems, but unfortunately it is up to the user to provide a model that is asymptotically stable. Some evidence for this is that Pyomo.dae is frequently used for nonlinear (mixed-integer) MPC, and it sets up the same sort of formulations as InfiniteOpt for solving optimal control problems.
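For reference, the basic receding-horizon loop looks like the sketch below: only the first move of each open-loop plan is applied, and the problem is re-solved from the newly measured state. The scalar plant and the crude grid-search "solver" are hypothetical stand-ins (this is plain Python, not InfiniteOpt.jl code); the point is just the loop structure that distinguishes closed-loop MPC from a one-shot day-ahead solve.

```python
# Toy receding-horizon loop. Only the FIRST move of each open-loop plan is
# applied; then the problem is re-solved from the measured state.
# Plant, horizon, and "optimizer" are all made-up stand-ins for illustration.

A, B = 1.2, 1.0            # unstable scalar plant: x+ = A*x + B*u
N_SIM = 30                 # closed-loop simulation length

def solve_open_loop(x0, horizon=5):
    """Grid-search a constant input over the horizon (stand-in for a real solver)."""
    candidates = [i / 10 for i in range(-30, 31)]   # u in [-3.0, 3.0]
    best_u, best_cost = 0.0, float("inf")
    for u in candidates:
        x, cost = x0, 0.0
        for _ in range(horizon):
            x = A * x + B * u
            cost += x**2 + 0.1 * u**2
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

x = 2.0
for _ in range(N_SIM):
    u = solve_open_loop(x)   # re-solve from the current (measured) state
    x = A * x + B * u        # apply only the first move, then repeat
# despite the open-loop instability of the plant, x contracts toward 0
```

Even this toy shows why the closed loop can behave very differently from one open-loop solve: stability comes from the interaction between the re-solving loop and the formulation, not from either alone.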
-
Outside of Julia, I am aware of the Gekko package in Python (https://gekko.readthedocs.io/en/latest/). It is an optimization modeling package that is tailor-made to support MPC formulations, with all the bells and whistles that come with the territory. It also has an online course to support it: https://apmonitor.com/do/.
-
Thanks a lot! I'm also checking Grüne (2017), which has a more theoretical focus, I'd say.
-
One follow-up: I forgot to ask whether you have tried tuning a control horizon in your MPC approach. Having the controls become fixed after a certain time is an important part of achieving closed-loop stability. Here, the key knobs to turn are the length of the control horizon and the overall time horizon. Typically, you'll want the difference between the two horizons to be large enough that the system can reach a new steady state with the fixed control policy (assuming the system itself is stable).
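To make the "controls fixed after a certain time" idea concrete, here is a tiny sketch (in plain Python, since I don't have your model): the optimizer only gets M free moves, the input is held constant for the remaining N − M steps, and N − M is chosen long enough for a stable plant to settle. All plant parameters and numbers are made up for illustration.

```python
# Move blocking: optimize only the first M moves and hold the last one for
# the rest of the prediction horizon. Plant and numbers are hypothetical.

N, M = 20, 5                      # prediction horizon vs. control horizon
A, B = 0.5, 1.0                   # a stable scalar plant: x+ = A*x + B*u

def expand_moves(free_moves, n):
    """Pad an M-move plan so the input is frozen after the control horizon."""
    return list(free_moves) + [free_moves[-1]] * (n - len(free_moves))

plan = expand_moves([0.8, 0.4, 0.1, 0.0, -0.1], N)

# Simulate: with N - M = 15 frozen steps, the state has time to settle near
# the steady state x* = B*u / (1 - A) implied by the held input u = -0.1.
x = 1.0
for u in plan:
    x = A * x + B * u
```

The check to do on your own model is exactly this tail behavior: with the inputs frozen, does the predicted state actually reach a steady state before the horizon ends? If not, the gap N − M is too short.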
-
Hi!
-
Hi,
I've been using InfiniteOpt.jl for a while now, and I think it's great. Unfortunately, in the last 6 months I've been trying to implement a model predictive controller (MPC), with divergent results. I'm trying to control a nonlinear system with a MINLP formulation, and individual optimizations seem to work fine (i.e., day-ahead planning). However, once I try the same loop while moving the time window by $\Delta t$, the optimization becomes erratic: it is either solved in 0.1 s (60% of the time), grows a decision tree of more than 800 nodes, or becomes infeasible within 2 nodes.
Unfortunately, I don't have an MWE (yet), but you can check my latest conference paper if you are interested: https://doi.org/10.1109/iecon51785.2023.10312455
I'm looking for MPC implementations in Julia/JuMP or similar, because I'm probably missing some common practices in modelling and simulation. It could also be that my optimization is very sensitive to initial conditions and parameters (exogenous information/inputs), but I haven't seen many resources in that direction either. I'm also considering a complete reformulation, so if somebody knows another good Julia package for this, let me know. So far, the only contenders are OptimalControl.jl and CasADi.jl, which are not mature at all.