Model Predictive Control
Ogunlowore Olabanjo Jude, Student Number: 20364462, MASc. Mechanical & Mechatronics Engineering
As the name implies, Model Predictive Control (MPC), also known as Receding Horizon Control, is a control method based on sound knowledge of a system's model/characteristics. Based on a generated model of the system, we design a set of control input sequences iteratively at successive time steps, over some horizon from the current state, and use this as the control law in a feedback loop.
We predict the behavior of a process state/output over a time horizon.
Model Predictive Control emerged historically (in the 1980s) as a controller form, enabled by the level of accuracy of the mathematical models scientists and engineers had developed over the years.
INTRODUCTION CONT
The studies on MPC are manifold and may be classified into several streams:
- Industrial processes (its original application)
- Adaptive MPC
- Synthesized MPC: an approach for nonlinear, constrained, and uncertain systems
- Hybrid methods: merges MPC with other forms of control technology as its controller
Generally:
- The open-loop optimal solution is not robust
- It must be coupled with on-line state / model parameter updates
- It requires an on-line solution for each updated problem
- An analytical solution is possible only in a few cases (LQ control)
Figure 2. The components of a system model, showing prediction over successive time steps.
MPC is based on a model, and the prediction model is central: the MPC algorithm is built on the model derived. MPC pays more attention to the function of the model than to its formulation. The function of a prediction model is to use past information and future inputs to predict future outputs. Any representation with this predictive function can serve as the prediction model, irrespective of its concrete form; this simply means a transfer function of the input-output relationship, or a state-space representation of the system, both qualify. The key point distinguishing MPC from other control techniques is that MPC adopts receding-horizon optimization, with the control moves implemented in a receding-horizon manner. While the optimal-control rationale is adopted, MPC does not discard the feedback of traditional control techniques.
Feedback is used to overcome disturbances and to achieve closed-loop stability; MPC utilizes feedback correction. In adaptive MPC, the effect of feedback is realized by online updates of the system model, and a PID feedback controller is applied as transparent control. Model-based prediction: on the subject of prediction, two questions have to be answered. How much do we know (information from the past) to make a forecast, and how far into the future do we intend to look (prediction span/horizon)? Predictions are based on the model; past information and information about the state of the system are used to make them. The main requirement is that the cost depends on the future control, and a low cost value implies good closed-loop performance, where "good" is predefined for the system in question.
Constraint Handling.
An MPC takes systematic account of constraints and compensates for them to give better performance, while keeping the robustness of the unconstrained control laws at each time step. Constraint handling depends on the MPC algorithm adopted:
- Minimization of a quadratic function subject to linear constraints
- Convex, and therefore fundamentally tractable
- Solution methods:
  - Active-set method: determination of the active set of constraints on the basis of the KKT conditions
  - Interior-point method: use of a barrier function to trap the solution inside the feasible region, with Newton iterations
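As a minimal illustration of such a quadratic program: the sketch below uses projected gradient descent for box input constraints, a simpler scheme than the active-set or interior-point methods named above, chosen only to keep the example self-contained. The matrices H, f and the bounds are made-up numbers.

```python
import numpy as np

def solve_box_qp(H, f, lo, hi, iters=500):
    """Minimize 0.5 u'Hu + f'u subject to lo <= u <= hi
    by projected gradient descent (H must be positive definite)."""
    u = np.clip(np.zeros_like(f), lo, hi)
    step = 1.0 / np.linalg.eigvalsh(H).max()        # safe step size
    for _ in range(iters):
        u = np.clip(u - step * (H @ u + f), lo, hi)  # gradient step, then projection
    return u

# Illustrative problem: the unconstrained optimum is u = -H^{-1} f = [4, -1],
# which the box [-1, 1] clips to [1, -1].
H = np.diag([2.0, 2.0])
f = np.array([-8.0, 2.0])
u = solve_box_qp(H, f, lo=-1.0, hi=1.0)
```

The projection step is what makes the constrained solution differ from simply solving the unconstrained problem; real MPC solvers use the faster active-set or interior-point methods instead.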
Finite and infinite horizons, showing the relationship between the control and prediction horizons
Deciding on the type of horizon to implement depends on some base parameters of the system and on the control scheme chosen. MPC uses the receding horizon when predicting its control inputs simply to capture more information about the new state of the system before deciding on the most appropriate control law. The choice should account for the settling time of the system and its potential dynamics. A good prediction horizon is typically much larger than the control horizon, and the reason is obvious: we see ahead and take careful steps.
The classical route to the optimal control law is via the Hamilton-Jacobi-Bellman (HJB) equation.
The cost function model could take the form of a:
- Linear program (cost linear in the decision variables)
- Quadratic program (cost quadratic in the decision variables)
The process/plant/system model is the most important object in the MPC technique, because we can only control a process as accurately as we can model it.
The model only needs to be modified if more accuracy is desired. A mathematical state-space model is the technique of choice, since it lets us investigate other control syntheses and is theoretically very sound.
We may decide to use even the classical transfer function if that's a more accurate model.
The state-space model also significantly helps to incorporate disturbances and noise into our model, which gives it an edge again when compared to other methods.
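For instance, process disturbances and measurement noise enter a state-space simulation directly as additive terms. The model numbers below are illustrative, not taken from any example in this text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 2-state model; the numbers are assumptions for the sketch.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [0.5]])
C = np.array([[1.0, 0.0]])

x = np.zeros((2, 1))
ys = []
for k in range(50):
    u = 1.0                                   # constant input
    w = 0.01 * rng.standard_normal((2, 1))    # process disturbance, enters the state
    v = 0.02 * rng.standard_normal((1, 1))    # measurement noise, enters the output
    x = A @ x + B * u + w                     # disturbance added directly in the state update
    ys.append((C @ x + v).item())             # noisy measurement
```

In a transfer-function description these two noise paths would have to be lumped together, which is why the state-space form has the edge here.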
MPC GENERAL FORMULATION AND NOTATION cont
Taking differences of the plant model, Δx_m(k+1) = A_m Δx_m(k) + B_m Δu(k), and the output update, y(k+1) = y(k) + C_m A_m Δx_m(k) + C_m B_m Δu(k), the model becomes:

x(k+1) = A x(k) + B Δu(k),  y(k) = C x(k),

where the augmented state is x(k) = [Δx_m(k); y(k)] and

A = [A_m  0; C_m A_m  1],  B = [B_m; C_m B_m],  C = [0 … 0  1].
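A quick numerical check of this augmentation, using a hypothetical scalar plant (the numbers A_m = 0.8, B_m = 0.1, C_m = 1 are illustrative): simulating the original plant driven by u(k) and the augmented model driven by the increments Δu(k) must produce identical outputs.

```python
import numpy as np

# Hypothetical scalar plant (A_m, B_m, C_m are illustrative assumptions).
Am, Bm, Cm = np.array([[0.8]]), np.array([[0.1]]), np.array([[1.0]])
n, m = Am.shape[0], Cm.shape[0]

# Augmented model: state x = [Delta x_m; y], input Delta u.
A = np.block([[Am,      np.zeros((n, m))],
              [Cm @ Am, np.eye(m)]])
B = np.vstack([Bm, Cm @ Bm])
C = np.hstack([np.zeros((m, n)), np.eye(m)])

# Simulate the original plant with inputs u(0), u(1), u(2) ...
u = [0.0, 1.0, 1.5]
xm = np.zeros((n, 1)); ys = []
for uk in u:
    xm = Am @ xm + Bm * uk
    ys.append((Cm @ xm).item())

# ... and the augmented model driven by the increments Delta u(k).
x = np.zeros((n + m, 1)); ys_aug = []
prev_u = 0.0
for uk in u:
    x = A @ x + B * (uk - prev_u)   # Delta u(k) = u(k) - u(k-1)
    ys_aug.append((C @ x).item())
    prev_u = uk
```

The two output sequences agree exactly, which confirms that the augmented model embeds an integrator (the y-row) without changing the plant behavior.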
PREDICTION
Generally, optimal control problems in MPC are manifold and are usually classified according to the optimization problem. We illustrate with a discrete-time state-space representation.
Consider a system described by x(k+1) = f(x(k), u(k)), with f(0, 0) = 0, subject to state and input constraints

x(k+i+1|k) ∈ X,  u(k+i|k) ∈ U,  i ≥ 0.

Here x(k+i|k) denotes the prediction of x at the future time k+i, predicted at time k, with x(k|k) = x(k); x*(k+i|k), i ≥ 0, denotes the optimal state prediction obtained from the optimal solution of the MPC optimization problem. Using Toeplitz and Hankel matrices, we can simplify the algebra considerably for use in MPC.
The property of the finite-horizon case is that the cost function over a finite horizon is the sum of positive-definite functions. No constraint set is imposed on the terminal cost function:
J_N(x(k)) = Σ_{i=0}^{N−1} l(x(k+i|k), u(k+i|k)) + F(x(k+N|k)),

where l(·,·) is a positive-definite stage cost and F(·) is the terminal cost.
The property of the infinite-horizon case is that the cost function over an infinite horizon is the sum of positive-definite functions.
The infinite-horizon optimization problem does not have a closed-form solution in general, as an infinite number of decision variables is involved:
J_∞(x(k)) = Σ_{i=0}^{∞} l(x(k+i|k), u(k+i|k))

s.t. x(k+i+1|k) ∈ X,  u(k+i|k) ∈ U,  i ≥ 0.
We have a control law of this nature: u(k) = u*(k|k), the first element of the optimal sequence {u*(k|k), u*(k+1|k), …, u*(k+N−1|k)}.
Stability guarantee: the optimal cost function can be shown to be a control Lyapunov function.
- Fewer parameters to tune
- More consistent, intuitive effect of the weight parameters
- Close connection with the classical optimal control methods, e.g., LQG control
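The one case with an analytical infinite-horizon solution is the linear-quadratic problem, solved via the discrete algebraic Riccati equation. A sketch, with an illustrative double-integrator system and unit weights (all numbers are assumptions):

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Illustrative system and weights (assumptions for the sketch).
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # discrete double integrator
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

# Infinite-horizon LQ: the one case with an analytical (Riccati) solution.
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # u(k) = -K x(k)

# The closed loop A - B K is stable: all eigenvalues inside the unit circle.
eigs = np.linalg.eigvals(A - B @ K)
```

This is the classical benchmark MPC is compared against: a long-horizon MPC with the same Q and R approaches this LQ behavior.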
x(k+2|k) = A x(k+1|k) + B Δu(k+1);  y(k+2|k) = C x(k+2|k)   (2)

which expands to

x(k+2|k) = A^2 x(k) + A B Δu(k) + B Δu(k+1);  y(k+2|k) = C x(k+2|k)   (3)

Write the prediction at (k+3):

x(k+3|k) = A^3 x(k) + A^2 B Δu(k) + A B Δu(k+1) + B Δu(k+2);  y(k+3|k) = C x(k+3|k)   (4)
Prediction algorithm: from the plant's past observations at (k−1) to the optimal output at time (k+N)
Stacking the predictions over the horizons gives the compact form

Y = F x(k) + Φ ΔU,

where Y = [y(k+1|k), …, y(k+Np|k)]^T, ΔU = [Δu(k), …, Δu(k+Nc−1)]^T, F = [CA; CA^2; …; CA^Np], and Φ is the lower-triangular Toeplitz matrix with entries C A^(i−j) B for i ≥ j.
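The Toeplitz-structured prediction matrices can be checked numerically against the one-step recursion above. The model matrices and the input increments below are illustrative numbers.

```python
import numpy as np

# Illustrative augmented model (numbers are assumptions for the sketch).
A = np.array([[0.8, 0.0],
              [0.8, 1.0]])
B = np.array([[0.1],
              [0.1]])
C = np.array([[0.0, 1.0]])
Np, Nc = 5, 3

# F stacks C A^i; Phi is the lower-triangular Toeplitz of C A^(i-j) B.
F = np.vstack([C @ np.linalg.matrix_power(A, i + 1) for i in range(Np)])
Phi = np.zeros((Np, Nc))
for i in range(Np):
    for j in range(min(i + 1, Nc)):
        Phi[i, j] = (C @ np.linalg.matrix_power(A, i - j) @ B).item()

# Compact prediction versus the recursion x(k+i+1) = A x(k+i) + B du(k+i).
x0 = np.array([[0.1], [0.2]])
dU = np.array([[1.0], [0.5], [-0.2]])
Y = F @ x0 + Phi @ dU              # compact stacked prediction

x = x0.copy(); y_rec = []
for i in range(Np):
    du = dU[i, 0] if i < Nc else 0.0   # moves beyond the control horizon are zero
    x = A @ x + B * du
    y_rec.append((C @ x).item())
```

The two computations agree element by element, which is exactly the content of the stacked prediction equation.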
The behavior of the MPC controller is intrinsically nonlinear, since constraints on state and control variables are taken into account. However, if no constraints are present in the problem formulation, the controller is linear; likewise, the controller behaves linearly during operation when no constraints are active. In the unconstrained case the control law could (and should) be calculated off-line, whereas in the constrained case the optimization must be performed each sample. There are, however, methods for avoiding on-line solution of the optimization problem. Using the observation that the MPC control law is piecewise linear in the states, it is possible to calculate, off-line, all possible control laws. The on-line optimization problem is then transformed into a search problem, where the objective is to find the appropriate partition of the state space, identifying the corresponding control law.
Example: let a = 0.8 and b = 0.1. Assume the prediction and control horizons are Np = 10 and Nc = 4. Calculate the components of the predictive control sequence for the future output Y, the values of F and Φ, and the data vector from the set-point information. Assume that at the current time (k = 10 for this case) r(k) = 1 and the state vector is x(k) = [0.1 0.2]^T. Find the optimal solution for the case where r_w = 0. Forming the state space with the augmented equations:
x(k+1) = [Δx_m(k+1); y(k+1)] = [0.8 0; 0.8 1] [Δx_m(k); y(k)] + [0.1; 0.1] Δu(k),
y(k) = [0 1] [Δx_m(k); y(k)],

and Φ has the lower-triangular Toeplitz structure

Φ = [g_1 0 0 0; g_2 g_1 0 0; g_3 g_2 g_1 0; … ; g_10 g_9 g_8 g_7],  with g_i = C A^(i−1) B.
Using the plant parameters, the coefficients of F and Φ are calculated as F_1 = [s_1 1], F_2 = [s_2 1], F_3 = [s_3 1], …, F_Np = [s_Np 1], where s_1 = C_m A_m, s_2 = C_m A_m^2 + s_1, …, s_i = C_m A_m^i + s_(i−1).
ΔU = (Φ^T Φ)^(−1) Φ^T (R̄_s − F x(k)) = [7.2  −6.4  0  0]^T
Computing the MPC gain: in receding-horizon fashion only the first element of ΔU is applied at each step,

Δu(k) = [1 0 … 0] (Φ^T Φ + R̄)^(−1) Φ^T (R̄_s r(k) − F x(k)) = K_y r(k) − K_mpc x(k),

where K_y is the first element of (Φ^T Φ + R̄)^(−1) Φ^T R̄_s and K_mpc is the first row of (Φ^T Φ + R̄)^(−1) Φ^T F. MATLAB function:
[Phi_Phi, Phi_F, Phi_R, A_e, B_e, C_e] = mpcgain(Ap, Bp, Cp, Nc, Np)
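A Python sketch of what the MATLAB routine above computes. The internals are an assumption, reconstructed from the formulas in this section rather than taken from the original code; only the names of the returned quantities mirror the MATLAB call.

```python
import numpy as np

def mpcgain(Ap, Bp, Cp, Nc, Np):
    """Augmented model and prediction matrices for incremental MPC.
    Returns Phi'Phi, Phi'F, Phi'Rs_bar and the augmented (A_e, B_e, C_e)."""
    Ap, Bp, Cp = np.atleast_2d(Ap), np.atleast_2d(Bp), np.atleast_2d(Cp)
    n, m = Ap.shape[0], Cp.shape[0]
    # Augmented state x = [Delta x_m; y], input Delta u
    A_e = np.block([[Ap,      np.zeros((n, m))],
                    [Cp @ Ap, np.eye(m)]])
    B_e = np.vstack([Bp, Cp @ Bp])
    C_e = np.hstack([np.zeros((m, n)), np.eye(m)])
    # F stacks C A^i; Phi is the lower-triangular Toeplitz of C A^(i-j) B
    F = np.vstack([C_e @ np.linalg.matrix_power(A_e, i + 1) for i in range(Np)])
    Phi = np.zeros((Np, Nc))
    for i in range(Np):
        for j in range(min(i + 1, Nc)):
            Phi[i, j] = (C_e @ np.linalg.matrix_power(A_e, i - j) @ B_e).item()
    Rs_bar = np.ones((Np, 1))          # unit set-point over the horizon
    return Phi.T @ Phi, Phi.T @ F, Phi.T @ Rs_bar, A_e, B_e, C_e

# The scalar example from the text: a = 0.8, b = 0.1, Np = 10, Nc = 4,
# r(k) = 1, x(k) = [0.1, 0.2]', with zero move weighting (r_w = 0).
Phi_Phi, Phi_F, Phi_R, A_e, B_e, C_e = mpcgain(0.8, 0.1, 1.0, Nc=4, Np=10)
dU = np.linalg.solve(Phi_Phi, Phi_R * 1.0 - Phi_F @ np.array([[0.1], [0.2]]))
```

For this example the solve gives ΔU = [7.2, −6.4, 0, 0]^T: two moves are enough to drive the output of this second-order augmented model exactly to the set point, so the remaining moves are zero.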
An illustration will be performed using the MATLAB toolbox software, with result analysis, during the class session.
MATLAB design example: the plant state-space model is given by

x(k+1) = [1 0; 1 0.5] x(k) + [1; 1] u(k),
y(k) = [1 0] x(k).
Go to MATLAB (obviously!)
Courtesy: MPC Tools 1.0 Reference Manual, Johan Åkesson, Department of Automatic Control, Lund Institute of Technology; the same example is used in the MIT OpenCourseWare study manual.
Figure 10. Simulation of the MPC controller applied to the nonlinear helicopter plant.
Go to MATLAB again!
1. Using numerical optimization, design a control sequence for a finite time ahead (the horizon), beginning from the current state.
2. Implement an initial portion of that optimized control sequence.
3. Go back to the first step.
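These three steps can be sketched as an unconstrained receding-horizon loop. The plant, horizons, and move weighting r_w below are illustrative assumptions; only the first move of each optimized sequence is applied before re-optimizing.

```python
import numpy as np

# Illustrative augmented model (same structure as in the formulation above).
A = np.array([[0.8, 0.0], [0.8, 1.0]])
B = np.array([[0.1], [0.1]])
C = np.array([[0.0, 1.0]])
Np, Nc, rw = 10, 4, 0.1

F = np.vstack([C @ np.linalg.matrix_power(A, i + 1) for i in range(Np)])
Phi = np.zeros((Np, Nc))
for i in range(Np):
    for j in range(min(i + 1, Nc)):
        Phi[i, j] = (C @ np.linalg.matrix_power(A, i - j) @ B).item()
H = Phi.T @ Phi + rw * np.eye(Nc)      # Hessian of the quadratic cost

x = np.zeros((2, 1))
r = 1.0                                # set point
ys = []
for k in range(30):
    # 1) optimize a sequence of Nc moves over the Np-step horizon
    dU = np.linalg.solve(H, Phi.T @ (r * np.ones((Np, 1)) - F @ x))
    # 2) implement only the first move of the sequence
    x = A @ x + B * dU[0, 0]
    ys.append((C @ x).item())
    # 3) repeat at the next sample: the horizon recedes with k
```

Because the augmented model contains an integrator, this loop tracks the set point without offset; adding constraints would replace the linear solve in step 1 with a QP.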
Summary cont. Components:
An estimator for the errors brought about by disturbances in the input variables of the plant. Depending on the type of disturbance, our estimator could be LQ, LQG, or Kalman (if stochastic), as the case may be. The above simply means we assume the (full) state of the system is available.
The optimal solution in a typical MPC would be that obtained from the prediction terms of an infinite horizon. This is where the receding-horizon property comes into play: as the time step advances, the horizon shifts, but its range does not change.
From an MPC algorithm, if a feasible solution exists at a time step, say k, and the steady state x = 0 is achieved, then we may conclude that the system is stable.
Summary cont. Tuning:
The tuning parameters are usually based on the engineer's knowledge of the system and model properties, and this trial and error, usually with a few tweaks, gives a very good result. The horizon: we can't control more than we can perceive. This simply means the control horizon should be considerably shorter than the prediction horizon.
Discretizing and linearizing the model can be tricky, as we need to be sure that the information lost in the process is trivial for prediction and for constructing the MPC.
Incorporating constraints on the input and the output is a strong point of MPC, as we may relax some (and thus violate them) depending on our definition of optimal.