A qLPV Nonlinear Model Predictive Control with Moving Horizon Estimation

This paper presents a Model Predictive Control (MPC) algorithm for Nonlinear systems represented through quasi-Linear Parameter Varying (qLPV) embeddings. Input-to-state stability is ensured through parameter-dependent terminal ingredients, computed offline via Linear Matrix Inequalities. The online operation comprises three consecutive Quadratic Programs (QPs) and, thus, is computationally efficient and able to run in real-time for a variety of applications. These QPs stand for the control optimization (MPC) and a Moving-Horizon Estimation (MHE) scheme that predicts the behaviour of the scheduling parameters along the future horizon. The method is practical and simple to implement. Its effectiveness is assessed through a benchmark example (a CSTR system).


INTRODUCTION
Model Predictive Control (MPC) is a very powerful control method, with widespread industrial application. The core idea of MPC [1] is simple enough: a process model is used to predict the future output response of the process; then, at each instant, the control law is found through the solution of an online optimization problem, which is written in terms of the model, the process constraints and the performance goals. For the case of processes represented by Linear Time-Invariant (LTI) models, MPC is translated as a constrained Quadratic Programming Problem (QP), which can be evaluated in real-time by the majority of standard solvers.
Extra attention should be paid to the fact that the theoretical establishment of MPC was especially consolidated after the proposition of "terminal ingredients", which served to demonstrate robust stability and recursive feasibility properties [2]. These properties are enabled when some conditions with respect to a terminal cost V(·) and to a terminal constraint set X_f are verified. Essentially, the terminal set must be robust positively invariant for the controlled system, the stage cost must be K-class lower bounded and the terminal cost V(·) should be K-class upper bounded and Lyapunov-decreasing (it must decay along the horizon).
For many years, MPC was mostly seen in the process industry, usually regulating slower applications (with longer sampling periods). This was mainly due to the fact that the inherent optimization procedures were numerically too costly and became impractical for real-time systems.
Nonlinear MPC (NMPC) algorithms yield complex optimization procedures, with exponential growth of the numerical burden. Nevertheless, the majority of systems are indeed nonlinear and, thus, the literature has devoted special attention to feasible NMPC design since the 00's [3]. Originally, NMPC algorithms were hardly able to run in real-time [4], but recent research effort has focused to a great extent on ways to simplify or approximate, usually through Gauss-Newton, Lagrangian or multiple-shooting discretization approaches [5], the online Nonlinear Programming Problem (NP) in order to make it viable for fast, time-critical processes. Some of these faster NMPC algorithms run within the range of a few milliseconds, resorting to solver-based solutions (as in ACADO [6] or GRAMPC [7] algorithms) or GPU-based schemes [8,9].
Parallel to these approximated methods, another research route is now expanding to address the complexity drawback of "full-blown" NMPC strategies: using quasi-/Linear Parameter Varying (qLPV/LPV) model structures to embed the nonlinear dynamics, as in [10] , and thus facilitate the online optimization. Since LPV models retain linearity properties through the input/output channels, the optimization can be reduced to the complexity of a QP. A recent survey [11] details the vast possibilities of issuing NMPC through LPV structures. The basic requirement of these methods is that the nonlinearities must respect the Linear Differential Inclusion (LDI) property [12,13] , in such a way that they can be embedded into a qLPV realisation, appropriately "hidden" in scheduling parameters .
Instead of using a moving-window linearization strategy to yield fast NMPCs with time-varying models [14], or of using approximated solutions of the NP iterations [6], this paper follows the lines of the qLPV embedding framework, which allows for an exact description of the nonlinear system and, thereby, no time-consuming linearization or Jacobian computation needs to take place. As previously evidenced [11], these qLPV methods are able to use the scheduling proxy ρ(k) = f_ρ(x(k), u(k)) to compute the process predictions rapidly. In fact, these methods have recently been shown [15] to outrank (or perform equivalently to) fast NMPC solvers, such as ACADO. Some of these recent developments are further detailed: • Some works [16,17] opt to consider a frozen/constant guess for the scheduling parameters along the future horizon and ensure, through the use of terminal ingredients, that the trajectories are sufficiently regulated, despite the uncertainty along the horizon; • Morato et al. [18] propose a method to determine an educated estimation for the scheduling variables using a recursive Least-Squares procedure. A similar procedure is applied in [19]. The main drawback is that the results could be sub-optimal, meaning that the local minima found through their QPs/Sequential QPs (SQPs) may not ensure sufficient performance. • The most prominent results are those reported in the recent works by Cisneros & Werner [20][21][22]. The original idea [20] is to iteratively use the prediction for the future state trajectories (output of the QP) to compute a guess of the scheduling parameters, using the nonlinear proxy ρ(k+j) = f_ρ(x(k+j)). The method was extended [21] to reference tracking and shown to yield a Second-Order Cone Program (SOCP) formulation for the resulting NMPC, which is easier to solve than an NP.
The formulation was further smoothed in the most recent reference [22], wherein the procedure is split into an offline preparation part, using Linear Matrix Inequalities (LMIs) to compute a robust positively invariant terminal set, and an online part residing solely in re-iterating SQPs.

Contributions and Organization
As detailed in the prequel, the topic of NMPC through qLPV embedding has been studied by a handful of papers and deserves further attention. It seems that the development of these strategies can surely be established as a competitive category for nonlinear MPC design, regarding time-critical applications.
Pursuing this matter and motivated by the previous discussion, this paper proposes an alternative formulation to the recent algorithm by Cisneros and Werner [22]. In their work, the nonlinear proxy has to be evaluated online w.r.t. the future state evolution prediction originated through the QP. The alternative procedure proposed herein relies on approximating the nonlinear proxy by a time-varying auto-regressive function, whose parameters are found through another QP, based on a Moving-Horizon Estimation (MHE) method. This alternative is able to slightly boost the numerical performance of the whole algorithm, which only needs to evaluate three QPs to find the control law.
Accordingly, the contributions presented are the following: • An alternative formulation for NMPC is proposed: using qLPV embeddings, the MPC operates together with an MHE layer, which estimates the future behaviour of the scheduling parameters. • The convergence of the algorithm is demonstrated. • A benchmark example is used to demonstrate the effectiveness of the proposed scheme, in terms of performance and numerical burden.
Regarding organization, this paper is structured as follows. In the next Section, the preliminaries and formalities are presented, especially regarding how nonlinear processes can be embedded into a qLPV representation through LDI. Moreover, the problem setup regarding MPC applied to such qLPV models is presented. Furthermore, the proposed MHE-MPC formulation, the discussion about stability and an offline LMI-solvable remedy for the computation of the terminal ingredients are addressed. Lastly, simulation results are presented and general conclusions are drawn.

Notation
In this work, the set of non-negative real numbers is denoted by R+, whilst the set of non-negative integers including zero is denoted by N. The index set N_{[a,b]} represents {n ∈ N | a ≤ n ≤ b}, with 0 ≤ a ≤ b. The identity matrix of size n is denoted as I_n; col{·} denotes the vectorization (collection) of the entries and diag{v} denotes the diagonal matrix generated with the line vector v.
The value of a given variable v(k) at time instant k + j, computed based on the information available at instant k, is denoted as v(k + j|k).
K refers to the class of positive and strictly increasing scalar functions that pass through the origin. A given function α : R+ → R+ is of class K if α(0) = 0 and α is strictly increasing. A real-valued scalar function α : R+ → R+ belongs to class K∞ if it belongs to class K and it is radially unbounded (this is, lim_{s→+∞} α(s) = +∞). A function β : R+ × R+ → R+ belongs to class KL if, for each fixed t ∈ R+, β(·, t) ∈ K and, for each fixed s ∈ R+, β(s, ·) is non-increasing and lim_{t→+∞} β(s, t) = 0.
C denotes the set of all compact convex subsets of R . A convex and compact set ∈ C with non-empty interior, which contains the origin, is named a PC-set. A subset of R is denoted a polyhedron if it is an intersection of a finite number of half spaces. A polytope is defined as a compact polyhedron. A polytope can be analogously represented as the convex hull of a finite number of points in R . A hyperbox is a convex polytope where all the ruling hyperplanes are parallel with respect to their axes.
Finally, consider two sets A ⊂ R^n and B ⊂ R^n. The Minkowski set addition is defined by A ⊕ B := {a + b | a ∈ A, b ∈ B}, while the Pontryagin set difference is defined by A ⊖ B := {c ∈ R^n | c + b ∈ A, ∀ b ∈ B}. The Cartesian product between two sets is defined as A × B.
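These set operations can be illustrated on one-dimensional interval sets; the following is a minimal sketch (the helper names are our own, not from the paper):

```python
# Minkowski sum and Pontryagin difference for 1-D interval sets (lo, hi).
def minkowski_sum(A, B):
    # A ⊕ B = {a + b | a ∈ A, b ∈ B}
    return (A[0] + B[0], A[1] + B[1])

def pontryagin_diff(A, B):
    # A ⊖ B = {c | c + b ∈ A for all b ∈ B}
    lo, hi = A[0] - B[0], A[1] - B[1]
    if lo > hi:
        return None  # empty set
    return (lo, hi)

X = (-2.0, 2.0)   # state box
W = (-0.5, 0.5)   # disturbance box
print(minkowski_sum(X, W))    # (-2.5, 2.5)
print(pontryagin_diff(X, W))  # (-1.5, 1.5)
```

The Pontryagin difference shrinks the set by the disturbance bound, which is how it is typically used to tighten constraints.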

PRELIMINARIES
In this Section, we detail how nonlinear processes can be described under a qLPV formalism; we also present some other formalities.

The Nonlinear System and its qLPV Embedding
We consider the following generic discrete-time nonlinear system:
x(k+1) = f(x(k), u(k)) ,
y(k) = g(x(k), u(k)) ,   (1)
where k ∈ N represents the sampling instant, x : N → X ⊂ R^{n_x} represents the system states, u : N → U ⊂ R^{n_u} is the vector of control inputs and y : N → Y ⊆ R^{n_y} stands for the measured outputs of the process.
We begin by characterizing this process, which should satisfy the following key Assumptions:
Assumption 1. The admissible zone for the states is given by a 2-norm upper bound on each entry x_j, this is:
X := { x ∈ R^{n_x} : ‖x_j‖_2 ≤ x̄_j , j ∈ N_{[1,n_x]} } .   (2)
Assumption 2. The admissible region for the control inputs is given by a 2-norm upper bound on each entry u_j, this is:
U := { u ∈ R^{n_u} : ‖u_j‖_2 ≤ ū_j , j ∈ N_{[1,n_u]} } .   (3)
Assumption 3. The nonlinear maps f : X × U → X and g : X × U → Y are continuous and continuously differentiable with respect to x, i.e. of class C∞.
Assumption 4. This nonlinear system is controllable in terms of x and y through the input trajectory u.
To represent any nonlinear system, as the one in Eq. (1), under a qLPV formalism, this last Assumption must be verified, since it is the LDI property that furnishes the settings for such representation.
The LDI property is as follows: suppose that, for each x ∈ X and u ∈ U and for every sampling instant k, there exists a matrix H(x, u, k) ∈ H such that
col{ f(x, u), g(x, u) } = H(x, u, k) col{ x, u } ,   (4)
where H ⊆ R^{(n_x+n_y)×(n_x+n_u)} is the set within which the LDI property holds. Then, when there exists a matrix H(·) that verifies Eq. (4), the nonlinear model from Eq. (1) can be equivalently expressed as:
x(k+1) = A(ρ(k)) x(k) + B(ρ(k)) u(k) ,
y(k) = C(ρ(k)) x(k) + D(ρ(k)) u(k) ,   (5)
which is a qLPV formulation where f_ρ : X × U → P ⊂ R^{n_ρ} represents the endogenous nonlinear function for the scheduling parameters, i.e. ρ(k) = f_ρ(x(k), u(k)). Note that ρ(k) is bounded and known online at each instant k, but generally unknown for any future instant k+j, ∀ j ∈ N_{[1,∞]}.
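As an illustration of such an embedding, consider a hypothetical scalar system (our own example, not the paper's benchmark): x(k+1) = x + T(−x³ + u), which admits the exact qLPV form A(ρ) = 1 − Tρ with scheduling proxy ρ = x². A minimal sketch:

```python
# Sketch of a qLPV embedding on an illustrative scalar system:
# x(k+1) = x + T*(-x**3 + u) is rewritten exactly as
# x(k+1) = A(rho)*x + B*u, with scheduling proxy rho = f_rho(x) = x**2.
T = 0.1  # sampling period (assumed value)

def f_nonlinear(x, u):
    return x + T * (-x**3 + u)

def f_rho(x):
    # scheduling proxy: known online from the measured state
    return x**2

def A(rho):
    return 1.0 - T * rho

B = T

x, u = 0.7, -0.3
rho = f_rho(x)
# the embedding is exact, not an approximation
assert abs(f_nonlinear(x, u) - (A(rho) * x + B * u)) < 1e-9
```

Note that the nonlinearity is "hidden" in ρ: once ρ is known, the model is linear in x and u, which is precisely what keeps the MPC a QP.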
We consider that the qLPV scheduling parameters have bounded rates of variation, this is: ρ(k+1) = ρ(k) + δρ(k+1), being δρ ∈ δP, ∀ k, their variation rates. This is very reasonable for any practical application. Assuming that ρ varies arbitrarily implies quite conservative control synthesis [23].

PROBLEM SETUP
Regarding the qLPV embedded model in Eq. (5), we proceed by detailing how MPC can be applied to regulate and control this system. For simplicity of the reference tracking demonstrations, we drop the direct input-output feedthrough term, i.e. D(ρ(k)) = 0. We note that processes with D(ρ(k)) ≠ 0 can still be dealt with by the proposed method, with no additional drawbacks.
The essential idea behind MPC is to consider a quadratic finite-horizon functional cost, which embeds the performance objectives of the system within this given horizon. The implementation resides in minimizing this cost with respect to a control signal sequence, using a model of the system in order to make predictions for the future variable values along the horizon. The optimization also includes the operational constraints of the process variables (admissibility region). Generically, we consider the following steady-state reference tracking performance cost:
J(x(k), U_k, P_k) = Σ_{j=0}^{N−1} ( ‖x(k+j|k) − x_r‖²_Q + ‖u(k+j|k) − u_r‖²_R ) + V(x(k+N|k)) ,   (6)
where Q and R are positive definite weighting matrices and the pair (x_r, u_r) defines a known admissible steady-state reference target for the nonlinear system. The optimization cost considers a prediction horizon of N steps and a positive terminal stage value V(x(k+N|k)). The MPC framework considers a moving-window strategy. Therefore, at each sampling instant k, since x(k) and ρ(k) are known, the corresponding optimization problem is solved, which gives the solution U_k ∈ R^{N×n_u}. This solution constitutes the following sequence of control inputs:
U_k = col{ u(k|k), u(k+1|k), ..., u(k+N−1|k) } ,   (7)
whose first input u(k|k) is applied to the process. Then, the horizon slides forward and the procedure is updated. The complete optimization, at each sampling instant k, is given as follows:
min_{U_k} J(x(k), U_k, P_k)   (8)
s.t. x(k+j+1|k) = A(ρ(k+j|k)) x(k+j|k) + B(ρ(k+j|k)) u(k+j|k) , ∀ j ∈ N_{[0,N−1]} ,
x(k+j|k) ∈ X , u(k+j|k) ∈ U , ∀ j ∈ N_{[0,N−1]} ,
x(k+N|k) ∈ X_f ,

where X_f and V(·) are the terminal ingredients, combined to ensure recursive feasibility of the algorithm (see Section Stability and Offline Preparations).

LPV Process Model
Due to Eq. (8), it follows that the future values for the qLPV scheduling variables ρ(k+j) are not known for any j ≥ 1. At each instant k, the optimization operates based on the knowledge of x(k) and u(k), which can be used to compute ρ(k) through the nonlinear qLPV proxy f_ρ(·). One could easily include this proxy into the optimization, making it also subject to ρ(k+j) = f_ρ(x(k+j), u(k+j)) together with the process model, but this would convert Eq. (8) into a Nonlinear Programming Problem (NP), which is associated with numerical complexity issues (as previously discussed).
The NP execution is computationally unattractive [22] because of this general nonlinear dependence of the predicted states on the control inputs and on previous states. Therefore, following the lines of previous works [19,22], this paper pursues a fast implementation of the LPV MPC optimization procedure in Eq. (8), which means that we do not seek to analytically include the nonlinear qLPV scheduling proxy f_ρ(·) in the optimization, but rather to provide values for the complete evolution of the scheduling parameters along the prediction horizon, as if they were known (thus detaching the nonlinear dependency). This is, we aim to solve Eq. (8) based on x(k), ρ(k) and on the future "scheduling sequence" vector P_k ∈ R^{N×n_ρ}, being
P_k = col{ ρ(k|k), ρ(k+1|k), ..., ρ(k+N−1|k) } .   (9)
If the actual evolution of the scheduling parameters is as given by P_k, the MPC ensures perfect regulation. Furthermore, it is formulated as a Quadratic Programming Problem, which can be tackled for many time-critical applications with modern solvers. In fact, we approximate the NP solution by one which resides in a "guess" for the scheduling sequence P_k, which attractively converges to the actual value of this vector as the procedure iterates. The solution to estimate P_k is based on a Moving-Horizon Estimation algorithm, which is further detailed in Section The MHE-MPC Mechanism.
We must proceed by providing some complementary Assumptions regarding this qLPV MPC optimization problem setup. For such, we denote X_k ∈ R^{N×n_x} as the evolution of the state values along the prediction horizon, this is:
X_k = col{ x(k+1|k), ..., x(k+N|k) } .   (10)
Assumption 5. The qLPV scheduling proxy is set-wise and vector-wise applicable, this is, it holds as f_ρ(X, U) and also as f_ρ(X_k, U_k). The first operation stands for the application of f_ρ(·) to the bounds of each entry set, while the latter stands for the application of f_ρ(·) to each sample of the entry vectors.
Assumption 6. The application of the scheduling proxy to the admissible zone for the states and inputs is a subset of the scheduling set, this is: f_ρ(X, U) ⊆ P.
Assumption 7. The admissible region X × U is a subset of the image of the inverse of the scheduling proxy domain, being f_ρ(·) bijective. This means that the inverse of the scheduling proxy always maps admissible scheduling variables ρ ∈ P to admissible pairs (x, u). This is mathematically expressed as follows: X × U ⊆ f_ρ^{−1}(P).
From the viewpoint of each sampling instant k, the scheduling sequence can be directly evaluated as:
P_k = f_ρ(X_k^★) ,   (13)
where X_k^★ comprises the instantaneous state and the state evolution until x(k+N−1|k):
X_k^★ = col{ x(k), x(k+1|k), ..., x(k+N−1|k) } ,
which is directly given by col{ x(k), X_k } with the last entry suppressed.
With the previous discussion in mind, we proceed by using the qLPV model from Eq. (5) and the definitions from Eqs. (7), (9) and (10) to analytically provide a solution to the state evolution which is explicitly dependent on the scheduling sequence.
For the LTI case, the state evolution X_k, departing from x(k), is expressed on a linearly dependent basis w.r.t. x(k) and to the sequence of control inputs U_k, as follows:
X_k = Ã x(k) + B̃ U_k .   (14)
Analogously, for the qLPV case, since linearity is retained through the input-output channels (i.e. from u to y), the state evolution can be given in a quite similar fashion, but with parameter dependence on P_k appearing on the transition matrices, this is:
X_k = A(P_k) x(k) + B(P_k) U_k ,   (15)
where the parameter-dependent matrices stack the successive state-transition products: the j-th block row of A(P_k) is A(ρ(k+j−1|k)) ⋯ A(ρ(k|k)), and the (j, i) block of B(P_k) is A(ρ(k+j−1|k)) ⋯ A(ρ(k+i|k)) B(ρ(k+i−1|k)), with identity factors where the index ranges are empty.
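The construction of these stacked prediction matrices can be sketched as follows; the names `calA`/`calB` stand in for A(P_k), B(P_k), and the result is checked against a step-by-step simulation of the scheduled model:

```python
import numpy as np

# Sketch (assumed helper): condensed prediction matrices for a scheduled
# model x(k+j+1) = A_j x(k+j) + B_j u(k+j), so that
# X = calA x(k) + calB U stacks x(k+1|k) ... x(k+N|k).
def prediction_matrices(A_seq, B_seq):
    N = len(A_seq)
    nx, nu = B_seq[0].shape
    calA = np.zeros((N * nx, nx))
    calB = np.zeros((N * nx, N * nu))
    Phi = np.eye(nx)
    for j in range(N):
        Phi = A_seq[j] @ Phi                    # product A_j ... A_0
        calA[j*nx:(j+1)*nx, :] = Phi
        for i in range(j + 1):
            # transition from step i+1 up to step j+1, applied to B_i
            Psi = np.eye(nx)
            for m in range(i + 1, j + 1):
                Psi = A_seq[m] @ Psi
            calB[j*nx:(j+1)*nx, i*nu:(i+1)*nu] = Psi @ B_seq[i]
    return calA, calB

# check against a step-by-step simulation (illustrative matrices)
A_seq = [np.array([[1.0, 0.1], [0.0, 0.9 - 0.05*j]]) for j in range(3)]
B_seq = [np.array([[0.0], [0.1]]) for _ in range(3)]
x0 = np.array([1.0, -1.0]); U = np.array([0.5, -0.2, 0.1])
calA, calB = prediction_matrices(A_seq, B_seq)
X = calA @ x0 + calB @ U
x = x0.copy()
for j in range(3):
    x = A_seq[j] @ x + B_seq[j] @ U[j:j+1]
assert np.allclose(X[-2:], x)  # last block row equals x(k+N|k)
```

This is exactly the LTI condensing from Eq. (14), only with a different A_j, B_j per horizon step.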
In order to compute matrices A(P_k) and B(P_k), some nonlinear operations should be performed. Anyhow, the procedure to compute them can be done completely outside the MPC optimization. In this form, the MPC receives, at each instant k, the following inputs: x(k), A(P_k) and B(P_k) (as well as the steady-state target given by x_r and u_r); then, solving the optimization problem in Eq. (8), it results in U_k, from which the first entry u(k|k) is applied to the plant. For such goal, the MPC internally minimizes the cost function J(x(k), U_k, P_k) from Eq. (6), which can be written in the vector form as follows:
J = U_k^T H U_k + f^T U_k + c .   (16)
The Hessian, gradient and offset terms are analytically given by:
H = B(P_k)^T Q̃ B(P_k) + R̃ ,   (17)
f = 2 B(P_k)^T Q̃ ( A(P_k) x(k) − X_r ) − 2 R̃ U_r ,   (18)
c = ( A(P_k) x(k) − X_r )^T Q̃ ( A(P_k) x(k) − X_r ) + U_r^T R̃ U_r + V(x(k+N|k)) ,   (19)
where Q̃ and R̃ are the block-diagonal stackings of Q and R along the horizon, and X_r, U_r stack the reference targets x_r, u_r.
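With the cost in this condensed form, the unconstrained minimizer is obtained by a single linear solve; a minimal numerical sketch (matrix names illustrative, random data standing in for an actual model):

```python
import numpy as np

# Sketch of the condensed cost J = U^T H U + f^T U + c with
# X = calA x0 + calB U; the unconstrained minimizer solves 2 H U = -f.
# All names (calA, calB, Qbar, Rbar) are illustrative placeholders.
np.random.seed(0)
nxN, nuN = 6, 3
calA = np.random.randn(nxN, 2); calB = np.random.randn(nxN, nuN)
x0 = np.array([1.0, 0.5])
Xr = np.zeros(nxN); Ur = np.zeros(nuN)
Qbar = np.eye(nxN); Rbar = 0.1 * np.eye(nuN)

H = calB.T @ Qbar @ calB + Rbar                           # Hessian
f = 2 * calB.T @ Qbar @ (calA @ x0 - Xr) - 2 * Rbar @ Ur  # gradient term
U_opt = np.linalg.solve(2 * H, -f)

# first-order optimality: the gradient 2 H U + f vanishes at U_opt
grad = 2 * H @ U_opt + f
assert np.linalg.norm(grad) < 1e-9
```

Adding the admissibility and terminal constraints turns this linear solve into the QP of Eq. (8), but H, f and c are unchanged.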

Process constraints
The qLPV MPC problem solution proposed in this paper is formulated with respect to the scheduled state evolution equation, as given by Eq. (15), with x(k), A(·) and B(·) being passed as inputs to the resulting MPC optimization. This means that the MPC optimization does not treat the state evolution as optimization variables, but the whole problem is formulated solely in terms of U_k.
The terminal constraint x(k+N|k) ∈ X_f is stated in terms of U_k through the last block row of Eq. (15). Additional output constraints are also easily formulated. If the process has some outputs (which are not necessarily equal to y, but could be) that must be hard constrained, i.e. z ∈ Y_z, then assume that these outputs can be described as z(k) = C_z(ρ(k)) x(k). In what follows, we take C_z(P_k) as the block-diagonal stacking of C_z(·) along the horizon. Then, the additional constraint is formulated as follows:
C_z(P_k) B(P_k) U_k ∈ Ỹ_z ⊖ C_z(P_k) A(P_k) x(k) ,
where Ỹ_z stacks Y_z along the horizon.

Reference tracking
Finally, before showing the proposed mechanism to guess P_k and solve the qLPV MPC problem, a comment must be made regarding reference tracking. The considered cost function J from Eq. (6) (or its vector form of Eq. (16)) is set in order to minimize the variations from the desired set-point target (x_r, u_r).
The majority of processes that require reference tracking require it regarding the controlled outputs and not the states. This is, to ensure that y(k) tracks some steady-state value y_r. Since the controlled outputs in Eq. (5) are given by y(k) = C(ρ(k)) x(k), we can find a linear (parameter-varying) combination of the states that, if tracked, ensures that y(k) → y_r. We denote the output tracking target as y_r, which is known.
Then, following the lines of previous reference tracking frameworks [24][25][26], we use an offline reference optimization selector, which is set to find the set-point target (x_r, u_r) that abides by the constraints and ensures an output tracking of y_r. This nonlinear optimization procedure searches for a pair (x_r, u_r) ∈ X × U such that x_r = A(ρ_r) x_r + B(ρ_r) u_r and C(ρ_r) x_r = y_r, with ρ_r = f_ρ(x_r, u_r). This procedure ensures some steady-state x_r = A(ρ_r) x_r + B(ρ_r) u_r that abides by the state constraints and guarantees that the output tracking goal is followed.
Note that this optimization procedure has a steady-state target point (x_r, u_r) as output, and not the full state and input trajectories towards this target.
The state reference selection problem can be solved online, at each sampling instant, if the output reference goal changes over time. By doing so, an additional computational complexity appears, which can be smoothed if the scheduling parameter guess is used instead of solving the nonlinear optimization itself. A full discussion on periodically changing reference tracking for nonlinear MPC has been recently presented [27]. The focus of this paper is constant reference signals, either given in terms of states or outputs. It is important to notice that, in order for the method to hold, the state reference must be contained inside the terminal set of the MPC problem from Eq. (8). This ensures that the stability and recursive feasibility guarantees (as verified in Section Stability and Offline Preparations) hold.
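For a frozen-parameter (locally linear) model, the target selector reduces to a linear system of equations; a minimal sketch under that simplifying assumption (matrices illustrative, not the paper's benchmark):

```python
import numpy as np

# Sketch of a steady-state target selector for a frozen-parameter linear
# model: find (xr, ur) with xr = A xr + B ur and C xr = yr.
A = np.array([[1.0, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
yr = np.array([2.0])

# stack the steady-state and output-tracking equations into one solve
M = np.block([[A - np.eye(2), B], [C, np.zeros((1, 1))]])
rhs = np.concatenate([np.zeros(2), yr])
sol = np.linalg.solve(M, rhs)
xr, ur = sol[:2], sol[2:]
assert np.allclose(A @ xr + B @ ur, xr)  # steady state
assert np.allclose(C @ xr, yr)           # output target reached
```

In the qLPV case the matrices depend on ρ_r = f_ρ(x_r, u_r), which is what makes the paper's selector a nonlinear program rather than a linear solve.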

THE MHE-MPC MECHANISM
In the general qLPV embedding case of Eq. (5), the scheduling proxy f_ρ(·) is an arbitrary function of both state and input. For notation ease, we drop the control input dependency, taking ρ(k) = f_ρ(x(k)). Anyhow, note that all that follows can be trivially extended to the broader case.
The backbone idea of the method proposed in this paper follows the fashion of previous papers [19,22]: to iteratively refine the predictions/guesses of the scheduling sequence P_k based on the (adjusted) state predictions X_k^★. The main novelty of this paper is how the refining and estimation of P_k is done: in the prior works, the scheduling sequence is taken, at each iteration, directly as given by Eq. (13), i.e. as a nonlinear operator upon a vector, which can be computationally difficult to track, depending on the kind of nonlinearity present in the scheduling map f_ρ(·); in contrast, in this paper, P_k is taken according to a linear time-varying operator on X_k^★, this is: P_k = P_{k−1} + Θ_k X_k^★. This linear operator derives from a Moving-Horizon Estimation procedure, which proceeds by trying to match a fixed-size linear auto-regressive model for P_k, being Θ_k the model parameters computed by the MHE at a given sampling instant and P_{k−1} the scheduling sequence guess at the previous instant.
A priori, the operation of this MHE has the computational complexity of a QP, which could be faster to evaluate than Eq. (13) with A(P_k) and B(P_k), depending on the amount of nonlinearities present in f_ρ(·). Moreover, the proposed MHE-MPC mechanism is able to provide convergence to the real scheduling sequence faster than looping Eq. (13) to the MPC [22], as demonstrated through the experiment presented in Section Benchmark Example. The method follows: the scheduling sequence guess is updated through a linear auto-regressive rule on the adjusted state predictions, which can be given in compact form by:
P_k = f_MHE(P_{k−1}, Θ_k, X_k^★) = P_{k−1} + Θ_k X_k^★ ,
being f_MHE(·) a fairly easy map to compute with respect to numerical burden.
Proof. Indeed, due to Assumption 8, it is quite reasonable that Proposition 1 holds: any algebraic function of the form ρ(k+1) = f_ρ(x(k+1)) can be Taylor-expanded to achieve a linear dependency on x(k+1) with a sufficiently small error. The proposed procedure uses an MHE mechanism to estimate these parameter values θ_{0,0} to θ_{N−1,N−1}, at each sampling instant, concatenated as Θ_k, through the following QP:
s.t.
Essentially, this MHE scheme operates in order to find a parameter matrix Θ_k that makes the linear auto-regressive equation P_k = P_{k−1} + Θ_k X_k^★ yield the best match between the state evolution sequence X_k^★ and the qLPV model. Notice that the MHE algorithm only needs a few data from the previous step to find the parameter matrix Θ_k, being these the state predictions X_k^★, the scheduling sequence P_{k−1} and the input vector U_k. Figure 1 synthesizes the proposed algorithm, which relies on a coordination between the MHE and the MPC optimization procedures. We must note that the MHE loop should operate until P_k converges to the actual value of the scheduling sequence, or until a certain stop criterion/heuristic threshold for the number of iterations is reached.
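The matching step can be sketched as a least-squares fit of Θ_k; the toy below assumes a diagonal Θ (one parameter per horizon step) and synthetic data, a deliberate simplification of the constrained MHE QP:

```python
import numpy as np

# Sketch (assumed simplification): fit Theta so that
# P = P_prev + Theta @ Xstar matches the scheduling samples, i.e. a
# least-squares version of the MHE matching step, with diagonal Theta.
np.random.seed(1)
N = 5
Xstar = np.random.randn(N)                # adjusted state predictions
P_prev = np.random.randn(N)               # previous scheduling guess
Theta_true = np.diag(np.random.randn(N))  # "true" update operator
P = P_prev + Theta_true @ Xstar           # scheduling samples to match

# one parameter per horizon step: the fit is elementwise (Xstar != 0)
theta = (P - P_prev) / Xstar
P_fit = P_prev + np.diag(theta) @ Xstar
assert np.allclose(P_fit, P)  # auto-regressive model reproduces P exactly
```

The actual MHE also carries the scheduling-rate bounds δρ ∈ δP as constraints, which is why it is a QP rather than a plain least-squares solve.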
The proposed scheme is also detailed through Algorithm 1 below. Its application departs from an initial state evolution sequence X_0, which can be simply taken as a constant/frozen evolution for the states, and two initial scheduling sequences P_0 and P_{−1}, which can be simply taken as if ρ(k) remained frozen along the whole prediction horizon. The Algorithm also departs with a known terminal set condition X_f and a known target reference goal (x_r, u_r). The Hessian, gradient and offset of the MPC cost are taken as given by Eqs. (17)-(19), respectively.
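The coordination loop can be sketched for a scalar system x(k+1) = A(ρ)x + Bu with ρ = x² (an illustrative choice, not the paper's benchmark), with the scheduling guess refined directly from the predicted states, a simplification of the MHE step:

```python
# Sketch of the guess-refinement loop: predict states with the current
# scheduling guess, then refine the guess from the predicted trajectory,
# iterating until the guess is consistent with the predictions.
T, N = 0.1, 5
A = lambda rho: 1.0 - T * rho
B = T
f_rho = lambda x: x**2

x0 = 0.8
U = [0.1] * N                # frozen input sequence for this test
P = [f_rho(x0)] * N          # initial guess: frozen scheduling

for it in range(50):
    # predict the states along the horizon with the current guess
    X, x = [], x0
    for j in range(N):
        x = A(P[j]) * x + B * U[j]
        X.append(x)
    # refine the guess from the predicted trajectory (shifted one step)
    P_new = [f_rho(x0)] + [f_rho(xj) for xj in X[:-1]]
    if max(abs(a - b) for a, b in zip(P, P_new)) < 1e-12:
        break
    P = P_new

# at convergence, the guess matches the true nonlinear trajectory
x = x0
for j in range(N):
    assert abs(P[j] - f_rho(x)) < 1e-9
    x = A(f_rho(x)) * x + B * U[j]
```

Because the embedding is exact, each sweep propagates the correct scheduling values one step further along the horizon, so the loop converges in at most N sweeps here.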

Convergence property
In order to demonstrate the convergence of the proposed method (as in its implementation form given in Algorithm 1), we will proceed by verifying a well-known result for Newton-based SQPs from the literature [28][29][30]. This is the same path followed in a previous paper [22], which invokes established results to demonstrate that, under certain conditions, the MHE-MPC mechanism that is solved at each iteration is equivalent to a quadratic sub-problem used in standard Newton SQP. Therefore, a local convergence property is readily found.

Proposition 2. A quadratic sub-problem program of SQP algorithms is derived by a second-order approximation of the SQP optimization cost and a linearization of its constraints.
Proof. Found in [28] .
For illustration purposes regarding this matter, we consider a generic NP with cost J(z), equality constraints h(z) = 0 and inequality constraints g(z) ≤ 0. This problem can be given as a quadratic sub-problem directly, as follows:
min_{Δz} (1/2) Δz^T H(z_j) Δz + ∇J(z_j)^T Δz   (27)
s.t. h(z_j) + ∇h(z_j)^T Δz = 0 ,
g(z_j) + ∇g(z_j)^T Δz ≤ 0 ,
where H(z_j) denotes the Hessian of the optimization cost J(·) and ∇h(z_j) and ∇g(z_j) denote gradient operators. We note that this sub-problem is evaluated at a given solution estimate z_j (at some given iteration), for which Δz = z − z_j.
Regarding the proposed MHE-MPC mechanism, we can easily show that if either simple Jacobian linearization or Linear Differential Inclusion is used to find a qLPV model (as in Eq. (5)) for the nonlinear system (as in Eq. (1)), then the proposed mechanism iterates in equivalence to a Newton SQP sub-problem. Notice how such sub-problem in Eq. (27) is identical to the MHE-MPC optimization given through the consecutive iterations of the MHE (Eq. (25)) and the MPC (Eq. (8)). The terminal constraint in the MPC optimization adds no convergence trade-off.
Thence, it follows that if local convergence of the equivalent Newton SQP can be established, the proposed MHE-MPC also yields convergence. The sufficient conditions for local convergence of a Newton SQP sub-problem at z = z_j, as given by prior references [29][30][31][32], are that (i) the problem is set simply with equality constraints (not the MPC case) or that (ii) the subset of active inequality constraints is known before the optimization solution. The second condition is also not true for general MPC paradigms. However, one can iterate the sub-problem until convergence is found at another point z = z_{j+1}, as previously discussed [22,33,34].
In practice, the proposed MHE-MPC will not be set to freely iterate until the convergence of P_k. This is not desirable because the number of iterations needed for convergence may require more time than the available sampling period. Therefore, a stop criterion is added to the mechanism, so that iterations stop at a given threshold. A warm-start is also included by shifting the results regarding U_k and P_k from one sampling instant as the initial guess for the optimization at k+1, which ensures that the proposed algorithm reaches convergence after a few discrete-time samples. We note that the convergence of Newton SQP sub-problems with warm-start has been assessed [33,34] and shown to be practicable for real-time schemes.
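The warm-start shift can be sketched as follows (the padding convention, repeating the last entry, is an assumption on our part):

```python
# Sketch of the warm-start shift: the solution from instant k, shifted by
# one step and padded by repeating its last entry, seeds instant k+1.
def warm_start(seq):
    return seq[1:] + [seq[-1]]

U_prev = [0.4, 0.2, 0.1, 0.05]
assert warm_start(U_prev) == [0.2, 0.1, 0.05, 0.05]
```

The same shift is applied to both the input sequence U_k and the scheduling guess P_k.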

STABILITY AND OFFLINE PREPARATIONS
In this Section, we offer a Theorem to construct the terminal ingredients of the MPC algorithm: (a) the terminal set X_f to which x(k+N|k) is bounded, and (b) the terminal offset cost V(x(k+N|k)) minimized by the MPC.
Since the establishment of the terminal ingredients toolkit [2,35] as the key way to ensure stability and recursive feasibility of state-feedback predictive control loops, MPC has grown in both theory and industrial practice.
The usual approach with terminal ingredients resides in ensuring that some conditions are met by (a) the terminal set X_f and (b) the terminal cost V(x(k+N|k)) with respect to a nominal state-feedback controller u(k) = K_n x(k), which is usually the unconstrained solution of the MPC problem. For the tracking case, the nominal feedback is given by u(k) = K_n (x(k) − x_r) (and so the terminal constraint becomes x(k+N|k) − x_r ∈ X_f and the terminal cost V(x(k+N|k) − x_r)). Accordingly, we develop a sufficient stability condition for the proposed MHE-MPC mechanism in order to verify these conditions. Firstly, we consider that there exists a parameter-dependent nominal state-feedback gain K_n : R^{n_ρ} → R^{n_u×n_x}. For demonstration simplicity and notation lightness, we will proceed with x_r null³.
Of course, there is a complexity barrier to solve this problem, because the states have parametric nonlinearities that impact their trajectories (the qLPV scheduling parameters). Therefore, we determine this nominal feedback gain together with the terminal ingredients, which are also taken as parameter-dependent on ρ. We consider, for regularity, an ellipsoidal set as the terminal constraint, which is given by:
X_f(ρ) := { x ∈ R^{n_x} : x^T P(ρ) x ≤ α } .   (28)
This ellipsoid is centered at the origin and has its radius dictated by the scalar α. Furthermore, this terminal set is a sub-level set of the terminal cost V(·), which is taken as a Lyapunov function as follows:
V(x, ρ) = x^T P(ρ) x .   (29)
This parameter-dependent nominal feedback gain K_n(ρ) and the parameter-dependent terminal ingredients, verbalized through the symmetric parameter-dependent Lyapunov matrix P(ρ), are so that the following input-to-state stability Theorem is guaranteed.
Theorem 1. Input-to-State Stable MPC [2,22,35,36] Let Assumptions 4 and 7 hold. Assume that a nominal control law u = K_n(ρ) x exists. Consider that the MPC is in the framework of the optimization problem in Eq. (8), with a terminal state set given by X_f(ρ) and a terminal cost V(x, ρ). Then, input-to-state stability is ensured if the following conditions hold ∀ ρ ∈ P:
• (C1) The origin lies in the interior of X_f(ρ);
• (C2) Any consecutive state to x, given by (A(ρ(k)) + B(ρ(k)) K_n(ρ(k))) x, lies within X_f(ρ) (i.e. this is an invariant set);
• (C3) The discrete algebraic Riccati condition is verified within this invariant set, this is, ∀ x ∈ X_f(ρ(k)):
V( (A(ρ(k)) + B(ρ(k)) K_n(ρ(k))) x , ρ(k+1) ) − V(x, ρ(k)) ≤ − x^T Q x − (K_n(ρ(k)) x)^T R (K_n(ρ(k)) x) ;
• (C4) The image of the nominal feedback always lies within the admissible control input domain: K_n(ρ(k)) x ∈ U;
• (C5) The terminal set X_f(ρ) is a subset of the admissible state domain X.
Assuming that the initial solution of the MPC problem U_0^★, computed with respect to the initial state x(0), is feasible, the MPC algorithm is indeed recursively feasible, asymptotically stabilizing the state origin.
³ The tracking equivalency is easily done by computing the qLPV model with the state dynamics given with respect to the tracking error.
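Condition (C3) can be checked numerically in the frozen-parameter (LTI) case, with P obtained from the discrete Riccati recursion; this replaces the paper's LMI synthesis by a simple fixed-point iteration on illustrative matrices:

```python
import numpy as np

# Sketch: verify the Lyapunov-decrease condition (C3) for a frozen
# parameter, with P computed by iterating the discrete Riccati recursion.
A = np.array([[1.0, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.array([[0.1]])

P = np.eye(2)
for _ in range(500):  # Riccati fixed-point (value) iteration
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ A - A.T @ P @ B @ K

Kn = -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # nominal feedback
Acl = A + B @ Kn
x = np.array([0.5, -0.3])
V = lambda z: z @ P @ z
lhs = V(Acl @ x) - V(x)
rhs = -(x @ Q @ x) - (Kn @ x) @ R @ (Kn @ x)
assert lhs <= rhs + 1e-8  # (C3) holds, with equality for the Riccati P
```

In the qLPV case, P(ρ) must satisfy this decrease for every admissible ρ and rate δρ, which is what the gridded LMIs of Theorem 2 enforce.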
Proof. Provided in Appendix Proof of Theorem 1.
In order to find some nominal state-feedback gain K_n(ρ), some terminal set X_f and some terminal offset cost V(·), an offline LMI problem is proposed in the sequel. This LMI problem is such that a positive definite parameter-dependent matrix P(ρ) is found to ensure that the conditions of Theorem 1 are satisfied. Due to condition (C3), the LMI is solved over a sufficiently dense grid over ρ, considering its admissibility domain P. We note that (C3) is a time-variant condition, which depends explicitly on ρ(k) and ρ(k+1) due to the nature of the parameter-dependent V(·).
This LMI problem is provided through the following Theorem, which aims to find the largest terminal set X_f that is invariant under the nominal control policy u(k) = κ_n(ρ(k))x(k) for all ρ ∈ P, while remaining admissible, i.e. κ_n(ρ)x ∈ U, ∀x ∈ X_f and ρ ∈ P. Note that the largest ellipsoidal set in the form of Eq. (28) is posed through the maximization of α. Theorem 2. Terminal Ingredients [22,37] The conditions (C1)–(C5) of Theorem 1 are satisfied if there exist a symmetric parameter-dependent positive definite matrix Y(ρ) : R^(n_ρ) → R^(n_x × n_x), a parameter-dependent rectangular matrix W(ρ) : R^(n_ρ) → R^(n_u × n_x) and a scalar α̂ ∈ R, α̂ > 0, such that Y(ρ) = (P(ρ))⁻¹ > 0 and W(ρ) = κ_n(ρ)Y(ρ), and such that the LMIs (31)–(32) hold for all ρ ∈ P and ρ⁺ ∈ P while α̂ is minimized, where I_j denotes the j-th row of the identity matrix I: in LMI (31), I_j is taken w.r.t. the identity I_{n_u}, while in LMI (32) it is taken w.r.t. the identity I_{n_x}.
We must note that the above proof demonstrates that the solution of the LMIs presented in Theorem 2 ensures a positive definite parameter-dependent matrix P(ρ), which can be used to compute the MPC terminal ingredients V(·) and X_f such that input-to-state stability of the closed loop is guaranteed, verifying the conditions of Theorem 1. Furthermore, when the MPC is designed with these terminal ingredients, for whichever initial condition x(0) ∈ X_f it starts with, it remains recursively feasible for all subsequent discrete-time instants k > 0.
Anyhow, Theorem 2 provides infinite-dimensional LMIs, since they should hold for all ρ ∈ P and for all ρ⁺ ∈ P. To address this issue, one can handle the LMIs considering a sufficiently dense grid [38] of points in P × P, at which the LMIs must be enforced. This resolves the infinite dimension of the problem, which is converted into a finite-dimensional LMI problem whose size is given by the number of grid points. For this solution to be practically implementable, continuity of the matrices Y(ρ) and W(ρ) should be verified. We must also note that the parameter dependency of P may be dropped if the system is quadratically stabilizable, which is a conservative assumption.
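The gridding procedure can be sketched as follows: enumerate every pair (ρ(k), ρ(k+1)) on a finite grid over P × P and enforce the parameter-dependent condition at each pair. The scalar parameter range, the toy A(ρ) and the candidate P(ρ) below are illustrative assumptions, not the paper's model.

```python
import numpy as np
from itertools import product

def A(rho):
    # Toy qLPV state matrix (assumption), stable for rho in [0, 1].
    return np.array([[0.5 + 0.2 * rho, 0.1],
                     [0.0,             0.4]])

def P_lyap(rho):
    # Candidate parameter-dependent Lyapunov matrix (assumption).
    return (1.0 + 0.1 * rho) * np.eye(2)

grid = np.linspace(0.0, 1.0, 11)       # dense grid over P = [0, 1]

all_ok = True
for rho, rho_next in product(grid, grid):   # every grid pair in P x P
    # Decrease of V(x, rho) = x' P(rho) x along x+ = A(rho) x, checked
    # at the pair (rho, rho+):
    M = A(rho).T @ P_lyap(rho_next) @ A(rho) - P_lyap(rho)
    all_ok = all_ok and np.max(np.linalg.eigvalsh((M + M.T) / 2)) < 0.0
```

The finer the grid, the closer the finite collection of conditions approximates the infinite-dimensional ones; continuity of the decision matrices in ρ is what justifies the approximation between grid points.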

BENCHMARK EXAMPLE
In this Section, we pursue the application of the proposed MHE-MPC mechanism with terminal ingredients found through Theorem 2. To this end, we apply our control method to a benchmark system, detailed in the sequel.

Continuously-stirred tank reactor
Consider the model of a Continuously-Stirred Tank Reactor (CSTR) process, which consists of an irreversible, exothermic reaction, A → B, in a constant-volume reactor cooled by a single coolant stream, which can be modeled by the following equations:

Ċ_A(t) = (q/V)(C_Af − C_A) − k₀ C_A exp(−E/(R T)),
Ṫ(t) = (q/V)(T_f − T) + ((−ΔH)/(ρ C_p)) k₀ C_A exp(−E/(R T)) + (ρ_c C_pc/(ρ C_p V)) q_c (1 − exp(−hA/(q_c ρ_c C_pc)))(T_c − T),

where C_A is the concentration of A in the reactor, T is the temperature in the reactor, and T_c is the coolant temperature.
In this process, u = q_c is the control input, whereas C_A and T are measurable process variables. The considered model parameters and process constraints are reported in Table 1.
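For illustration, the dynamics above can be simulated with a simple forward-Euler scheme. All numerical values below (flow rates, Arrhenius constants, heat-transfer terms, initial condition) are standard textbook-style assumptions for this type of reactor, not the values of Table 1.

```python
import numpy as np

# Assumed CSTR parameters (illustrative, not Table 1):
q, V = 100.0, 100.0          # feed flow [L/min], reactor volume [L]
C_Af, T_f = 1.0, 350.0       # feed concentration [mol/L] and temperature [K]
k0, E_R = 7.2e10, 1.0e4      # Arrhenius pre-factor [1/min], E/R [K]
dH = -2.0e5                  # reaction enthalpy [cal/mol]
rho_c, Cp = 1000.0, 1.0      # density [g/L] and heat capacity [cal/(g K)],
                             # taken equal for reactor contents and coolant
hA = 7.0e5                   # heat-transfer coefficient x area [cal/(min K)]
T_c = 350.0                  # coolant stream temperature [K]

def f(x, q_c):
    """Continuous-time CSTR dynamics; x = (C_A, T), input u = q_c [L/min]."""
    C_A, T = x
    k = k0 * np.exp(-E_R / T)                     # Arrhenius reaction rate
    dC_A = q / V * (C_Af - C_A) - k * C_A
    dT = (q / V * (T_f - T)
          + (-dH / (rho_c * Cp)) * k * C_A
          + (rho_c * Cp / (rho_c * Cp * V)) * q_c
            * (1.0 - np.exp(-hA / (q_c * rho_c * Cp))) * (T_c - T))
    return np.array([dC_A, dT])

dt = 1e-3                    # Euler step [min]
x = np.array([0.5, 350.0])   # assumed initial (C_A, T)
for _ in range(10000):       # 10 min of open-loop simulation at q_c = 100
    x = x + dt * f(x, 100.0)
```

With these assumed values the open-loop trajectory settles at a low-temperature steady state; the nonlinearity enters through the exponential Arrhenius term, which is precisely what the qLPV scheduling proxy absorbs.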

Control goal and tuning
Considering an arbitrary initial condition x₀ given within the state admissibility set X, the proposed controller is set to steer the state trajectories to a known state reference x_ref. To this end, we use identity weights in the MPC cost. In addition, we use a prediction horizon of N_p = 8 steps. This prediction horizon was chosen in accordance with prior literature using the same nonlinear CSTR benchmark [39].
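In the LTI special case, the finite-horizon MPC cost with identity weights condenses into a least-squares problem in the input sequence, by stacking the predictions as x = Φx₀ + Γu. The scalar model below is an illustrative assumption used only to show this construction.

```python
import numpy as np

a, b = 0.9, 1.0              # toy scalar model x+ = a x + b u (assumption)
Np = 8                       # prediction horizon, as in the tuning above
x0, x_ref = 0.0, 1.0

# Stacked prediction x = Phi * x0 + Gamma @ u over the horizon.
Phi = np.array([a ** (k + 1) for k in range(Np)])   # free response
Gamma = np.zeros((Np, Np))                          # forced response
for i in range(Np):
    for j in range(i + 1):
        Gamma[i, j] = a ** (i - j) * b

# J(u) = ||Phi x0 + Gamma u - x_ref||^2 + ||u||^2 (identity Q and R weights):
# the unconstrained minimizer solves (Gamma'Gamma + I) u = -Gamma'(Phi x0 - r).
H = Gamma.T @ Gamma + np.eye(Np)
g = Gamma.T @ (Phi * x0 - x_ref * np.ones(Np))
u_opt = np.linalg.solve(H, -g)
x_pred = Phi * x0 + Gamma @ u_opt

def cost(u):
    e = Phi * x0 + Gamma @ u - x_ref
    return e @ e + u @ u

# By convexity, u_opt cannot do worse than the zero input sequence.
optimal = cost(u_opt) <= cost(np.zeros(Np)) + 1e-12
```

In the qLPV case the same structure holds, except that Φ and Γ depend on the scheduling sequence estimated by the MHE loop, which is what keeps the online problem a QP.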
The MHE scheme, which is used to estimate the future scheduling behaviour at each sampling instant k, is set to operate with a threshold of 3 loops, which is a verified sufficient bound to induce convergence (refer to the discussion in Section Convergence Property).
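The bounded iteration can be sketched as a fixed-point loop: roll the state trajectory out under the current scheduling guess, refresh the guess from the predicted states through the scheduling proxy, and stop after at most three passes. The scalar qLPV model and proxy below are toy assumptions, not the CSTR's.

```python
import numpy as np

N = 5                                     # prediction horizon (assumption)
x0 = 1.0                                  # current measured state (assumption)

def A(rho):
    # Toy scalar qLPV dynamics x+ = A(rho) x (assumption).
    return 0.6 + 0.2 * rho

def scheduling_proxy(x):
    # Toy scheduling proxy rho = g(x) (assumption), bounded in [0, 1).
    return np.tanh(x) ** 2

rho_seq = np.zeros(N)                     # initial scheduling guess
for loop in range(3):                     # threshold of 3 loops
    # Roll the state trajectory out under the current scheduling guess.
    x_seq = [x0]
    for k in range(N - 1):
        x_seq.append(A(rho_seq[k]) * x_seq[-1])
    # Refresh the scheduling guess from the predicted states.
    rho_new = scheduling_proxy(np.array(x_seq))
    residual = np.max(np.abs(rho_new - rho_seq))
    rho_seq = rho_new
```

For contractive dynamics like this toy example, the residual between consecutive scheduling guesses shrinks rapidly, which is the behaviour the three-loop cap relies on.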

Simulation results
Considering a realistic nonlinear CSTR model, we obtain simulation results that demonstrate the effectiveness of the proposed control scheme. The following results were obtained on a 2.4 GHz, 8 GB RAM Macintosh computer, using Matlab, YALMIP and the Gurobi solver.
First, we show how the MHE operation is able to accurately predict the behaviour of the scheduling trajectory, as depicted in Figure 2. At each instant k, the MHE provides an estimate of the future scheduling sequence, composed of the entries of the scheduling variables. In this Figure, we observe the real scheduling variables ρ₁ and ρ₂ (dash-dotted black line) and the estimates provided at different samples (coloured marked lines). Within a few samples, the predicted trajectory converges to the real one, which confirms the effectiveness of the MHE operation.
Based on the scheduling behaviour predicted by the MHE loop, the predictive controller determines the control input (Figure 4) in order to drive the system states from x₀ to the reference goal x_ref. The corresponding state trajectories are depicted in Figure 3, which also shows the state admissibility set X and the terminal set constraint X_f (a parameter-dependent ellipsoid generated via Theorem 2). As one can see, the process variables follow a smooth trajectory towards x_ref.
Finally, we demonstrate the dissipative properties of the proposed control scheme. In Figure 5, we show the evolution of the MPC stage cost over time. As expected, the stage cost decays and converges to zero, which verifies the dissipation properties required by Theorem 1.

CONCLUSIONS
In this paper, a new method for the fast, real-time implementation of Nonlinear Model Predictive Control is proposed. The method provides a near-optimal, approximate solution, which is found through the online operation of sequential Quadratic Programming problems. The main requirement of the method is that the nonlinear process be described by a quasi-Linear Parameter Varying model, for which the embedding is ensured through a scheduling proxy. The online operation then resides in the consecutive execution of the MPC program together with a Moving-Horizon Estimation scheme, which is used to estimate the future values of the scheduling proxy along the prediction horizon, which are unknown. Input-to-state stability and recursive feasibility properties of the algorithm are ensured by parameter-dependent terminal ingredients, which are computed offline. The method is tested on a benchmark example. We highlight that it proves especially effective for stronger nonlinearities in the qLPV scheduling proxy, for which the MHE scheme operates faster than the application of the scheduling proxy to each entry of the future state variables, as in many other techniques. For future work, the authors plan to assess the issue of periodically-changing (possibly unreachable) output reference signals.