Original article
Arabian Journal of Chemistry. 2022;15:104257
doi: 10.1016/j.arabjc.2022.104257

Optimal control approach for nonlinear chemical processes with uncertainty and application to a continuous stirred-tank reactor problem

School of Mathematical Sciences, Guizhou Normal University, Guiyang 550001, PR China
School of Life Sciences, Guizhou Normal University, Guiyang 550001, PR China
School of Electrical Engineering, Southeast University, Nanjing 210096, PR China
School of Automation, Southeast University, Nanjing 210096, PR China
Key Laboratory of Measurement and Control of CSE, Ministry of Education, Southeast University, Nanjing 210096, PR China

⁎Corresponding authors at: School of Mathematical Sciences, Guizhou Normal University, Guiyang 550001, PR China (X. Wu); School of Life Sciences, Guizhou Normal University, Guiyang 550001, PR China (Y.Z. Hou). xwu@gznu.edu.cn (Xiang Wu), yuzhou_hou@163.com (Yuzhou Hou)

Disclaimer:
This article was originally published by Elsevier and was migrated to Scientific Scholar after the change of Publisher.

Peer review under responsibility of King Saud University.

Abstract

A practical chemical process is usually a dynamic process subject to uncertainty. Stochastic constraints are a natural modeling tool for such processes, in which constraints cannot, or need not, be satisfied strictly. Optimal control of nonlinear systems with stochastic constraints can therefore be used to address practical nonlinear chemical process problems, but the stochastic constraints make this problem hard to solve. By introducing a novel smooth and differentiable approximation function, an approximation-based approach is proposed to address this issue, in which the stochastic constraints are replaced by deterministic ones. The stochastic constrained optimal control problem is thereby converted into a deterministic parametric optimization problem. Convergence results show that the approximation function and the corresponding feasible set converge uniformly to those of the original problem, so the optimal solution of the deterministic parametric optimization problem is guaranteed to converge uniformly to the optimal solution of the original problem. A computational approach is then proposed for solving the original problem. Numerical results, obtained from a nonlinear continuous stirred-tank reactor problem with stochastic constraints, show that the proposed approach is less conservative than existing typical methods and achieves stable and robust performance under small perturbations of the initial system state.

Keywords

Optimal control
Nonlinear chemical processes
Stochastic constraints
Approximation function
Convergence analysis
Stirred-tank reactor

1 Introduction

It is well-recognized that the chemical process problem is a challenging control problem due to its nonlinear nature and the existence of system state and control input constraints (Braatz and Crisalle, 2007; Ostrovsky et al., 2012; Skogestad, 2004; Wang et al., 2011; Zheng et al., 2022). In general, chemical processes can be controlled by using linear dynamical systems analysis and design tools because analytical solutions exist for linear dynamical systems (Choi et al., 2000; Koller and Ricardez-Sandoval, 2017; Tsay et al., 2018). In addition, compared with nonlinear dynamical system simulation, the computational demands of linear dynamical system simulation are quite small (El-Farra and Christofides, 2001; Liao et al., 2018). Unfortunately, if a chemical process is highly nonlinear, the use of a linear dynamical system approach is quite limiting (Chen and Weigand, 1994; Kelley et al., 2020). With advances in nonlinear control theory and computer hardware, nonlinear control techniques make it possible to raise the productivity, profitability, and/or efficiency of chemical processes (Bhatia and Biegler, 1996; Han et al., 2008; Sangal et al., 2012). Thus, nonlinear modeling, optimized dispatching, and nonlinear control have become basic methods for the optimal design and operation of production facilities in the chemical industries (Abdelbasset et al., 2022; Bhat et al., 1990; Bradford and Imsland, 2019; Graells et al., 1995; Li et al., 2022).

A method based on optimal control is extremely important for chemical process applications, such as reactor design (De et al., 2020), process start-up and/or shut-down (Naka et al., 1997), and reactive distillation systems (Tian et al., 2021). The main task of the optimal control problem is to choose a control input such that a given objective function is minimized or maximized for a given dynamical system (Doyle, 1995; Ross and Karpenko, 2012; Zhang et al., 2019). The classical optimal control theory is based on Pontryagin's maximum principle (Bourdin and Trélat, 2013) or the dynamic programming method (Grover et al., 2020). Pontryagin's maximum principle provides a necessary condition for optimality, and some simple cases can be solved analytically by using this principle. However, in engineering practice the problem may be so complex and large that the use of a high-speed computer is inevitable (Kim et al., 2011; Assif et al., 2020; Andrés-Martínez et al., 2022). The dynamic programming method requires solving a Hamilton–Jacobi–Bellman equation, which is a partial differential equation. For linear dynamical systems, the Hamilton–Jacobi–Bellman equation degenerates into a Riccati equation, which is very easy to solve; however, for nonlinear dynamical systems the analytical solution of the Hamilton–Jacobi–Bellman equation usually cannot be obtained (Bian et al., 2014; Zhang et al., 2016). In order to overcome the above difficulties, numerical computation methods have been proposed for solving optimal control problems. In general, the approaches for obtaining the numerical solution of optimal control problems can be divided into two categories: indirect approaches and direct approaches (Chen-Charpentier and Jackson, 2020; Cots et al., 2018). In the indirect approach, the calculus of variations is usually adopted to obtain the first-order optimality conditions for the optimal control problem. Candidate optimal trajectories, called extremals, are then determined by solving a multiple-point boundary-value problem; each extremal obtained is checked to see whether it is a local maximum, minimum, or saddle point, and the extremal with the lowest cost is selected. In the direct approach, the system state and/or control input of the optimal control problem is discretized in some way. The problem is then transformed into a nonlinear optimization (nonlinear programming) problem, which can be solved by using well-known nonlinear optimization algorithms and high-speed computers.

During the past two decades, many numerical computation methods have been reported for the optimal control problem (Hannemann-Tamás and Marquardt, 2012; Chen et al., 2014; Assassa and Marquardt, 2016; Goverde et al., 2020; Wu et al., 2017; Wu et al., 2018; Wu et al., 2022a; Wu et al., 2022b). Unfortunately, most of these methods are designed for deterministic models, whereas various parameter disturbances or actuator uncertainties must frequently be considered in many practical problems (Kaneba et al., 2022; Li and Shi, 2013; Ostrovsky et al., 2013; Salomon et al., 2014; Wang and Pedrycz, 2016; Yang et al., 2022; Yonezawa et al., 2021; Wu and Zhang, 2022c). For this purpose, this paper considers an optimal control problem for nonlinear chemical processes with uncertainty by introducing stochastic constraints. In general, there are two main difficulties in solving optimal control problems with stochastic constraints (Rafiei and Ricardez-Sandoval, 2018; Paulson et al., 2019; Sartipizadeh and Açkmeşe, 2020). One is that checking the feasibility of a stochastic constraint is usually impossible. The other is that the feasible region of these optimal control problems is usually non-convex. In order to overcome these two difficulties, many methods have been proposed to approximate the stochastic constraints: the original stochastic problem is formulated as a deterministic problem whose solution is guaranteed to satisfy the stochastic constraints. In general, two types of approximation approaches exist in the literature: sampling-based methods and analytical approximation-based methods. Examples include the scenario optimization (SO) approach (Calafiore and Campi, 2006), the sample approximation (SA) method (Luedtke and Ahmed, 2008), the meta-heuristic (MH) algorithm (Poojari and Varghese, 2008), and the robust optimization approximation (ROA) technique (Li and Li, 2015). It should be pointed out that the SO approach and the SA method are two different sampling-based methods. In the SO approach, a set of samples of the stochastic variable is adopted so that the stochastic constraints can be approximately replaced by deterministic constraints. In the SA method, an empirical distribution obtained from a random sample is used in place of the actual distribution and is then used to evaluate the stochastic constraints. The MH algorithm is also a sampling-based method: the stochastic nature is handled by Monte Carlo simulation, whereas the non-convex and nonlinear nature of the optimization problem with stochastic constraints is addressed by a genetic algorithm. Unfortunately, these three methods are designed to obtain feasible solutions without any optimality guarantees. The ROA technique is an analytical approximation method whose idea is to transform the stochastic constraint into a deterministic constraint. Compared with the SO approach, the SA method, and the MH algorithm, it provides a safe analytical approximation, the size of the corresponding problem is independent of the solution reliability, and it needs only a mild assumption on the random distributions. However, this safe approximation may be highly conservative, and the corresponding solution may perform very poorly in practice. In order to overcome the disadvantages of these existing methods, a novel approximation approach is proposed for treating the stochastic constraints.
Its idea is that a novel smooth approximation function is used to construct a subset of the feasible region of this optimal control problem. Convergence results show that the smooth approximation converges uniformly to the stochastic constraints as the adjusting parameter decreases. Finally, in order to illustrate the effectiveness of the proposed numerical computation method, the nonlinear continuous stirred-tank reactor problem (Jensen, 1964; Lapidus and Luus, 1967) is extended by introducing some stochastic constraints. The numerical simulation results show that the proposed numerical computation method is less conservative and achieves stable and robust performance under small perturbations of the initial system state.

The main contributions of this paper can be summarized as follows:

  • An approximation-based approach is proposed and used to approximate the stochastic constraints. Further, the convergence analysis shows that the approximation function and the corresponding feasible set converge uniformly to those of the original optimal control problem.

  • The nonlinear continuous stirred-tank reactor problem (Jensen, 1964; Lapidus and Luus, 1967) is further extended by considering some stochastic constraints. Subsequently, this stochastic-constrained optimal control problem is solved by using the proposed numerical computation method.

  • The numerical simulation results and comparative studies show that, compared with other existing typical methods, the proposed approach is less conservative and achieves stable and robust performance under small perturbations of the initial system state.

The rest of this paper is organized as follows. In Section 2, we describe the optimal control problem for nonlinear chemical processes with stochastic constraints. An approximation method for the stochastic constraints and its convergence results are presented in Sections 3 and 4, respectively. In Section 5, a numerical computation approach is proposed for solving the approximate deterministic problem. Finally, in Section 6, the nonlinear continuous stirred-tank reactor problem with some stochastic constraints is used to illustrate the effectiveness of our method.


2 Problem formulation

Consider the following optimal control problem for nonlinear chemical processes over the time horizon $[0,T]$ with stochastic constraints:

$\min_{u(t)} \; J(u(t)) = \phi(x(T)) + \int_0^T L(x(t),u(t))\,dt$  (2.1a)
s.t. $\dfrac{dx(t)}{dt} = f(x(t),u(t)),$  (2.1b)
$x(0) = x_0,$  (2.1c)
$\mathrm{Prob}\{g_i(x(t),u(t),\theta) \leq 0\} \geq \varepsilon_i, \quad i = 1,\ldots,q,$  (2.1d)
$u(t) \in U,$  (2.1e)

where $T$ denotes a given terminal time; $x(t) \in \mathbb{R}^m$ denotes the system state variable; $u(t) \in U \subset \mathbb{R}^n$ denotes the control input variable; $U$ denotes a compact set; $\theta$ denotes a random variable with a known probability density function $F(\theta)$ supported on the measurable set $\Lambda \subset \mathbb{R}^r$; $\mathrm{Prob}\{\cdot\}$ denotes the probability; $\phi:\mathbb{R}^m \to \mathbb{R}$, $L:\mathbb{R}^m \times U \to \mathbb{R}$, $f:\mathbb{R}^m \times U \to \mathbb{R}^m$, and $g_i:\mathbb{R}^m \times U \times \Lambda \to \mathbb{R}$, $i = 1,\ldots,q$, denote given continuously differentiable functions; $\varepsilon_i \in [0,1]$, $i = 1,\ldots,q$, denote $q$ risk parameters chosen by the decision maker; and Eq. (2.1d) denotes the $q$ stochastic constraints.


3 Approximation of stochastic constraints

In general, during the process of solving problem (2.1a)-(2.1e), the major challenge is obtaining the probability values $\mathrm{Prob}\{g_i(x(t),u(t),\theta) \leq 0\}$, $i = 1,\ldots,q$, for a given control input $u(t)$. However, it is challenging to obtain an exact analytic representation for nonlinear stochastic constraints. Thus, some approximation methods have been developed for solving the optimal control problem for chemical processes with stochastic constraints. Among these approaches, sample average approximation and scenario generation are the two most typical methods. Unfortunately, the solutions obtained by sample average approximation and scenario generation may be infeasible for the original optimal control problem. To overcome this difficulty, it is necessary to develop better approximation methods for dealing with the stochastic constraints described by (2.1d).


3.1 Properties of problem (2.1a)-(2.1e)

Note that the system state variable $x(t)$ depends on the control input $u(t)$ and the random variable $\theta$. Then, a more concise mathematical expression for the stochastic constraints described by (2.1d) can be obtained as follows:

(3.1)
$\mathrm{Prob}\{g_i(x(t),u(t),\theta) \leq 0\} \geq \varepsilon_i \;\Longleftrightarrow\; \mathrm{Prob}\{g_i(u(t),\theta) \leq 0\} \geq \varepsilon_i, \quad i = 1,\ldots,q.$
Then, for any $i = 1,\ldots,q$, the probability function $G_i$ can be defined as follows:
(3.2)
$G_i(u(t)) = \mathrm{Prob}\{g_i(u(t),\theta) \leq 0\}, \quad i = 1,\ldots,q.$
From Equalities (3.1)-(3.2), it follows that
(3.3)
$1 - G_i(u(t)) = \mathrm{Prob}\{g_i(u(t),\theta) > 0\} = E[A(g_i(u(t),\theta))], \quad i = 1,\ldots,q,$
where $E$ denotes the mathematical expectation and $A(g_i(u(t),\theta)) = 0$ if $g_i(u(t),\theta) \leq 0$, and $A(g_i(u(t),\theta)) = 1$ if $g_i(u(t),\theta) > 0$, $i = 1,\ldots,q$. By using Eqs. (3.1)-(3.3), one can obtain the following equivalence relation:
(3.4)
$\mathrm{Prob}\{g_i(u(t),\theta) \leq 0\} \geq \varepsilon_i \;\Longleftrightarrow\; 1 - G_i(u(t)) \leq 1 - \varepsilon_i \;\Longleftrightarrow\; E[A(g_i(u(t),\theta))] \leq 1 - \varepsilon_i,$
for any $i = 1,\ldots,q$. Note that the set $U$ is compact. Thus, the feasible set of problem (2.1a)-(2.1e), $\Delta = \{u(t) \in U : G_i(u(t)) \geq \varepsilon_i, i = 1,\ldots,q\}$, and the set $\Omega = \{u(t) \in U : E[A(g_i(u(t),\theta))] \leq 1 - \varepsilon_i, i = 1,\ldots,q\}$ are compact. Further, from Equality (3.3), we obtain $\Delta = \Omega$, which indicates that the following two problems on the time horizon $[0,T]$:

$\min_{u(t)} J(u(t)) = \phi(x(T)) + \int_0^T L(x(t),u(t))\,dt$  (3.5a)
s.t. $\dfrac{dx(t)}{dt} = f(x(t),u(t)),$  (3.5b)
$x(0) = x_0,$  (3.5c)
$u(t) \in \Delta,$  (3.5d)

$\min_{u(t)} J(u(t)) = \phi(x(T)) + \int_0^T L(x(t),u(t))\,dt$  (3.6a)
s.t. $\dfrac{dx(t)}{dt} = f(x(t),u(t)),$  (3.6b)
$x(0) = x_0,$  (3.6c)
$E[A(g_i(u(t),\theta))] \leq 1 - \varepsilon_i, \quad i = 1,\ldots,q,$  (3.6d)
$u(t) \in U,$  (3.6e)

have the same optimal solution set. Although $E[A(g_i(u(t),\theta))] \leq 1 - \varepsilon_i$, $i = 1,\ldots,q$, gives an exact representation of the stochastic constraints described by (2.1d), it is still difficult to obtain a numerical solution of problem (3.6a)-(3.6e) because of the existence of the functions $A(g_i(u(t),\theta))$, $i = 1,\ldots,q$. Encouragingly, the function $E[A(g_i(u(t),\theta))]$ provides a feasible technical route for constructing a tractable smooth approximation of problem (2.1a)-(2.1e).
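As a sanity check of the equivalence (3.4), the two sides can be compared by Monte Carlo sampling. The sketch below is not taken from the paper; it uses a hypothetical scalar constraint of the same form as (6.4), $g(u,\theta) = u - 4 + \theta$ with $\theta \sim N(0,0.1^2)$ and $\varepsilon = 0.9$, and the control values, sample size, and distribution are illustrative assumptions.

    import numpy as np

    # Monte Carlo check of the equivalence (3.4): Prob{g <= 0} >= eps  <=>  E[A(g)] <= 1 - eps.
    rng = np.random.default_rng(0)
    theta = rng.normal(0.0, 0.1, size=200_000)
    eps = 0.9
    for u in (3.8, 3.95):                                # one feasible, one infeasible control value
        g = u - 4.0 + theta
        prob_ok = np.mean(g <= 0.0)                      # estimate of Prob{g(u, theta) <= 0}
        expect_A = np.mean(g > 0.0)                      # estimate of E[A(g(u, theta))]
        print(u, prob_ok >= eps, expect_A <= 1.0 - eps)  # the two tests always agree

Since the same samples are used for both estimates, the empirical probability equals one minus the empirical expectation of A, mirroring Equality (3.3).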


3.2 Approximation of stochastic constraints

Under the premise of ensuring feasibility, the main task of approximating the stochastic constraints described by (2.1d) is to construct $q$ continuous functions $\Upsilon_i(\alpha,u(t)): (0,+\infty) \times \mathbb{R}^n \to [0,+\infty)$, $i = 1,\ldots,q$, satisfying the following three properties: (1) $E[A(g_i(u(t),\theta))] \leq \Upsilon_i(\alpha,u(t))$, $i = 1,\ldots,q$, for any $\alpha > 0$ and $u(t) \in U$; (2) $\inf_{\alpha>0} \Upsilon_i(\alpha,u(t)) = E[A(g_i(u(t),\theta))]$, $i = 1,\ldots,q$, for any $u(t) \in U$; (3) $\Upsilon_i(\alpha,u(t))$, $i = 1,\ldots,q$, are non-decreasing functions with respect to the parameter $\alpha$.

By using properties (2)-(3) and the continuity of the functions $\Upsilon_i(\alpha,u(t))$, $i = 1,\ldots,q$, with respect to the parameter $\alpha$, it follows that

(3.7)
$\inf_{\alpha>0} \Upsilon_i(\alpha,u(t)) = \lim_{\alpha\to 0^+} \Upsilon_i(\alpha,u(t)), \quad i = 1,\ldots,q.$

Then, the constraint described by (3.6d) can be replaced by $\inf_{\alpha>0} \Upsilon_i(\alpha,u(t)) \leq 1 - \varepsilon_i$, $i = 1,\ldots,q$. In addition, from property (1) of the functions $\Upsilon_i(\alpha,u(t))$, $i = 1,\ldots,q$, we can obtain that, for any fixed $\alpha > 0$, the solution obtained by solving the following problem on the time horizon $[0,T]$:

$\min_{u(t)} J(u(t)) = \phi(x(T)) + \int_0^T L(x(t),u(t))\,dt$  (3.8a)
s.t. $\dfrac{dx(t)}{dt} = f(x(t),u(t)),$  (3.8b)
$x(0) = x_0,$  (3.8c)
$\Upsilon_i(\alpha,u(t)) \leq 1 - \varepsilon_i, \quad i = 1,\ldots,q,$  (3.8d)
$u(t) \in U,$  (3.8e)

is feasible for problem (3.6a)-(3.6e). Note that the feasible set $\Gamma(\alpha) = \{u(t) \in U : \Upsilon_i(\alpha,u(t)) \leq 1 - \varepsilon_i, i = 1,\ldots,q\}$ of problem (3.8a)-(3.8e) is a subset of the feasible set of problem (3.6a)-(3.6e). Then, the set $\Gamma(\alpha)$ is also a subset of the feasible set of problem (2.1a)-(2.1e).

Suppose that the following assumption is true:

Assumption 3.1. Let $\varepsilon_i \in [0.5,1)$, $i = 1,\ldots,q$. The stochastic constraints described by (2.1d) are said to be regular if, for any $u(t) \in U$ with $\mathrm{Prob}\{g_i(x(t),u(t),\theta) \leq 0\} = \varepsilon_i$, $i = 1,\ldots,q$, there exists a sequence $\{u_k(t)\}_{k\in\mathbb{N}}$ such that $\lim_{k\to+\infty} u_k(t) = u(t)$ and $\mathrm{Prob}\{g_i(x(t),u_k(t),\theta) \leq 0\} > \varepsilon_i$, $i = 1,\ldots,q$.

Then, the relationship between $\Delta$, $\Omega$, and $\Gamma(\alpha)$ can be provided by the following theorem:

Theorem 3.1. Suppose that the set $U$ is compact, the functions $\Upsilon_i(\alpha,u(t))$, $i = 1,\ldots,q$, are continuous and monotone with respect to the parameter $\alpha$ and satisfy properties (1)-(3), and the stochastic constraints described by (2.1d) are regular. Then, for any sequence $\{\alpha_k\}_{k\in\mathbb{N}}$ with $\lim_{k\to+\infty}\alpha_k = 0$, the following equality is true:

(3.9)
$\lim_{k\to+\infty} \Gamma(\alpha_k) = \Delta = \Omega.$

Proof. Note that $\lim_{k\to+\infty}\alpha_k = 0$. By using property (3) of the functions $\Upsilon_i(\alpha,u(t))$, $i = 1,\ldots,q$, together with the compactness of the set $U$ and the continuity of the functions $\Upsilon_i(\alpha,u(t))$, $i = 1,\ldots,q$, we can obtain that the sequence $\{\Gamma(\alpha_k)\}_{k\in\mathbb{N}}$ satisfies $\Gamma(\alpha_k) \subseteq \Gamma(\alpha_{k+1})$, $k = 1,2,\ldots$. Further, it follows that

(3.10)
$\liminf_{k\to+\infty} \Gamma(\alpha_k) = \limsup_{k\to+\infty} \Gamma(\alpha_k).$

Clearly, $\limsup_{k\to+\infty} \Gamma(\alpha_k) \subseteq \Delta$ is true. Thus, since $\Delta = \Omega$, we only need to show that $\Delta \subseteq \limsup_{k\to+\infty} \Gamma(\alpha_k)$.

Suppose that $u(t) \in \Delta = \Omega$. Then, there are the following two cases:

Case 1: $E[A(g_i(u(t),\theta))] < 1 - \varepsilon_i$, $i = 1,\ldots,q$.

Note that $E[A(g_i(u(t),\theta))] = \inf_{\alpha>0}\Upsilon_i(\alpha,u(t))$, $i = 1,\ldots,q$, and that the functions $\Upsilon_i(\alpha,u(t))$, $i = 1,\ldots,q$, are continuous and non-decreasing with respect to the parameter $\alpha$. Then, there exists a $\bar{k}$ such that the following inequality is true:

(3.11)
$\Upsilon_i(\alpha_k,u(t)) < 1 - \varepsilon_i, \quad i = 1,\ldots,q,$

for all $k \geq \bar{k}$. Thus, $u(t) \in \Gamma(\alpha_k)$ for all $k \geq \bar{k}$, which implies that $u(t) \in \limsup_{k\to+\infty}\Gamma(\alpha_k)$.

Case 2: $E[A(g_i(u(t),\theta))] = 1 - \varepsilon_i$, $i = 1,\ldots,q$.

By using Assumption 3.1, there exists a sequence $\{u_k(t)\}_{k\in\mathbb{N}}$ in the set $U$ such that $\lim_{k\to+\infty}u_k(t) = u(t)$ and $\mathrm{Prob}\{g_i(x(t),u_k(t),\theta) \leq 0\} > \varepsilon_i$, $i = 1,\ldots,q$. This indicates that $E[A(g_i(u_k(t),\theta))] < 1 - \varepsilon_i$, $i = 1,\ldots,q$. Further, by using property (2) of the functions $\Upsilon_i(\alpha,u(t))$, $i = 1,\ldots,q$, it follows that for any $u_k(t)$ there exists a sufficiently small parameter $\alpha_k$ such that $E[A(g_i(u_k(t),\theta))] \leq \Upsilon_i(\alpha_k,u_k(t)) < 1 - \varepsilon_i$, $i = 1,\ldots,q$. Thus, $u_k(t) \in \Gamma(\alpha_k)$ for all $k$, which indicates that $u(t) \in \limsup_{k\to+\infty}\Gamma(\alpha_k)$.

Properties (2)-(3) of the functions $\Upsilon_i(\alpha,u(t))$, $i = 1,\ldots,q$, imply that the parameter $\alpha_k$ can be selected such that $\{\alpha_k\}_{k\in\mathbb{N}}$ is a monotonically decreasing sequence satisfying $\lim_{k\to+\infty}\alpha_k = 0$. Thus, $\lim_{k\to+\infty}\Gamma(\alpha_k) = \Delta = \Omega$. This completes the proof of Theorem 3.1.

It should be pointed out that the equality $\limsup_{\alpha_k\to 0^+}\Gamma(\alpha_k) = \lim_{\alpha_k\to 0^+}\Gamma(\alpha_k) = \Omega = \Delta$ is also true because of the compactness of the sets $\Gamma(\alpha_k)$ and the monotonicity property $\Gamma(\alpha_k) \subseteq \Gamma(\alpha_{k+1})$ for all $0 < \alpha_{k+1} < \alpha_k$. The so-called concentration-of-measure inequalities from probability theory can be used as a basis for constructing the smooth approximation functions $\Upsilon_i(\alpha,u(t))$, $i = 1,\ldots,q$, which satisfy properties (1)-(3) (Pinter, 1989). For example, functions $\Upsilon_i(\alpha,u(t))$, $i = 1,\ldots,q$, satisfying properties (1)-(3) can be defined as $\Upsilon_i(\alpha,u(t)) = E[B(\alpha,g_i(u(t),\theta))]$, $i = 1,\ldots,q$, where $B: (0,+\infty)\times\mathbb{R} \to [0,+\infty)$ is a given function. This provides a feasible technical route for constructing a tractable smooth approximation of problem (2.1a)-(2.1e). There exist many functions $B(\alpha,z)$ such that the functions $\Upsilon_i(\alpha,u(t)) = E[B(\alpha,g_i(u(t),\theta))]$, $i = 1,\ldots,q$, satisfy properties (1)-(3). For example,

  • $B_1(\alpha,z) = e^{z/\alpha}$, $\alpha > 0$;

  • $B_2(\alpha,z) = \dfrac{1}{\alpha}\int_{-\infty}^{0}\Psi\!\left(\dfrac{s+z}{\alpha}\right)ds$, $\alpha > 0$, where $\Psi(z)$ is a non-negative, integrable, and symmetric function (i.e., $\Psi(z) = \Psi(-z)$ and $\int_{-\infty}^{+\infty}\Psi(z)\,dz = 1$).

Unfortunately, the functions $B_1(\alpha,z)$ and $B_2(\alpha,z)$ cannot be applied directly to the computation of general non-convex stochastic constraints (Pinter, 1989). To tackle this issue, this paper uses the following function

(3.12)
$B(\alpha,z) = \dfrac{1 + \alpha v_1}{1 + \alpha v_2 e^{-z/\alpha}},$

to construct the functions $\Upsilon_i(\alpha,u(t))$, $i = 1,\ldots,q$, such that $\Upsilon_i(\alpha,u(t)) = E[B(\alpha,g_i(u(t),\theta))]$, $i = 1,\ldots,q$, where $v_1 > 0$ and $v_2 > 0$ are two given constants with $v_2 \leq v_1$, and $0 < \alpha < 1$. The function $B(\alpha,z)$ defined by (3.12) has some nice properties, which allow us to obtain the desired approximation.
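For concreteness, the approximation function (3.12) and the Monte Carlo estimate of $\Upsilon_i(\alpha,u(t)) = E[B(\alpha,g_i(u(t),\theta))]$ can be sketched in a few lines of Python. The snippet below is an illustration only: the constraint $g(u,\theta) = u - 4 + \theta$ mirrors (6.4), the control value $u = 3.9$ and the disturbance distribution are hypothetical, and $v_1 = 1$, $v_2 = 0.3$ are the illustrative values used in Fig. 3.1.

    import numpy as np

    def B(alpha, z, v1=1.0, v2=0.3):
        """Smooth over-approximation of the indicator 1{z > 0}; Eq. (3.12)."""
        return (1.0 + alpha * v1) / (1.0 + alpha * v2 * np.exp(-z / alpha))

    def upsilon(alpha, g_samples):
        """Monte Carlo estimate of Upsilon(alpha, u) = E[B(alpha, g(u, theta))]."""
        return np.mean(B(alpha, np.asarray(g_samples)))

    rng = np.random.default_rng(0)
    theta = rng.normal(0.0, 0.1, size=100_000)       # assumed disturbance distribution
    g = 3.9 - 4.0 + theta                            # g(u, theta) at the hypothetical u = 3.9
    print(np.mean(g > 0.0))                          # empirical E[A(g)], the violation probability
    for alpha in (0.1, 0.03, 0.01):
        print(alpha, upsilon(alpha, g))              # decreases toward E[A(g)] as alpha -> 0+

In line with properties (1)-(2), each printed value of upsilon upper-bounds the empirical violation probability and approaches it as $\alpha$ is reduced.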

Theorem 3.2. If $0 < v_2 \leq v_1$ and $\alpha > 0$, then the function $B(\alpha,z)$ possesses the following two properties: (1) $B(\alpha,z) > 0$ for any value of $z$; (2) $B(\alpha,z) \geq 1$ for $z \geq 0$.

Proof. (1) Note that $v_1 > 0$, $v_2 > 0$, and $\alpha > 0$. Then, from $e^{-z/\alpha} > 0$ for $z\in\mathbb{R}$, it follows that $B(\alpha,z) > 0$ for any value of $z$. (2) Clearly, one can obtain that

(3.13)
$\alpha v_2 e^{-z/\alpha} \leq \alpha v_2 \leq \alpha v_1,$

due to $0 < v_2 \leq v_1$, $\alpha > 0$, and $e^{-z/\alpha} \leq 1$ for $z \geq 0$. By using Inequality (3.13), it follows that

(3.14)
$B(\alpha,z) = \dfrac{1 + \alpha v_1}{1 + \alpha v_2 e^{-z/\alpha}} \geq 1,$

because $1 + \alpha v_2 e^{-z/\alpha} \leq 1 + \alpha v_1$. This completes the proof of Theorem 3.2.

Theorem 3.3. If $v_1$ and $v_2$ satisfy the condition $0 < v_2 \leq \dfrac{v_1}{1+v_1}$, then the function $B(\alpha,z)$ possesses the following three properties: (1) $B(\alpha,z)$ is strictly monotonically increasing with respect to $z\in\mathbb{R}$; (2) $B(\alpha,z)$ is non-decreasing with respect to $\alpha\in(0,1)$; (3) $B(\alpha,z)$ satisfies the following equality:

(3.15)
$\lim_{\alpha\to 0^+} B(\alpha,z) = \begin{cases} 1, & \text{if } z \geq 0, \\ 0, & \text{if } z < 0, \end{cases}$

for $z \in (-\infty,-\eta] \cup [0,+\infty)$ and arbitrary $\eta > 0$.

Proof. (1) Note that the function $e^{-z/\alpha}$ is strictly decreasing with respect to $z\in\mathbb{R}$. Then, we can obtain that $B(\alpha,z)$ is strictly monotonically increasing with respect to $z\in\mathbb{R}$. (2) By using the definition of $B(\alpha,z)$ described by (3.12), the partial derivative of $B(\alpha,z)$ with respect to $\alpha$ is given by

(3.16)
$\dfrac{\partial B(\alpha,z)}{\partial\alpha} = \dfrac{v_1}{1 + \alpha v_2 e^{-z/\alpha}} + \dfrac{1 + \alpha v_1}{\left(1 + \alpha v_2 e^{-z/\alpha}\right)^2}\left(-v_2 e^{-z/\alpha} - v_2\dfrac{z}{\alpha}e^{-z/\alpha}\right) = \dfrac{1}{\left(1 + \alpha v_2 e^{-z/\alpha}\right)^2}\left[v_1\left(1 + \alpha v_2 e^{-z/\alpha}\right) - v_2\left(1 + \alpha v_1\right)\left(1 + \dfrac{z}{\alpha}\right)e^{-z/\alpha}\right].$

Applying the inequalities $0 < v_2 \leq \dfrac{v_1}{1+v_1}$, $\alpha v_2 e^{-z/\alpha} > 0$, and $(1+s)e^{-s} \leq 1$, $s\in\mathbb{R}$, to Equality (3.16) yields

(3.17)
$\dfrac{\partial B(\alpha,z)}{\partial\alpha} \geq \dfrac{v_1 - v_2\left(1 + \alpha v_1\right)}{\left(1 + \alpha v_2 e^{-z/\alpha}\right)^2} \geq \dfrac{v_1 - v_2\left(1 + v_1\right)}{\left(1 + \alpha v_2 e^{-z/\alpha}\right)^2} \geq \dfrac{v_1 - \dfrac{v_1}{1+v_1}\left(1 + v_1\right)}{\left(1 + \alpha v_2 e^{-z/\alpha}\right)^2} = 0.$

Inequality (3.17) implies that $B(\alpha,z)$ is non-decreasing with respect to $\alpha\in(0,1)$. (3) By using the definition of $B(\alpha,z)$ described by (3.12), we can obtain the following two inequalities:

(3.18)
$B(\alpha,z) - 1 = \dfrac{1 + \alpha v_1}{1 + \alpha v_2 e^{-z/\alpha}} - 1 = \dfrac{\alpha v_1 - \alpha v_2 e^{-z/\alpha}}{1 + \alpha v_2 e^{-z/\alpha}} \leq \alpha v_1, \quad \text{for } z \geq 0,$

(3.19)
$0 < B(\alpha,z) \leq \dfrac{1 + \alpha v_1}{1 + \alpha v_2 e^{\varepsilon/\alpha}}, \quad \text{for } z \leq -\varepsilon,$

due to $\alpha v_2 e^{-z/\alpha} > 0$ and the fact that $e^{-z/\alpha}$ is monotonically decreasing with respect to $z$. In addition, we have

(3.20)
$\lim_{\alpha\to 0^+}\dfrac{1 + \alpha v_1}{1 + \alpha v_2 e^{\varepsilon/\alpha}} = 0,$

due to $\lim_{\alpha\to 0^+}\alpha e^{\varepsilon/\alpha} = +\infty$. Then, by using (3.18)-(3.20), it follows that the function $B(\alpha,z)$ satisfies Equality (3.15). This completes the proof of Theorem 3.3.

Assumption 3.2. For any fixed $u(t)\in U$, $\mathrm{Prob}\{\Xi(u(t))\} = 0$, where $\mathrm{Prob}\{\cdot\}$ denotes the probability and $\Xi(u(t)) = \{\theta\in\Lambda : g_i(u(t),\theta) = 0, i = 1,\ldots,q\}$.

Now, by using properties (2)-(3) of the functions $\Upsilon_i(\alpha,u(t))$, $i = 1,\ldots,q$, the definition of $B(\alpha,z)$, the definition of $A(g_i(u(t),\theta))$, $i = 1,\ldots,q$, and Theorem 3.3, the following corollary can be obtained directly.

Corollary 3.1. If $v_1$ and $v_2$ satisfy the condition $0 < v_2 \leq \dfrac{v_1}{1+v_1}$, Assumption 3.2 is true, and the variable $z$ is replaced by $g_i(u(t),\theta)$, $i = 1,\ldots,q$, in Theorem 3.3, then

(3.21)
$\lim_{\alpha\to 0^+}\Upsilon_i(\alpha,u(t)) = \lim_{\alpha\to 0^+}E[B(\alpha,g_i(u(t),\theta))] = E[A(g_i(u(t),\theta))] = 1 - G_i(u(t)), \quad i = 1,\ldots,q.$

Define a function $A(z)$, $z\in\mathbb{R}$, as follows:

(3.22)
$A(z) = \begin{cases} 0, & \text{if } z \leq 0, \\ 1, & \text{if } z > 0. \end{cases}$

Then, the approximation level of $B(\alpha,z)$ with $v_1 = 1$ and $v_2 = 0.3$ is illustrated in Fig. 3.1, which implies that the approximation error can be controlled by adjusting the value of $\alpha$.

Fig. 3.1
$A(z)$ and $B(\alpha,z)$ with $v_1 = 1$, $v_2 = 0.3$, and different values of $\alpha$.
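A figure in the style of Fig. 3.1 can be reproduced with a short script; the specific $\alpha$ values and plotting range below are assumptions, while $v_1 = 1$ and $v_2 = 0.3$ follow the caption above.

    import numpy as np
    import matplotlib.pyplot as plt

    v1, v2 = 1.0, 0.3
    z = np.linspace(-1.0, 1.0, 1001)
    A = (z > 0).astype(float)                        # indicator A(z), Eq. (3.22)
    plt.plot(z, A, "k--", label="A(z)")
    for alpha in (0.5, 0.1, 0.02):
        Bz = (1 + alpha * v1) / (1 + alpha * v2 * np.exp(-z / alpha))
        plt.plot(z, Bz, label=f"B(alpha={alpha}, z)")
    plt.xlabel("z"); plt.legend(); plt.show()

As the plot shows, $B(\alpha,z)$ stays above the indicator and tightens around it as $\alpha$ decreases, consistent with Theorems 3.2 and 3.3.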


4 Convergence analysis

For a fixed value of the parameter $\alpha_k\in(0,1)$, an optimal control problem on the time horizon $[0,T]$ is introduced as follows:

$\min_{u(t)} J(u(t)) = \phi(x(T)) + \int_0^T L(x(t),u(t))\,dt$  (4.1a)
s.t. $\dfrac{dx(t)}{dt} = f(x(t),u(t)),$  (4.1b)
$x(0) = x_0,$  (4.1c)
$u(t)\in\Gamma(\alpha_k),$  (4.1d)

where $\Gamma(\alpha_k) = \{u(t)\in U : \Upsilon_i(\alpha_k,u(t)) \leq 1 - \varepsilon_i, i = 1,\ldots,q\}$ denotes the feasible region. Suppose that $\{\alpha_k\}_{k=1}^{+\infty}\subset(0,1)$ is a decreasing sequence (i.e., $\alpha_{k+1} < \alpha_k$) satisfying $\lim_{k\to+\infty}\alpha_k = 0$. This leads to a sequence of optimal control problems (i.e., problem (4.1a)-(4.1d) with $k\in\{1,2,\ldots\}$). Note that $J(\cdot)$ and $\Upsilon_i(\alpha_k,\cdot)$, $i = 1,\ldots,q$, are continuous on the compact set $U$. Thus, $\Gamma(\alpha_k)$ is a compact set and problem (4.1a)-(4.1d) attains an optimal solution $u_k(t)$ in $\Gamma(\alpha_k)$. In addition, $J(\cdot)$ and $\Upsilon_i(\alpha_k,\cdot)$, $i = 1,\ldots,q$, are also continuously differentiable. Thus, the solution of problem (4.1a)-(4.1d) can be obtained by using any gradient-based numerical optimization algorithm. Our main interest now is the relation between problems (4.1a)-(4.1d) and (2.1a)-(2.1e).


4.1 Convergence analysis of feasible regions

By using property (2) of $\Upsilon_i(\alpha_k,u(t))$, $i = 1,\ldots,q$, it follows that $\Gamma(\alpha)\subseteq\Delta$ for any $\alpha\in(0,1)$. In particular, the sequence $\{\Gamma(\alpha_k)\}_{k=1}^{+\infty}$ satisfies $\limsup_{k\to+\infty}\Gamma(\alpha_k)\subseteq\Delta$ for any sequence $\{\alpha_k\}_{k=1}^{+\infty}$ with $\lim_{k\to+\infty}\alpha_k = 0$. Further, $\lim_{k\to+\infty}\Gamma(\alpha_k) = \Delta$ is guaranteed under Assumption 3.1 (see Theorem 3.1). Next, the following theorem shows that the feasible regions $\Gamma(\alpha_k)$ of problem (4.1a)-(4.1d) with $k\in\{1,2,\ldots\}$ converge uniformly to the feasible region $\Delta$ of problem (2.1a)-(2.1e).

Theorem 4.1. Suppose that $\{\alpha_k\}_{k=1}^{+\infty}\subset(0,1)$ is a sequence satisfying $\lim_{k\to+\infty}\alpha_k = 0$. Then, Assumption 3.1 indicates that

(4.2)
$\lim_{k\to+\infty} D(\Gamma(\alpha_k),\Delta) = 0,$

where $D(\cdot,\cdot)$ denotes the Hausdorff distance.

Proof. Note that $\{\Gamma(\alpha_k)\}_{k=1}^{+\infty}$ is a sequence of compact sets, $\Gamma(\alpha_k)\subseteq U$, the set $U$ is compact, $\lim_{k\to+\infty}\Gamma(\alpha_k) = \Delta$ (see Theorem 3.1), and the set $\Delta$ is also compact. Then, by using the definition of the Hausdorff distance, we can obtain that $\lim_{k\to+\infty} D(\Gamma(\alpha_k),\Delta) = 0$. This completes the proof of Theorem 4.1.
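The shrinking Hausdorff distance in (4.2) can be observed numerically on a simple example. The sketch below is an illustration, not part of the paper: it discretizes the admissible set of a single scalar control subject to one chance constraint of the form (6.4), with assumed values $\theta\sim N(0,0.1^2)$, $\varepsilon = 0.9$, $v_1 = 1$, $v_2 = 0.3$, and compares $\Gamma(\alpha)$ with the exact feasible set $\Delta$.

    import numpy as np
    from scipy.spatial.distance import directed_hausdorff
    from scipy.stats import norm

    eps, v1, v2 = 0.9, 1.0, 0.3
    u_grid = np.linspace(3.0, 4.0, 401).reshape(-1, 1)   # candidate control values
    rng = np.random.default_rng(0)
    theta = rng.normal(0.0, 0.1, size=10_000)

    def B(alpha, z):
        return (1.0 + alpha * v1) / (1.0 + alpha * v2 * np.exp(-z / alpha))

    # Delta: exact feasible set {u : Prob(u - 4 + theta <= 0) >= eps}
    delta = u_grid[norm.cdf(4.0 - u_grid[:, 0], scale=0.1) >= eps]

    def hausdorff(A, C):
        return max(directed_hausdorff(A, C)[0], directed_hausdorff(C, A)[0])

    for alpha in (0.1, 0.03, 0.01):
        g = u_grid - 4.0 + theta                         # shape (401, 10000) by broadcasting
        ups = np.mean(B(alpha, g), axis=1)               # Upsilon(alpha, u) on the grid
        gamma = u_grid[ups <= 1.0 - eps]                 # Gamma(alpha) on the grid
        print(alpha, hausdorff(gamma, delta))            # decreases as alpha -> 0+

Since in theory $\Gamma(\alpha)\subseteq\Delta$, the distance is driven by the conservative gap at the constraint boundary, which closes as $\alpha$ decreases.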

Remark 4.1. Note that the results of Theorems 3.1 and 4.1 hold for any sequence $\{\alpha_k\}_{k=1}^{+\infty}$ with $\lim_{k\to+\infty}\alpha_k = 0$ and $\alpha_{k+1} < \alpha_k$, $k = 1,2,\ldots$. Then, these results can be summarized as follows:

(4.3)
$\inf_{\alpha\in(0,1)} D(\Gamma(\alpha),\Delta) = \limsup_{\alpha\to 0^+} D(\Gamma(\alpha),\Delta) = 0,$

where $\Gamma(\alpha)$ is the feasible region of problem (4.1a)-(4.1d) (with $\alpha_k$ replaced by $\alpha$).

In general, it is challenging to obtain an exact analytic representation of the stochastic constraints described by (2.1d). In addition, it is also very difficult to solve problem (2.1a)-(2.1e) numerically on $\Delta$ directly. Fortunately, the above analysis and discussion show that $\Gamma(\alpha)$ can be made arbitrarily close to $\Delta$ and equals $\Delta$ in the limit. This implies that we can obtain the numerical solution of problem (2.1a)-(2.1e) on $\Gamma(\alpha_k)$ with $\lim_{k\to+\infty}\alpha_k = 0$.


4.2 Convergence analysis of approximate solutions

From the definition of problem (4.1a)-(4.1d) and properties (1)-(3) of $\Upsilon_i(\alpha,u(t))$, $i = 1,\ldots,q$, it follows that any solution $u_k(t)$ of problem (4.1a)-(4.1d) is a feasible solution of problem (3.6a)-(3.6e). Further, for any $\alpha_k$, the optimal objective value of problem (4.1a)-(4.1d) is an upper bound on the optimal objective value of problem (2.1a)-(2.1e). Next, the following theorem shows that any limit point of the local optimal solution sequence $\{u_k(t)\}_{k=1}^{+\infty}$ of problem (4.1a)-(4.1d) is also a local optimal solution of problem (2.1a)-(2.1e).

Theorem 4.2. Suppose that $\{u_k(t)\}_{k=1}^{+\infty}$ is a sequence such that $u_k(t)$ is a local optimal solution of problem (4.1a)-(4.1d) for any $k\in\{1,2,\ldots\}$. Then, there exists a subsequence $\{u_{k_j}(t)\}_{k_j=1}^{+\infty}$ of $\{u_k(t)\}_{k=1}^{+\infty}$ such that $\lim_{k_j\to+\infty}u_{k_j}(t) = u^*(t)$, and there exists an open ball $C(u^*(t))$ around $u^*(t)$ such that $u^*(t)\in\Delta\cap C(u^*(t))$ and $u^*(t)$ is a local optimal solution of problem (2.1a)-(2.1e). Conversely, if $\tilde{u}(t)$ is a strict local optimal solution of problem (2.1a)-(2.1e), then there exists a local optimal solution sequence $\{u_k(t)\}_{k=1}^{+\infty}$ of problem (4.1a)-(4.1d) such that $\lim_{k\to+\infty}u_k(t) = \tilde{u}(t)$.

Proof. Note that the set $U$ is compact and $\{u_k(t)\}_{k=1}^{+\infty}\subset U$. Thus, there exists a subsequence $\{u_{k_j}(t)\}_{k_j=1}^{+\infty}$ of $\{u_k(t)\}_{k=1}^{+\infty}$ such that $\lim_{k_j\to+\infty}u_{k_j}(t) = u^*(t)$. Since $u_{k_j}(t)\in\Gamma(\alpha_{k_j})$ and $\lim_{k\to+\infty}\Gamma(\alpha_k) = \Delta$ (see Theorem 3.1), one can obtain that $u^*(t)\in\Delta$. Further, for sufficiently large $j\in\{1,2,\ldots\}$, there exists a ball $C(u^*(t))$ such that $u_{k_j}(t)\in\Delta\cap C(u^*(t))$ is a local optimal solution of problem (4.1a)-(4.1d).

Suppose that $u^*(t)\in\Delta\cap C(u^*(t))$ is not a local optimal solution of problem (2.1a)-(2.1e). This indicates that there exists a $\bar{u}(t)\in\Delta\cap C(u^*(t))$ such that

(4.4)
$J(u^*(t)) > J(\bar{u}(t)).$

Note that $\Gamma(\alpha_k)$ is the feasible region of problem (4.1a)-(4.1d) (with $\alpha$ replaced by $\alpha_k$) and $U$ is a compact set. From Theorem 3.1, it follows that there exists a sequence $\{y_k(t)\}_{k=1}^{+\infty}$ such that $y_k(t)\in\Gamma(\alpha_k)\cap C(u^*(t))$ and $\lim_{k\to+\infty}y_k(t) = \bar{u}(t)$, where $\bar{u}(t)$ is feasible for problem (2.1a)-(2.1e). Then, by using the local optimality of $u_{k_j}(t)$, one can obtain that

(4.5)
$J(y_{k_j}(t)) \geq J(u_{k_j}(t)),$

where $\{y_{k_j}(t)\}_{k_j=1}^{+\infty}$ is a subsequence of $\{y_k(t)\}_{k=1}^{+\infty}$. Further, from Inequality (4.5), it follows that $J(\bar{u}(t)) = \lim_{k_j\to+\infty}J(y_{k_j}(t)) \geq \lim_{k_j\to+\infty}J(u_{k_j}(t)) = J(u^*(t))$. This contradicts Inequality (4.4). Thus, $u^*(t)$ is a local optimal solution of problem (2.1a)-(2.1e).

Suppose now that $\tilde{u}(t)$ is a strict local optimal solution of problem (2.1a)-(2.1e), so that there exists a bounded ball $C(\tilde{u}(t))$ with $J(u(t)) > J(\tilde{u}(t))$ for any $u(t)\in\bar{C}(\tilde{u}(t))$, $u(t)\neq\tilde{u}(t)$, for which the inequality constraints $G_i(u(t)) \geq \varepsilon_i$, $i = 1,\ldots,q$, hold, where $\bar{C}$ denotes the closure of $C$. Further, let $J(u(t)) \geq J(u_k(t))$, $k = 1,2,\ldots$, for any $u(t)\in\bar{C}(\tilde{u}(t))$ with $1 - \Upsilon_i(\alpha_k,u(t)) \geq \varepsilon_i$, $i = 1,\ldots,q$. By the compactness of $\bar{C}$, it follows that there exists a sequence $\{u_k(t)\}_{k=1}^{+\infty}$ such that $\lim_{k\to+\infty}u_k(t) = u^*(t)$. Note that $\lim_{k\to+\infty} D(\Gamma(\alpha_k)\cap\bar{C}(\tilde{u}(t)),\,\Delta\cap\bar{C}(\tilde{u}(t))) = 0$. Thus, by the continuity of the objective function $J(\cdot)$, it follows that $J(u(t)) \geq J(u^*(t))$ for any $u(t)\in\bar{C}(\tilde{u}(t))$ with $G_i(u(t)) \geq \varepsilon_i$, $i = 1,\ldots,q$. Thus, $J(u^*(t)) = J(\tilde{u}(t))$. If $u^*(t)\neq\tilde{u}(t)$, this contradicts the fact that $\tilde{u}(t)$ is a strict local optimal solution of problem (2.1a)-(2.1e). Thus, the sequence $\{u_k(t)\}_{k=1}^{+\infty}$ converges to $\tilde{u}(t)$, where $u_k(t)$ is a local optimal solution of problem (4.1a)-(4.1d). This completes the proof of Theorem 4.2.


5 Solving problem (4.1a)-(4.1d)

In general, it is challenging to obtain an analytical solution of problem (4.1a)-(4.1d). To solve this problem numerically, an important strategy is to discretize the dynamical system such that the original optimal control problem can be transformed into a nonlinear parameter optimization problem with stochastic constraints.


5.1 Time domain transformation

To construct the method for solving problem (4.1a)-(4.1d), the time domain is first transformed from $t\in[0,T]$ to $s\in[-1,1]$ by using the following scaling transformation:

(5.1)
$t = \dfrac{T}{2}(s + 1).$

Then, by using the scaling transformation described by (5.1), problem (4.1a)-(4.1d) can be transformed into the following equivalent form:

$\min_{u(s)} J(u(s)) = \phi(x(1)) + \dfrac{T}{2}\int_{-1}^{+1} L(x(s),u(s))\,ds$  (5.2a)
s.t. $\dfrac{dx(s)}{ds} = \dfrac{T}{2} f(x(s),u(s)),$  (5.2b)
$x(-1) = x_0,$  (5.2c)
$u(s)\in\Gamma(\alpha_k),$  (5.2d)

where $\Gamma(\alpha_k) = \{u(s)\in U : \Upsilon_i(\alpha_k,u(s)) \leq 1 - \varepsilon_i, i = 1,\ldots,q\}$.
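As a quick check of the affine map (5.1), the following lines verify that $s = -1$ and $s = +1$ correspond to $t = 0$ and $t = T$; the value $T = 0.25$ is borrowed from the CSTR horizon of Section 6 purely for concreteness.

    T = 0.25                                   # CSTR horizon from Section 6 (for illustration)
    to_t = lambda s: T / 2.0 * (s + 1.0)       # Eq. (5.1)
    to_s = lambda t: 2.0 * t / T - 1.0         # its inverse
    assert abs(to_t(-1.0)) < 1e-12 and abs(to_t(1.0) - T) < 1e-12 and abs(to_s(0.125)) < 1e-12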


5.2 Problem discretization

Numerical computation approaches for optimal control problems can be divided into two categories: indirect approaches and direct approaches. The idea of indirect approaches is that the optimality conditions are established by using variational techniques, and the original optimal control problem is transformed into a Hamiltonian boundary-value problem. Unfortunately, the Hamiltonian boundary-value problem is usually ill-conditioned because of hypersensitivity. The idea of direct approaches is that the control input and/or system state are parameterized by using basis functions, and the original optimal control problem is transformed into a finite-dimensional nonlinear optimization problem. Although direct approaches are usually less ill-conditioned, many existing approaches are either inaccurate or provide no costate information. To overcome this difficulty, the Gaussian quadrature orthogonal collocation approach has been developed in recent years for solving optimal control problems. Its idea is that the system state and control input are approximated by using a basis of global polynomials and are discretized at a set of discretization points. In this paper, Legendre–Gauss–Radau points (Kameswaran and Biegler, 2008) are adopted as the discretization points. Suppose that $s_1,\ldots,s_M$ are $M$ Legendre–Gauss–Radau points, where $s_1 = -1$ and $s_M < +1$. To include the terminal time $s = +1$, the point $s_{M+1} = +1$ is added to the discretization point set. Now, a basis of Lagrange polynomials is defined as follows:

(5.3)
$Q_j(s) = \prod_{\substack{i=1 \\ i\neq j}}^{M+1}\dfrac{s - s_i}{s_j - s_i}, \quad j = 1,\ldots,M+1.$
Further, the system state and control input can be discretized as
(5.4)
$x(s) \approx X(s) = \sum_{j=1}^{M+1} X_j Q_j(s),$
(5.5)
$u(s) \approx \tilde{U}(s) = \sum_{j=1}^{M} \tilde{U}_j Q_j(s),$
where $X_j = X(s_j)$ and $\tilde{U}_j = \tilde{U}(s_j)$. Differentiating Eq. (5.4) and substituting the approximations $X(s)$ and $\tilde{U}(s)$ of the system state $x(s)$ and the control input $u(s)$ into Eq. (5.2b), one can obtain
(5.6)
$\sum_{j=1}^{M+1} X_j\dfrac{dQ_j(s)}{ds} = \dfrac{T}{2} f(X(s),\tilde{U}(s)).$
From Eq. (5.6), it follows that
(5.7)
$\sum_{i=1}^{M+1} X_i\left.\dfrac{dQ_i(s)}{ds}\right|_{s=s_j} = \dfrac{T}{2} f(X_j,\tilde{U}_j), \quad j = 1,\ldots,M.$
Define $D_j$ and $\tilde{X}$ as follows:
(5.8)
$D_j = \left[\left.\dfrac{dQ_1(s)}{ds}\right|_{s=s_j},\ldots,\left.\dfrac{dQ_{M+1}(s)}{ds}\right|_{s=s_j}\right],$
(5.9)
$\tilde{X} = [X_1,\ldots,X_{M+1}]^T.$
Then, Eq. (5.7) can be rewritten in the following equivalent form:
(5.10)
$D_j\tilde{X} = \dfrac{T}{2} f(X_j,\tilde{U}_j), \quad j = 1,\ldots,M.$
Further, the integral term in Eq. (5.2a) can be approximated by
(5.11)
$\int_{-1}^{+1} L(x(s),u(s))\,ds \approx \sum_{j=1}^{M}\beta_j L(X_j,\tilde{U}_j),$
where $\beta_j$ denotes the quadrature weight corresponding to the $j$th discretization point. Finally, problem (5.2a)-(5.2d) can be rewritten as follows:

$\min_{\tilde{U}_j} J(\tilde{U}_j) = \phi(X_{M+1}) + \dfrac{T}{2}\sum_{j=1}^{M}\beta_j L(X_j,\tilde{U}_j)$  (5.12a)
s.t. $D_j\tilde{X} = \dfrac{T}{2} f(X_j,\tilde{U}_j), \quad j = 1,\ldots,M,$  (5.12b)
$E[B(\alpha_k,g_i(\tilde{U}_j,\theta))] \leq 1 - \varepsilon_i, \quad i = 1,\ldots,q,$  (5.12c)
$X_1 = x_0,$  (5.12d)
$\tilde{U}_j\in U, \quad j = 1,\ldots,M,$  (5.12e)

where $E[B(\alpha_k,g_i(\tilde{U}_j,\theta))] = \Upsilon_i(\alpha_k,\tilde{U}_j)$, $i = 1,\ldots,q$.
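One concrete way to evaluate the row vectors $D_j$ in (5.8) is the classical Lagrange differentiation-matrix formula. The sketch below is an implementation assumption (the paper does not specify how the derivatives are computed); it works for any set of distinct nodes, including the $M$ Legendre–Gauss–Radau points augmented with $s_{M+1} = +1$, and computing the Radau points and weights themselves is not shown.

    import numpy as np

    def lagrange_diff_matrix(nodes):
        """D[j, i] = dQ_i/ds at s = nodes[j]; row j of D is the vector D_j in Eq. (5.8).

        Uses the standard formulas
            dQ_i/ds(s_j) = c_j / ((s_j - s_i) * c_i)   for i != j,
            dQ_j/ds(s_j) = sum_{i != j} 1 / (s_j - s_i),
        where c_m = prod_{i != m} (s_m - s_i).
        """
        s = np.asarray(nodes, dtype=float)
        n = s.size
        diff = s[:, None] - s[None, :]
        np.fill_diagonal(diff, 1.0)                  # placeholder to avoid division by zero
        c = np.prod(diff, axis=1)                    # c_m = prod_{i != m}(s_m - s_i)
        D = (c[:, None] / c[None, :]) / diff         # off-diagonal entries
        np.fill_diagonal(D, np.sum(1.0 / diff - np.eye(n), axis=1))
        return D

    # Quick check on four arbitrary nodes: differentiation of s^3 is exact,
    # since a cubic is reproduced exactly by its 4-point Lagrange interpolant.
    s = np.array([-1.0, -0.5, 0.3, 1.0])
    D = lagrange_diff_matrix(s)
    assert np.allclose(D @ s**3, 3.0 * s**2)

The collocation conditions (5.10) then read D[j, :] @ X_tilde = (T/2) * f(X_j, U_j) for the first M rows.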

To solve problem (5.12a)-(5.12e), the Monte Carlo sampling method is adopted to compute the expectation in Eq. (5.12c). Let $\{\theta_l\}_{l=1}^{\tilde{M}}$ be $\tilde{M}$ samples drawn from the distribution $F(\theta)$. Then, problem (5.12a)-(5.12e) can be approximated by

$\min_{\tilde{U}_j} J(\tilde{U}_j) = \phi(X_{M+1}) + \dfrac{T}{2}\sum_{j=1}^{M}\beta_j L(X_j,\tilde{U}_j)$  (5.13a)
s.t. $D_j\tilde{X} = \dfrac{T}{2} f(X_j,\tilde{U}_j), \quad j = 1,\ldots,M,$  (5.13b)
$\dfrac{1}{\tilde{M}}\sum_{l=1}^{\tilde{M}} B(\alpha_k,g_i(\tilde{U}_j,\theta_l)) \leq 1 - \varepsilon_i, \quad i = 1,\ldots,q,$  (5.13c)
$X_1 = x_0,$  (5.13d)
$\tilde{U}_j\in U, \quad j = 1,\ldots,M,$  (5.13e)

which is a static nonlinear programming problem and can be solved by using any gradient-based numerical optimization algorithm.

Remark 5.1. The work (Kameswaran and Biegler, 2008) shows that the solution accuracy can be improved by using a larger number $M$ of Legendre–Gauss–Radau points. However, this may greatly increase the computational burden of the numerical approach developed in this paper. In order to balance the computational cost and the numerical solution accuracy, Section 6.7 (i.e., the processing power analysis) presents a sensitivity analysis used to set the value of the parameter $M$.


5.3 Solving problem (4.1a)-(4.1d)

For simplicity, we suppose that $\tilde{U}$ is defined by $\tilde{U} = [\tilde{U}_1,\ldots,\tilde{U}_M]$, where $\tilde{U}_j\in U$, $j = 1,\ldots,M$. Based on the above analysis and discussion, the following numerical optimization algorithm is proposed for solving problem (4.1a)-(4.1d).

Algorithm 5.1.

Step 1: Initialize $\tilde{U}^0$, choose the tolerance $\epsilon > 0$, and set $\alpha_0 \leftarrow 1\times 10^{-2}$, $p \leftarrow 0$.

Step 2: Generate $\tilde{M}$ samples $\{\theta_l\}_{l=1}^{\tilde{M}}$ following the distribution $F(\theta)$.

Step 3: Solve problem (5.13a)-(5.13e) by using the sequential quadratic programming method, and denote the solution obtained by $\tilde{U}^p$.

Step 4: If $|J(\tilde{U}^p) - J(\tilde{U}^{p-1})| \leq \epsilon$, then stop and go to Step 5. Otherwise, set $p \leftarrow p+1$, $\tilde{U}^p \leftarrow \tilde{U}^{p-1}$, $\alpha_p \leftarrow \alpha_{p-1}/10$, and go to Step 3.

Step 5: Construct the solution $u^*$ of problem (4.1a)-(4.1d) from $\tilde{U}^p$ by using (5.5), and output the corresponding optimal objective function value $J^*$.
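A schematic sketch of Algorithm 5.1's outer loop is given below. To keep the example self-contained it is applied to a deliberately simplified static problem with a single scalar decision variable and one chance constraint of the form (6.4) (maximize $u$ subject to $\mathrm{Prob}\{u - 4 + \theta \leq 0\} \geq 0.9$, $\theta\sim N(0,0.1^2)$, $0\leq u\leq 5$) rather than the full collocation NLP (5.13); SciPy's SLSQP routine stands in for the SQP solver, and the fixed list of $\alpha$ values, sample size, and constants $v_1$, $v_2$ are assumptions.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    theta = rng.normal(0.0, 0.1, size=20_000)        # Step 2: samples of theta
    v1, v2, eps_risk = 1.0, 0.3, 0.9                 # illustrative constants

    def B(alpha, z):
        # smooth over-approximation of the indicator 1{z > 0}, Eq. (3.12);
        # the exponent is clipped only to avoid floating-point overflow
        expo = np.clip(-z / alpha, None, 700.0)
        return (1.0 + alpha * v1) / (1.0 + alpha * v2 * np.exp(expo))

    def solve_nlp(alpha, u_start):
        # Step 3: sample-average version of (5.13) for this toy problem, solved by SLSQP
        cons = {"type": "ineq",
                "fun": lambda u: (1.0 - eps_risk) - np.mean(B(alpha, u[0] - 4.0 + theta))}
        res = minimize(lambda u: -u[0], x0=[u_start], method="SLSQP",
                       bounds=[(0.0, 5.0)], constraints=[cons])
        return res.x[0], -res.fun

    u_p, J_prev, tol = 2.0, -np.inf, 1e-5            # Step 1 (alpha_0 = 1e-2 below)
    for alpha in (1e-2, 1e-3, 1e-4, 1e-5):           # Step 4 divides alpha by 10 each pass
        u_p, J_p = solve_nlp(alpha, u_p)             # warm start from the previous solution
        if abs(J_p - J_prev) <= tol:                 # Step 4: stopping test
            break
        J_prev = J_p
    print(u_p)                                       # close to the exact optimum 4 - 0.1 * 1.2816

As $\alpha$ shrinks, the reported control approaches the exact chance-constrained optimum from the conservative side, which is the behaviour described by Theorem 4.2.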

Remark 5.2. The method proposed in this paper is designed for problem (2.1a)-(2.1e), in which the random variable $\theta$ may have an arbitrary probability distribution. Note that any limit point of the local optimal solution sequence of problem (4.1a)-(4.1d) is also a local optimal solution of problem (2.1a)-(2.1e) (see Theorem 4.2). Then, a solution of problem (2.1a)-(2.1e) is also obtained as long as the parameter $\alpha_k$ is sufficiently small. Thus, Algorithm 5.1 can be used to obtain a solution of problem (2.1a)-(2.1e) in which the random variable $\theta$ has an arbitrary probability distribution.


6 Numerical results

In this section, the nonlinear continuous stirred-tank reactor problem (Jensen, 1964; Lapidus and Luus, 1967) is further extended by introducing some stochastic constraints to illustrate the effectiveness of the approach developed in Sections 2–5. All numerical simulation results are obtained under Windows 10 on an Intel Core i7-1065G7 CPU (3.9 GHz) with 8.00 GB of RAM.


6.1 Nonlinear continuous stirred-tank reactor

As shown in Fig. 6.1, the nonlinear continuous stirred-tank reactor problem describes several simultaneous chemical reactions taking place in an isothermal nonlinear continuous stirred-tank reactor. The control variables are an electrical energy input used to promote a photochemical reaction and three feed stream flow rates. The objective of this problem is to maximize the economic benefit by choosing the control variables of the nonlinear continuous stirred-tank reactor. This problem can be described by an optimal control problem with stochastic constraints as follows:

Fig. 6.1
Schematic diagram of a nonlinear continuous stirred-tank reactor.

Choose the control input $u(t)$ over the time horizon $t\in[0, 0.2500]$ to maximize the objective function

(6.1)
$J(u(t)) = x_6(0.2500),$
subject to the nonlinear continuous-time dynamical system

$\dfrac{dx_1(t)}{dt} = u_4(t) - \big(u_2(t) + u_3(t) + u_4(t)\big)x_1(t) - 73.0000\,x_1(t)x_4(t),$  (6.2a)
$\dfrac{dx_2(t)}{dt} = -\big(u_2(t) + u_3(t) + u_4(t)\big)x_2(t) + 35.2000\,x_3(t)x_4(t) - 51.3000\,x_2(t)x_7(t),$  (6.2b)
$\dfrac{dx_3(t)}{dt} = u_2(t) - \big(u_2(t) + u_3(t) + u_4(t)\big)x_3(t) - 17.6000\,x_3(t)x_4(t) - 23.0000\,x_3(t)x_8(t)u_1(t),$  (6.2c)
$\dfrac{dx_4(t)}{dt} = u_3(t) - \big(u_2(t) + u_3(t) + u_4(t)\big)x_4(t) - 17.6000\,x_3(t)x_4(t) - 146.0000\,x_1(t)x_4(t),$  (6.2d)
$\dfrac{dx_5(t)}{dt} = -\big(u_2(t) + u_3(t) + u_4(t)\big)x_5(t) + 46.0000\,x_3(t)x_8(t)u_1(t),$  (6.2e)
$\dfrac{dx_6(t)}{dt} = 5.8000\big[\big(u_2(t) + u_3(t) + u_4(t)\big)x_3(t) - u_2(t)\big] - 3.7000\,u_3(t) - 4.1000\,u_4(t)$  (6.2f)
$\qquad + \big(u_2(t) + u_3(t) + u_4(t)\big)\big[23.0000\,x_2(t) + 35.0000\,x_5(t) + 11.0000\,x_7(t) + 28.0000\,x_8(t)\big]$  (6.2g)
$\qquad - 5.0000\,u_1(t)^2 - 0.0990,$  (6.2h)
$\dfrac{dx_7(t)}{dt} = -\big(u_2(t) + u_3(t) + u_4(t)\big)x_7(t) + 219.0000\,x_1(t)x_4(t) - 51.3000\,x_2(t)x_7(t),$  (6.2i)
$\dfrac{dx_8(t)}{dt} = -\big(u_2(t) + u_3(t) + u_4(t)\big)x_8(t) + 102.6000\,x_2(t)x_7(t) - 23.0000\,x_3(t)x_8(t)u_1(t),$  (6.2j)

with the initial condition
(6.3)
$x(0) = x_0 = [0.0467, 0.0899, 0.1883, 0.2507, 0.1046, 0.0000, 0.1804, 0.1394]^T,$
the stochastic constraints
(6.4)
$\mathrm{Prob}\{u_1(t) - 4.0000 + \theta_1 \leq 0.0000\} \geq \varepsilon_1,$
(6.5)
$\mathrm{Prob}\{u_2(t) - 20.0000 + \theta_2 \leq 0.0000\} \geq \varepsilon_2,$
(6.6)
$\mathrm{Prob}\{u_3(t) - 20.0000 + \theta_3 \leq 0.0000\} \geq \varepsilon_3,$
(6.7)
$\mathrm{Prob}\{u_4(t) - 6.0000 + \theta_4 \leq 0.0000\} \geq \varepsilon_4,$
and the nonnegativity constraints
(6.8)
$u_i(t) \geq 0.0000, \quad i = 1,2,3,4,$
where $x(t) = [x_1(t),x_2(t),x_3(t),x_4(t),x_5(t),x_6(t),x_7(t),x_8(t)]^T$; $u(t) = [u_1(t),u_2(t),u_3(t),u_4(t)]^T$; in this reaction, $x_1(t)$-$x_5(t)$ and $x_7(t)$-$x_8(t)$ denote the weight fractions of the seven species; $x_6(t)$ denotes a variable transformed from the cost function; $u(t)$ collects the control inputs (the electrical energy input $u_1(t)$ and the feed stream flow rates $u_2(t)$, $u_3(t)$, $u_4(t)$); $\theta_i$, $i = 1,2,3,4$, are four random variables denoting the influence of external disturbances on the various constraints of the nonlinear continuous stirred-tank reactor problem; and $\varepsilon_i$, $i = 1,2,3,4$, denote the allowable risk values.
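For reference, the dynamics (6.2a)-(6.2j) can be transcribed into a right-hand-side function and integrated under a constant admissible control. The sketch below follows the equations as reconstructed above; the constant control equals the initial guess $\tilde{U}^0 = [2,10,10,3]^T$ given in Section 6.2, and the integrator tolerance is an assumption.

    import numpy as np
    from scipy.integrate import solve_ivp

    def cstr_rhs(x, u):
        """Right-hand side f(x, u) of the CSTR dynamics (6.2a)-(6.2j)."""
        x1, x2, x3, x4, x5, x6, x7, x8 = x
        u1, u2, u3, u4 = u
        q = u2 + u3 + u4                             # total feed through-flow
        dx1 = u4 - q * x1 - 73.0 * x1 * x4
        dx2 = -q * x2 + 35.2 * x3 * x4 - 51.3 * x2 * x7
        dx3 = u2 - q * x3 - 17.6 * x3 * x4 - 23.0 * x3 * x8 * u1
        dx4 = u3 - q * x4 - 17.6 * x3 * x4 - 146.0 * x1 * x4
        dx5 = -q * x5 + 46.0 * x3 * x8 * u1
        dx6 = (5.8 * (q * x3 - u2) - 3.7 * u3 - 4.1 * u4
               + q * (23.0 * x2 + 35.0 * x5 + 11.0 * x7 + 28.0 * x8)
               - 5.0 * u1**2 - 0.099)
        dx7 = -q * x7 + 219.0 * x1 * x4 - 51.3 * x2 * x7
        dx8 = -q * x8 + 102.6 * x2 * x7 - 23.0 * x3 * x8 * u1
        return np.array([dx1, dx2, dx3, dx4, dx5, dx6, dx7, dx8])

    x0 = np.array([0.0467, 0.0899, 0.1883, 0.2507, 0.1046, 0.0, 0.1804, 0.1394])
    u_const = np.array([2.0, 10.0, 10.0, 3.0])       # initial guess from Section 6.2
    sol = solve_ivp(lambda t, x: cstr_rhs(x, u_const), (0.0, 0.25), x0, rtol=1e-8)
    print(sol.y[5, -1])                              # x6(0.25): objective value for this control

In the collocation scheme of Section 5, the same function appears on the right-hand side of the defect constraints (5.13b).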


6.2 Parameter description

To discretize the nonlinear continuous-time dynamical system, we adopt $M = 60$ Legendre–Gauss–Radau points. Let $v_1 = 2.0000$ and $v_2 = 1.0000$. In order to make Algorithm 5.1 converge quickly, we set $\tilde{U}^0 = [2, 10, 10, 3]^T$, which satisfies the stochastic constraints (6.4)-(6.8) (i.e., $\tilde{U}^0\in U$ is an admissible control). Then, the technique proposed in Sections 3 and 4 of this paper is adopted to transform the stochastic constraints described by (6.4)-(6.7). Based on the analysis and discussion of Sections 3 and 4, these stochastic constraints can be transformed into the following form:

(6.9)
$\Upsilon_1(\alpha,u(t)) = E[B(\alpha, u_1(t) - 4.0000 + \theta_1)] \leq 1 - \varepsilon_1,$
(6.10)
$\Upsilon_2(\alpha,u(t)) = E[B(\alpha, u_2(t) - 20.0000 + \theta_2)] \leq 1 - \varepsilon_2,$
(6.11)
$\Upsilon_3(\alpha,u(t)) = E[B(\alpha, u_3(t) - 20.0000 + \theta_3)] \leq 1 - \varepsilon_3,$
(6.12)
$\Upsilon_4(\alpha,u(t)) = E[B(\alpha, u_4(t) - 6.0000 + \theta_4)] \leq 1 - \varepsilon_4,$
where the random variables $\theta_i$, $i = 1,2,3,4$, follow a normal distribution (e.g., $\theta_i \sim N(0.0000, 0.0100)$, $i = 1,2,3,4$) and the allowable risk values are specified by $1 - \varepsilon_i = 0.1000$, $i = 1,2,3,4$. In addition, a sufficiently large-scale sample (e.g., $\tilde{M} = 1.0000\times 10^6$) should be chosen in order to obtain convergent numerical results.
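As a quick feasibility check of the initial guess, the four functions (6.9)-(6.12) can be estimated at $\tilde{U}^0 = [2,10,10,3]^T$ by Monte Carlo. The sketch below reads $N(0.0000, 0.0100)$ as a variance of $0.01$ (standard deviation $0.1$), which is an assumption about the notation, and evaluates the constraints at the initial value $\alpha_0 = 10^{-2}$ from Algorithm 5.1.

    import numpy as np

    rng = np.random.default_rng(0)
    M_tilde = 1_000_000
    theta = rng.normal(0.0, 0.1, size=(4, M_tilde))  # assumed std 0.1, i.e. variance 0.01
    v1, v2, alpha = 2.0, 1.0, 1e-2                   # Section 6.2 values and alpha_0

    def B(alpha, z):
        expo = np.clip(-z / alpha, None, 700.0)      # clipped only to avoid overflow
        return (1.0 + alpha * v1) / (1.0 + alpha * v2 * np.exp(expo))

    U0 = np.array([2.0, 10.0, 10.0, 3.0])            # initial guess from this subsection
    limits = np.array([4.0, 20.0, 20.0, 6.0])        # bounds appearing in (6.4)-(6.7)
    g = U0[:, None] - limits[:, None] + theta        # g_i(U0, theta_l), shape (4, M_tilde)
    upsilon = np.mean(B(alpha, g), axis=1)
    print(upsilon, np.all(upsilon <= 0.1))           # all four values are far below 1 - eps_i = 0.1

All four estimates are essentially zero, confirming that $\tilde{U}^0$ lies well inside the approximate feasible set $\Gamma(\alpha_0)$.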

Following the above transformation method, the original nonlinear continuous stirred-tank reactor problem with stochastic constraints is rewritten as a deterministic constrained optimal control problem, which can be solved by using some existing numerical algorithms. In this paper, all the numerical simulation results are obtained by using the technique proposed in Sections 3–5 with the sequential quadratic programming method.


6.3 Impact of the parameter α

In this subsection, we investigate the sensitivity of the proposed numerical computation method with respect to the parameter α. In general, it is difficult to choose a suitable value of the parameter α, since α has no clear physical meaning. Based on Theorem 3.1 in Section 3 and Theorems 4.1–4.2 in Section 4, the solution accuracy can be improved by using a larger value of the parameter α. Unfortunately, this may give rise to some computational difficulties for the proposed numerical computation method. By specifying α = 300, 600, 1200, 2400, 4800, the numerical simulation results are presented in Fig. 6.2, which implies that the numerical solution is sensitive to the value of the parameter α. In other words, a better objective value, together with a more aggressive stochastic constraint violation rate, can be obtained by increasing the value of the parameter α. Thus, a suitable treatment of the parameter α is needed. For this reason, an adaptive strategy is designed in Algorithm 5.1.

Fig. 6.2
Numerical results of the sensitivity analysis with respect to the parameter α.


6.4 Numerical simulation results

The numerical computation method proposed in Sections 3–5 is adopted to solve the nonlinear continuous stirred-tank reactor problem, and the corresponding numerical simulation results are presented in Figs. 6.3 and 6.4. Further, the stochastic constraint violation of the proposed numerical computation method is presented in Fig. 6.5, which shows that all the nonnegativity constraints described by (6.8) are satisfied strictly, whereas the control input cannot reach its boundary values exactly because of the stochastic limits. In addition, from Fig. 6.5, the violation rates of the stochastic constraints remain smaller than the maximum allowable values 1 − ε_i = 0.1000, i = 1,2,3,4, during the time horizon t ∈ [0.0000, 0.2000]. These numerical simulation results confirm the effectiveness of the proposed numerical computation method.

Fig. 6.3
The optimal control input obtained by using the SO approach (Calafiore and Campi, 2006), the SA method (Luedtke and Ahmed, 2008), the MH algorithm (Poojari and Varghese, 2008), the ROA technique (Li and Li, 2015), and the numerical computation method proposed in this paper.
Fig. 6.4
The optimal system state obtained by using the SO approach (Calafiore and Campi, 2006), the SA method (Luedtke and Ahmed, 2008), the MH algorithm (Poojari and Varghese, 2008), the ROA technique (Li and Li, 2015), and the numerical computation method proposed in this paper.
Fig. 6.5
Constraint violations of u1(t), u2(t), u3(t), and u4(t) obtained by using the SO approach (Calafiore and Campi, 2006), the SA method (Luedtke and Ahmed, 2008), the MH algorithm (Poojari and Varghese, 2008), the ROA technique (Li and Li, 2015), and the numerical computation method proposed in this paper.


6.5 Comparison with other typical approaches

To further illustrate the effectiveness of the proposed numerical computation method, a comparative study is carried out on the optimal system state trajectories and the stochastic constraint violations by implementing the proposed numerical computation method and four other typical approaches: the scenario optimization (SO) approach (Calafiore and Campi, 2006), the sample approximation (SA) method (Luedtke and Ahmed, 2008), the meta-heuristic (MH) algorithm (Poojari and Varghese, 2008), and the robust optimization approximation (ROA) technique (Li and Li, 2015), whose main features were summarized in Section 1. The optimal system state and control input obtained by using these typical approaches are also presented in Figs. 6.3 and 6.4, and the corresponding stochastic constraint violations are presented in Fig. 6.5.

Detailed numerical simulation results concerning the maximum violation rates V(θ_i), i = 1,2,3,4, of the stochastic constraints and the optimal objective function value J for the different approaches are presented in Table 6.1. From Figs. 6.3 and 6.5, it follows that the nonnegativity constraints and the stochastic constraints are satisfied by the SO approach, the SA method, the MH algorithm, the ROA technique, and the proposed numerical computation method. Thus, all five approaches are feasible for solving the nonlinear continuous stirred-tank reactor problem. Further, according to the data in Table 6.1 and Fig. 6.5, the proposed numerical computation method usually performs better than the SO approach, the SA method, the MH algorithm, and the ROA technique. More specifically, compared with these four methods, the constraint violation rates obtained by using the proposed numerical computation method are closer to the maximum allowable violation rates, which implies that it offers a less conservative solution.

Table 6.1 The optimality and conservatism of the SO approach (Calafiore and Campi, 2006), the SA method (Luedtke and Ahmed, 2008), the MH algorithm (Poojari and Varghese, 2008), the ROA technique (Li and Li, 2015), and the numerical computation method proposed in this paper.

Algorithm          Maximum violation rate                            Objective function value
                   V(θ1)     V(θ2)     V(θ3)     V(θ4)     J
SO                 3.4581%   3.5157%   3.4608%   3.3982%   18.4572
SA                 5.3369%   5.2892%   5.3587%   5.2186%   19.3401
MH                 6.2983%   6.4753%   6.3242%   6.4995%   19.8322
ROA                8.4675%   8.3696%   8.2678%   8.3702%   20.6925
Proposed method    9.9241%   9.9389%   9.9182%   9.9816%   21.7922


6.6 Stability and robustness analysis

In this subsection, we present the results of a 1000-trial Monte Carlo analysis. The main purpose of this analysis is to further illustrate the stability and robustness of the proposed numerical computation method under small perturbations of the initial system state. In every trial, the initial system state is defined by x̃_0 = x_0 + δ_0, where δ_0 ∈ G and G = [0.0100, 0.0100, 0.0500, 0.0500, 0.0100, 2.0000, 0.0300, 0.0100]^T. The numerical simulation results for 600 realizations with small perturbations of the initial system state are presented in Figs. 6.6 and 6.7. From these two figures, it follows that introducing small perturbations of the initial system state gives rise to some differences in the optimal system state trajectory. Fortunately, the corresponding violation rates are less than the preassigned risk parameter values. In addition, all numerical simulation results successfully converge to a neighbourhood of the optimal solution of the nonlinear continuous stirred-tank reactor problem, which indicates that the proposed numerical computation method is stable and robust with respect to small perturbations of the initial system state.

Fig. 6.6
The system state obtained by using the proposed numerical computation method for 1000 realizations with small perturbations of the initial system state.
Fig. 6.7
Violation rates of u1(t), u2(t), u3(t), and u4(t) obtained by using the proposed numerical computation method for 1000 realizations with small perturbations of the initial system state.


6.7 Processing power analysis

In this subsection, we investigate and analyze the processing power requirements of the proposed numerical computation method. The total computational cost of the proposed approach mainly depends on two aspects: the function evaluations and the time needed by the nonlinear programming solver. If M Legendre–Gauss–Radau points are used, then the number of decision variables and the number of function evaluations become O((M+1)m + Mn) and O(M(m+q+1)), respectively. Clearly, the nonlinear programming solver requires more computation than the function evaluation part. In addition, the computational process is sensitive to the tolerance ε described in Algorithm 5.1. A sensitivity analysis for the tolerance ε is implemented and the numerical simulation results are presented in Table 6.2. From Table 6.2, it follows that the computation time for the nonlinear continuous stirred-tank reactor problem increases monotonically as the tolerance ε becomes smaller. In order to balance the computational cost and the numerical solution accuracy, the tolerance ε is set to 1.0000 × 10^-5 in this paper.
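As a rough worked example using the problem dimensions of Section 6 (m = 8 states, n = 4 controls, q = 4 stochastic constraints, and M = 60 collocation points), these estimates give on the order of (M+1)m + Mn = 61·8 + 60·4 = 728 decision variables and M(m+q+1) = 60·13 = 780 function evaluations per pass; the exact counts depend on implementation details not reported in the paper.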

Table 6.2 Processing power analysis with M = 60 and different values of the tolerance ε.

Tolerance ε                1×10^-2   1×10^-3   1×10^-4   1×10^-5   1×10^-6   1×10^-7
Objective function value   17.0541   19.7640   20.3690   21.7922   21.8327   21.8401
CPU time (seconds)          0.6480    0.6529    0.6703    5.2775    8.3590   19.3240

Although the solution accuracy can be improved by using a larger number M of Legendre–Gauss–Radau points, this may greatly increase the computational burden of the numerical approach developed in this paper. Note that the computational process is also sensitive to the parameter M. A sensitivity analysis for the parameter M is therefore implemented, and the numerical simulation results are presented in Table 6.3. From Table 6.3, it follows that the computation time for the nonlinear continuous stirred-tank reactor problem increases monotonically as the parameter M becomes larger. In order to balance the computational cost and the numerical solution accuracy, the parameter M is set to 60 in this paper.

Table 6.3 Processing power analysis with ε = 1×10⁻⁵ and different values of the parameter M.

Parameter M                7         15        30        60        120       240
Objective function value   15.8483   18.9181   19.5481   21.7922   21.8346   21.8412
CPU time (s)               0.6022    0.6263    0.6419    5.2775    8.3621    19.3323
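
For reference, the M Legendre–Gauss–Radau collocation points varied in Table 6.3 are the roots of P_{M-1}(τ) + P_M(τ) on [-1, 1), where P_k denotes the k-th Legendre polynomial; a short sketch of how they might be computed (not the code used in this paper) is:

    import numpy as np

    def lgr_points(M):
        """Return the M Legendre-Gauss-Radau points on [-1, 1), including τ = -1.

        They are the roots of P_{M-1} + P_M, assembled here as a Legendre series
        with unit coefficients on degrees M-1 and M.
        """
        coeffs = np.zeros(M + 1)
        coeffs[M - 1] = 1.0   # coefficient of P_{M-1}
        coeffs[M] = 1.0       # coefficient of P_M
        return np.sort(np.polynomial.legendre.legroots(coeffs))

    # Example: the baseline grid size used in Tables 6.2 and 6.3.
    tau = lgr_points(60)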

In conclusion, the above numerical simulation results show that, compared with the SO approach, the SA method, the MH algorithm, and the ROA technique, the proposed numerical computation method is less conservative and achieves a stable and robust performance under small perturbations in the initial system state.


7 Conclusion

In this paper, a convergent approximation approach is proposed for solving the optimal control problem of nonlinear chemical processes with stochastic constraints. The main feature of this approach is that it introduces a smooth and differentiable function to obtain a subset of feasible solutions of the stochastic-constrained optimal control problem. Furthermore, the convergence analysis shows that the approximation function and the corresponding feasible set converge uniformly to those of the original optimal control problem as the adjusting parameter α decreases. A numerical computation approach is then proposed for solving the original optimal control problem. Finally, to verify the effectiveness of the approach proposed in this paper, the nonlinear continuous stirred-tank reactor problem is extended by introducing stochastic constraints, and the proposed approach is used to solve the resulting stochastic-constrained optimal control problem. The numerical simulation results and the comparative study show that, compared with other existing typical methods, the proposed approach is less conservative and achieves a stable and robust performance under small perturbations in the initial system state.

Acknowledgements

The authors express their sincere gratitude to Professor Abdulrahman Abdullah Alwarthan, the editor, and the anonymous reviewers for their constructive comments, which improved the presentation and quality of this manuscript. This work was supported by the National Natural Science Foundation of China under Grant Nos. 61963010 and 61563011, and by the Special Project for Cultivation of New Academic Talent and Innovation Exploration of Guizhou Normal University, PR China in 2019 under Grant No. 11904-0520077.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
