5. Consider the system

ẋ = Ax + Bu

and suppose we want to design state feedback control u = F x to stabilize the system. The design of F is a tradeoff between the transient response and the control effort. The optimal control approach to this design tradeoff is to define the performance index (cost functional)

J = ∫₀^∞ [xᵀ(t)Qx(t) + uᵀ(t)Ru(t)] dt

and search for the control u = F x that minimizes this index.
Q is an n × n symmetric positive semidefinite matrix and R is an m × m symmetric positive definite matrix
– p. 2/22
8. The matrix Q can be written as Q = MᵀM, where M is a p × n matrix, with p ≤ n. With this representation

xᵀQx = xᵀMᵀMx = zᵀz

where z = Mx can be viewed as a controlled output
Optimal Control Problem: Find u(t) = F x(t) to minimize J subject to the constraint ẋ = Ax + Bu
Since J is defined by an integral over [0, ∞), the first question we need to address is: Under what conditions will J exist and be finite?
– p. 3/22
12. Write J as

J = lim_{tf→∞} J̄(tf),  J̄(tf) = ∫₀^tf [xᵀ(t)Qx(t) + uᵀ(t)Ru(t)] dt

J̄(tf) is a monotonically increasing function of tf. Hence, as tf → ∞, J̄(tf) either converges to a finite limit or diverges to infinity
Under what conditions will lim_{tf→∞} J̄(tf) be finite?
– p. 4/22
16. Recall that (A, B) is stabilizable if the uncontrollable
eigenvalues of A, if any, have negative real parts
Notice that (A, B) is stabilizable if (A, B) is controllable or
Re[λ(A)] < 0
Definition: (A, C) is detectable if the unobservable
eigenvalues of A, if any, have negative real parts
Lemma 1: Suppose (A, B) is stabilizable, (A, M) is detectable, where Q = MᵀM, and u(t) = F x(t). Then, J is finite for every x(0) ∈ Rⁿ if and only if Re[λ(A + BF)] < 0
– p. 5/22
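Both properties can be checked numerically with the PBH rank test. A minimal NumPy sketch (the helper names are mine, not from the lecture):

```python
import numpy as np

def stabilizable(A, B):
    """PBH test: (A, B) is stabilizable iff [A - lam*I, B] has full
    rank n for every eigenvalue lam of A with Re(lam) >= 0."""
    A = np.atleast_2d(np.asarray(A, dtype=float))
    B = np.atleast_2d(np.asarray(B, dtype=float))
    n = A.shape[0]
    return all(np.linalg.matrix_rank(np.hstack([A - lam * np.eye(n), B])) == n
               for lam in np.linalg.eigvals(A) if lam.real >= 0)

def detectable(A, M):
    """PBH test: (A, M) is detectable iff [A - lam*I; M] has full
    rank n for every eigenvalue lam of A with Re(lam) >= 0."""
    A = np.atleast_2d(np.asarray(A, dtype=float))
    M = np.atleast_2d(np.asarray(M, dtype=float))
    n = A.shape[0]
    return all(np.linalg.matrix_rank(np.vstack([A - lam * np.eye(n), M])) == n
               for lam in np.linalg.eigvals(A) if lam.real >= 0)
```

For instance, in the scalar case A = 1, M = 0 we get detectable([[1.0]], [[0.0]]) == False, while stabilizable([[1.0]], [[1.0]]) == True.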
23. Remarks:
1. The need for (A, B) to be stabilizable is clear, for otherwise there would be no F such that Re[λ(A + BF)] < 0
2. To see why detectability of (A, M) is needed, consider

ẋ = x + u,  J = ∫₀^∞ u²(t) dt

A = 1, B = 1, M = 0, R = 1
(A, B) is controllable, but (A, M) is not detectable
F = 0 ⇒ u(t) = 0 ⇒ J = 0
This control is clearly optimal and results in a finite J, but it does not stabilize the system because A + BF = A = 1
– p. 6/22
25. Lemma 2: For any stabilizing control u(t) = F x(t), the cost is given by

J = x(0)ᵀ W x(0)

where W is a symmetric positive semidefinite matrix that satisfies the Lyapunov equation

W(A + BF) + (A + BF)ᵀW + Q + FᵀRF = 0

Remark: The control u(t) = F x(t) is stabilizing if Re[λ(A + BF)] < 0
– p. 7/22
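Lemma 2 is easy to cross-check numerically: solve the Lyapunov equation and read off the cost. A SciPy sketch, where the double-integrator plant and the gain F are my own illustrative choices, not from the lecture:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative plant and stabilizing gain: double integrator, F = [-1, -2]
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
F = np.array([[-1.0, -2.0]])          # places both closed-loop poles at -1
Q = np.eye(2)
R = np.array([[1.0]])

Acl = A + B @ F
# scipy solves a X + X a^H = q, so pass a = Acl^T and q = -(Q + F^T R F)
W = solve_continuous_lyapunov(Acl.T, -(Q + F.T @ R @ F))

# W must satisfy W Acl + Acl^T W + Q + F^T R F = 0 and be PSD
residual = W @ Acl + Acl.T @ W + Q + F.T @ R @ F
x0 = np.array([1.0, 0.0])
J = x0 @ W @ x0                       # cost from Lemma 2: J = x(0)^T W x(0)
```

Note the transpose passed to the solver: the Lyapunov equation in the lemma has W multiplying (A + BF) from the left in the first term, while SciPy's convention puts it on the right.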
26. Theorem: Consider the system ẋ = Ax + Bu and the performance index

J = ∫₀^∞ [xᵀ(t)Qx(t) + uᵀ(t)Ru(t)] dt

where Q = MᵀM, R is symmetric and positive definite, (A, B) is stabilizable, and (A, M) is detectable. The optimal stabilizing control is

u(t) = −R⁻¹Bᵀ P x(t)

where P is the symmetric positive semidefinite solution of the Algebraic Riccati Equation (ARE)

0 = PA + AᵀP + Q − PBR⁻¹BᵀP

– p. 8/22
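The theorem translates directly into a few lines of SciPy. A sketch on an illustrative double-integrator plant (my choice, not from the lecture):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # illustrative plant
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                            # Q = M^T M with M = I
R = np.array([[1.0]])

# P solves  P A + A^T P + Q - P B R^-1 B^T P = 0
P = solve_continuous_are(A, B, Q, R)
F = -np.linalg.solve(R, B.T @ P)         # optimal gain, u = F x

ARE_residual = P @ A + A.T @ P + Q - P @ B @ np.linalg.solve(R, B.T @ P)
closed_loop_poles = np.linalg.eigvals(A + B @ F)
```

For this plant the known closed-form answer is F = [−1, −√3], and the closed-loop poles have negative real parts, as the theorem guarantees.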
30. Remarks:
1. Since the control is stabilizing, Re[λ(A − BR⁻¹BᵀP)] < 0
2. The control is optimal among all square integrable signals u(t), not just among u(t) = F x(t)
3. The Riccati equation can have more than one solution, but there is only one solution that is positive semidefinite
– p. 9/22
51. Matlab Calculations:
X = lyap(A,N) solves the Lyapunov equation

AX + XAᵀ + N = 0

To solve the equation

W(A + BF) + (A + BF)ᵀW + Q + FᵀRF = 0

use W = lyap((A+B*F)',Q+F'*R*F) (notice the transpose)
– p. 14/22
54. X = are(A,S,Q) solves the Riccati equation

XA + AᵀX + Q − XSX = 0,  S = BR⁻¹Bᵀ

[K,P,E] = lqr(A,B,Q,R) solves the Riccati equation

PA + AᵀP + Q − PBR⁻¹BᵀP = 0

K = R⁻¹BᵀP and E is a vector whose elements are the eigenvalues of (A − BR⁻¹BᵀP)
Notice that F = −K
– p. 15/22
56. LQR Design
Given the system
ẋ = Ax + Bu
where (A, B) is stabilizable, we want to design state
feedback control u = F x to stabilize the system while
meeting
Transient response specifications
Magnitude constraints on x and u
– p. 16/22
61. Procedure:
1. Choose Q and R such that Q = MᵀM, with (A, M) detectable, and R = Rᵀ > 0
2. Solve the Riccati equation

PA + AᵀP + Q − PBR⁻¹BᵀP = 0

and compute F = −R⁻¹BᵀP
3. Simulate the initial response of ẋ = (A + BF)x for different initial conditions
4. If the transient response specifications and/or the magnitude constraints are not met, go back to step 1 and re-choose Q and/or R
– p. 17/22
62. Typical Choice:

Q = diag(q₁, q₂, …, qₙ),  R = ρ diag(r₁, r₂, …, rₘ)

qi = 1/(tsi (ximax)²),  ri = 1/(uimax)²,  ρ > 0

tsi is the desired settling time of xi
ximax is a constraint on |xi|
uimax is a constraint on |ui|
ρ is chosen to trade off regulation versus control effort
– p. 18/22
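This rule of thumb is easy to script. A minimal sketch (the function name and the numeric specs are mine, chosen only for illustration):

```python
import numpy as np

def typical_qr(ts, xmax, umax, rho=1.0):
    """Build diagonal Q and R from the rule of thumb:
    q_i = 1/(ts_i * x_imax^2),  r_i = 1/u_imax^2,  with R scaled by rho."""
    q = [1.0 / (t * x**2) for t, x in zip(ts, xmax)]
    r = [1.0 / u**2 for u in umax]
    return np.diag(q), rho * np.diag(r)

# Illustrative specs: two states, one input
Q, R = typical_qr(ts=[4.0, 4.0], xmax=[2.0, 1.0], umax=[5.0], rho=0.5)
```

Tightening a state constraint ximax or shortening tsi raises the corresponding qi, penalizing that state more heavily; lowering ρ makes control effort cheaper.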
65. Example:

A = [  0     1     0
       0     0     1
     −0.4  −4.2  −2.1 ],  B = [ 0
                                0
                                1 ]

Open-loop eigenvalues are: −0.1, −1 ± 1.7321j
Desired settling time is 4 sec.
Q = I₃; Try R = 0.01, 0.1, and 1

R      λ
0.01   −9.7364, −0.9039 ± 0.4593j
0.1    −0.6568, −1.9548 ± 1.0158j
1      −0.2599, −1.1433 ± 1.6840j
– p. 19/22
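The table can be reproduced with SciPy's ARE solver in place of MATLAB's lqr. This sketch only verifies that each choice of R yields a stabilizing gain, and that the cheaper-control case pushes the dominant pole further left, as the table shows:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-0.4, -4.2, -2.1]])
B = np.array([[0.0], [0.0], [1.0]])
Q = np.eye(3)

poles = {}
for r in (0.01, 0.1, 1.0):
    R = np.array([[r]])
    P = solve_continuous_are(A, B, Q, R)
    F = -np.linalg.solve(R, B.T @ P)     # F = -R^-1 B^T P
    poles[r] = np.linalg.eigvals(A + B @ F)
```

All three closed loops are stable; the exact pole values should match the table up to rounding.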