System Identification, Estimation, and Modeling
Peter Schneider
University of California, Los Angeles
Abstract
System identification methods are used to model multi-input/multi-
output systems from measured input-output data in the frequency do-
main, with a nonlinear least-squares algorithm, and in the time domain,
with Hankel matrices constructed from discrete-time impulse response
data. Model reduction is performed, by utilizing a balanced realization
and the singular value decomposition, to identify and discard low-gain
states from already minimal realizations. The trade-offs of reducing system
models to different sizes are examined, and it is demonstrated that by
appropriately selecting a reduced size, the lower-order systems model the
much more complex ones with relatively little loss in accuracy.
Contents
1 Introduction 1
2 Model Reduction with Balanced Realization 1
2.1 Model Estimation from Frequency Response Features . . . . . . . 1
2.2 MIMO State Space Model Representation . . . . . . . . . . . . . 5
2.3 Minimal Realization . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.4 Model Reduction with Balanced Realization . . . . . . . . . . . . 7
2.5 Final Reduced Model . . . . . . . . . . . . . . . . . . . . . . . . . 16
3 Model Estimation from Nonlinear Least-Squares Solution 19
3.1 Least-Squares Problem . . . . . . . . . . . . . . . . . . . . . . . . 19
3.1.1 Linear Least-Squares . . . . . . . . . . . . . . . . . . . . . 19
3.1.2 Extension to Nonlinear Least-Squares . . . . . . . . . . . 19
3.2 Application to Transfer Function Identification . . . . . . . . . . 20
3.2.1 Model Representation . . . . . . . . . . . . . . . . . . . . 20
3.2.2 Unweighted Nonlinear Least-Squares Problem . . . . . . . 20
3.2.3 Levy’s Linearized Estimator . . . . . . . . . . . . . . . . . 21
3.2.4 Sanathanan-Koerner Iteration . . . . . . . . . . . . . . . . 21
3.2.5 Extension to MIMO System Model Estimation . . . . . . 23
3.2.6 Polynomial Order and Weighting Function Selection . . . 25
3.3 Evolution of Iterations Towards Convergence . . . . . . . . . . . 26
3.4 Final Converged Least-Squares Solution Models . . . . . . . . . . 29
4 Subspace Realization from Impulse Response 33
4.1 Discrete-Time State Space Model . . . . . . . . . . . . . . . . . . 33
4.2 Markov Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . 33
4.3 Measured Impulse Response . . . . . . . . . . . . . . . . . . . . . 34
4.4 Hankel Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
4.5 Model Estimation from Singular Value Decomposition . . . . . . 37
4.6 Full-Order Model . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
4.7 Model Reduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
4.8 Final Reduced Model . . . . . . . . . . . . . . . . . . . . . . . . . 51
5 Conclusion 57
Bibliography 58
List of Figures
1 Full Order Model 11 Channel . . . . . . . . . . . . . . . . . . . . 3
2 Full Order Model 12 Channel . . . . . . . . . . . . . . . . . . . . 3
3 Full Order Model 21 Channel . . . . . . . . . . . . . . . . . . . . 4
4 Full Order Model 22 Channel . . . . . . . . . . . . . . . . . . . . 4
5 Hankel Singular Values . . . . . . . . . . . . . . . . . . . . . . . . 9
6 Reduced Models of Varying Size 11 Channel . . . . . . . . . . . . 10
7 Reduced Models of Varying Size 12 Channel . . . . . . . . . . . . 11
8 Reduced Models of Varying Size 21 Channel . . . . . . . . . . . . 11
9 Reduced Models of Varying Size 22 Channel . . . . . . . . . . . . 12
10 Error Full Order Model vs Reduced Model 11 Channel . . . . . . 13
11 Error Full Order Model vs Reduced Model 12 Channel . . . . . . 14
12 Error Full Order Model vs Reduced Model 21 Channel . . . . . . 14
13 Error Full Order Model vs Reduced Model 22 Channel . . . . . . 15
14 Error Full Order Model vs Reduced Model MIMO System . . . . 15
15 Reduced Model 11 Channel . . . . . . . . . . . . . . . . . . . . . 16
16 Reduced Model 12 Channel . . . . . . . . . . . . . . . . . . . . . 17
17 Reduced Model 21 Channel . . . . . . . . . . . . . . . . . . . . . 17
18 Reduced Model 22 Channel . . . . . . . . . . . . . . . . . . . . . 18
19 Convergence of Least Squares Solution Error . . . . . . . . . . . 26
20 Evolution of Least Squares Iterations 11 Channel . . . . . . . . . 27
21 Evolution of Least Squares Iterations 12 Channel . . . . . . . . . 27
22 Evolution of Least Squares Iterations 21 Channel . . . . . . . . . 28
23 Evolution of Least Squares Iterations 22 Channel . . . . . . . . . 28
24 Least Squares Solutions 11 Channel . . . . . . . . . . . . . . . . 29
25 Least Squares Solutions 12 Channel . . . . . . . . . . . . . . . . 30
26 Least Squares Solutions 21 Channel . . . . . . . . . . . . . . . . 30
27 Least Squares Solutions 22 Channel . . . . . . . . . . . . . . . . 31
28 Fifth-Order Model Pole Zero Map . . . . . . . . . . . . . . . . . 31
29 Fourth-Order Model Pole Zero Map . . . . . . . . . . . . . . . . 32
30 System Block Diagram . . . . . . . . . . . . . . . . . . . . . . . . 34
31 Measured Impulse Response Output 1 . . . . . . . . . . . . . . . 34
32 Measured Impulse Response Output 2 . . . . . . . . . . . . . . . 35
33 Measured Impulse Response Output 3 . . . . . . . . . . . . . . . 35
34 Full-Order Model Impulse Response Output 1 . . . . . . . . . . . 39
35 Full-Order Model Impulse Response Output 2 . . . . . . . . . . . 39
36 Full-Order Model Impulse Response Output 3 . . . . . . . . . . . 40
37 Full-Order Model Impulse Response Error Output 1 . . . . . . . 40
38 Full-Order Model Impulse Response Error Output 2 . . . . . . . 41
39 Full-Order Model Impulse Response Error Output 3 . . . . . . . 41
40 Full-Order Model Frequency Response Output 1 . . . . . . . . . 42
41 Full-Order Model Frequency Response Output 2 . . . . . . . . . 42
42 Full-Order Model Frequency Response Output 3 . . . . . . . . . 43
43 Eigenvalues for Full-Order System . . . . . . . . . . . . . . . . . 44
44 Instability of Full-Order System Model Shown in Impulse Response 45
45 Singular Values of H(0) . . . . . . . . . . . . . . . . . . . . . . . 46
46 Fifteen Largest Singular Values . . . . . . . . . . . . . . . . . . . 48
47 Output 1 Impulse Response Errors for Reduced Systems . . . . . 49
48 Output 2 Impulse Response Errors for Reduced Systems . . . . . 49
49 Output 3 Impulse Response Errors for Reduced Systems . . . . . 50
50 Reduced-Order Model Impulse Response Output 1 . . . . . . . . 51
51 Reduced-Order Model Impulse Response Output 2 . . . . . . . . 52
52 Reduced-Order Model Impulse Response Output 3 . . . . . . . . 52
53 Reduced-Order Model Frequency Response Output 1 . . . . . . . 53
54 Reduced-Order Model Frequency Response Output 2 . . . . . . . 53
55 Reduced-Order Model Frequency Response Output 3 . . . . . . . 54
56 Eigenvalues for Reduced-Order Model . . . . . . . . . . . . . . . 54
57 Reduced-Order Model Impulse Response Error Output 1 . . . . . 55
58 Reduced-Order Model Impulse Response Error Output 2 . . . . . 55
59 Reduced-Order Model Impulse Response Error Output 3 . . . . . 56
1 Introduction
System identification refers to the process of extracting information about a
system from measured input-output data and using that information to build
mathematical models capable of explaining the data. Models typically describe
the behavior of a system either in the frequency domain, with transfer functions,
or in the time domain, with state-space models. Each system representation is
derived from differential equations that approximate complex naturally occur-
ring phenomena.
System identification also includes optimization of model design such as
model reduction. This involves projecting a higher-order model onto a lower-
order model having properties similar to the original one. While naturally oc-
curring phenomena are arguably of infinite order, if their characteristics can be
adequately represented in a much simpler model, they can be analyzed with less
computational time, less storage, and less cost than the original problem. This
can be advantageous for such purposes as direct simulation of dynamic systems,
where as more detail is included, the dimensions of the simulations can increase
to prohibitive levels.
System identification methods have applications in a wide range of indus-
tries, including nearly all areas of engineering. Methods are commonly used to
model mechanical, aerospace, electrical, chemical, and biological systems as well
as economic and social systems. Their applications include use in simulation,
control design, analysis, and prediction of complex systems.
This report covers several different methods for identifying and modeling a
system from measured data. In section two, we consider model reduction from
data collected in the frequency domain. In section three, we consider identification
of a system model in the frequency domain with a least-squares algorithm.
In section four, we consider subspace realization from impulse response data
collected in discrete time and subsequent model reduction.
2 Model Reduction with Balanced Realization
2.1 Model Estimation from Frequency Response Features
It is first desired to fit asymptotically stable transfer functions to empirical
frequency response data collected from a multi-input/multi-output (MIMO)
system. The system in this case is two-input/two-output, comprised of four
individual single-input/single-output (SISO) channels, each of which exhibits
simple high-pass filter features, low-pass filter features, and resonant frequency
features.
By observing these frequency response features, a transfer function for each
SISO channel is approximated by combining the transfer functions for simple
first order low-pass filters, first order high-pass filters, and oscillators, respec-
tively. These transfer functions are given as
Hlp(s) = ωl / (s + ωl)

Hhp(s) = s / (s + ωh)

Hosc(s) = ωn² / (s² + 2ζωns + ωn²)

where ωl and ωh are cutoff frequencies for the low-pass filter and high-pass
filter, respectively; ωn is the natural frequency and ζ is the damping ratio for
the oscillator transfer function.
By appropriately combining these transfer functions and adjusting their pa-
rameters, an asymptotically stable transfer function for each SISO channel is
estimated, with frequency response closely fitting the empirical data (Figures
1-4). For this case, the transfer function for each channel was constructed with
the following combination:
H11(s) = Hlp(s)Hlp(s)Hosc(s)
H12(s) = Hlp(s)Hosc(s)
H21(s) = Hlp(s)Hhp(s)Hosc(s)
H22(s) = Hhp(s)Hosc(s)
Common features in the empirical data, such as resonant frequencies and cutoff
frequencies, were kept the same across all channels when possible. This en-
sures that the different transfer functions share some identical features, making
it easier to reduce the final system model without affecting the input-output
properties.
The estimated transfer function for the MIMO system is made up of the four
SISO transfer functions stacked together:

H(s) = [ H11(s)  H12(s) ]
       [ H21(s)  H22(s) ]

Y(s) = H(s)U(s)
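The composition described above can be sketched numerically. In the following minimal numpy sketch, the function names and the parameter values (cutoffs near 7.6 rad/sec and a resonance near 72.9 rad/sec, echoing the eigenvalues quoted later in section 2.5) are illustrative assumptions, not the report's fitted model:

```python
import numpy as np

# first-order low-pass, first-order high-pass, and second-order oscillator
# transfer functions evaluated at s = jw
def H_lp(s, wl):
    return wl / (s + wl)

def H_hp(s, wh):
    return s / (s + wh)

def H_osc(s, wn, zeta):
    return wn**2 / (s**2 + 2 * zeta * wn * s + wn**2)

def H_mimo(w, wl, wh, wn, zeta):
    # stack the four SISO channels into the 2x2 transfer matrix H(jw)
    s = 1j * w
    H11 = H_lp(s, wl) * H_lp(s, wl) * H_osc(s, wn, zeta)
    H12 = H_lp(s, wl) * H_osc(s, wn, zeta)
    H21 = H_lp(s, wl) * H_hp(s, wh) * H_osc(s, wn, zeta)
    H22 = H_hp(s, wh) * H_osc(s, wn, zeta)
    return np.array([[H11, H12], [H21, H22]])
```

At s = 0 the low-pass and oscillator factors have unit gain while the high-pass factor vanishes, which reproduces the DC behavior visible in the measured channels.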
Figure 1: Full Order Model 11 Channel
Figure 2: Full Order Model 12 Channel
Figure 3: Full Order Model 21 Channel
Figure 4: Full Order Model 22 Channel

(each figure shows magnitude, mag(V/V), and phase, phase(deg), versus frequency, rad/sec, comparing the measured data to the full order model)
2.2 MIMO State Space Model Representation
While it was convenient to fit the frequency response of the system models to
the empirical data in the frequency domain, the transfer functions can now be
converted to state space form. The transfer function for each of the four SISO
channels is converted to the form
ẋ = Ax + Bu
y = Cx + Du
where x is the state vector, y is the output vector, u is the input vector, and
(A, B, C, D) are the system matrices.
The relationship between the transfer functions in the frequency domain and
the state-space models in the time domain can be seen by taking the Laplace
transform of the state space equations. This relationship is
H(s) = Y(s)/U(s) = C(sI − A)⁻¹B + D
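This relationship can be evaluated numerically at each frequency; a small numpy sketch (freq_resp is our name for the helper, not one used in the report):

```python
import numpy as np

def freq_resp(A, B, C, D, w):
    # H(jw) = C (jwI - A)^{-1} B + D, evaluated one frequency at a time
    n = A.shape[0]
    return np.array([C @ np.linalg.solve(1j * wk * np.eye(n) - A, B) + D
                     for wk in w])
```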
The state space representation for the four individual SISO channels can
then be stacked together into one MIMO state space representation for the
entire two-input/two-output system with
ẋ = [ A11  0    0    0   ]     [ B11  0   ]
    [ 0    A12  0    0   ] x + [ 0    B12 ] u
    [ 0    0    A21  0   ]     [ B21  0   ]
    [ 0    0    0    A22 ]     [ 0    B22 ]

y = [ C11  C12  0    0   ] x + [ D11  D12 ] u
    [ 0    0    C21  C22 ]     [ D21  D22 ]
where in general, A ∈ C^(n×n), B ∈ C^(n×m), C ∈ C^(p×n), and D ∈ C^(p×m) for an
m-input/p-output system. Here we have m = 2, p = 2. We are left with a single
system model for the entire MIMO system with 14 states.
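The stacking can be sketched as follows, assuming each channel is given as a SISO (A, B, C, D) tuple; stack_mimo is a hypothetical helper name:

```python
import numpy as np
from scipy.linalg import block_diag

def stack_mimo(sys11, sys12, sys21, sys22):
    # each sysij is a SISO (A, B, C, D) tuple; stack into one 2-in/2-out model
    A11, B11, C11, D11 = sys11
    A12, B12, C12, D12 = sys12
    A21, B21, C21, D21 = sys21
    A22, B22, C22, D22 = sys22
    A = block_diag(A11, A12, A21, A22)
    B = np.vstack([
        np.hstack([B11, np.zeros_like(B11)]),
        np.hstack([np.zeros_like(B12), B12]),
        np.hstack([B21, np.zeros_like(B21)]),
        np.hstack([np.zeros_like(B22), B22]),
    ])
    C = np.vstack([
        np.hstack([C11, C12, np.zeros_like(C21), np.zeros_like(C22)]),
        np.hstack([np.zeros_like(C11), np.zeros_like(C12), C21, C22]),
    ])
    D = np.block([[D11, D12], [D21, D22]])
    return A, B, C, D
```

The columns of B correspond to inputs 1 and 2, and the rows of C to outputs 1 and 2, matching the block layout above.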
2.3 Minimal Realization
It is now desired to reduce the system to a minimal realization by removing any
uncontrollable and unobservable modes from the system. These modes can be
removed without affecting the input-output properties of the system.
The controllability and observability properties of the system can be de-
termined from the ranks of the controllability and observability matrices given
as
C = [ B  AB  A²B  ···  A^(n−1)B ]

O = [ C       ]
    [ CA      ]
    [ CA²     ]
    [ ⋮       ]
    [ CA^(n−1)]
A system is controllable if and only if the controllability matrix is full rank and
is observable if and only if the observability matrix is full rank. For this system,
we find that rank(C) < n and rank(O) < n, allowing us to conclude that the
system is both uncontrollable and unobservable.
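These rank tests can be sketched directly from the definitions (ctrb and obsv are our helper names; for large n these matrices become numerically ill-conditioned, so this is a conceptual check rather than a robust numerical test):

```python
import numpy as np

def ctrb(A, B):
    # controllability matrix [B  AB  A^2 B  ...  A^{n-1} B]
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

def obsv(A, C):
    # observability matrix [C; CA; CA^2; ...; CA^{n-1}]
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
```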
The state space representation of the system is not unique; given the transfer
function there are an infinite number of possible representations related by sim-
ilarity transformations. In the interest of reducing the system to a minimal real-
ization, we wish to rearrange the system modes via a similarity transformation
such that the modes that are controllable and observable, controllable and un-
observable, uncontrollable and observable, and uncontrollable and unobservable
are partitioned. This is accomplished through a Kalman decomposition, where
the system coordinates are transformed x → z, (A, B, C, D) → (Â, B̂, Ĉ, D̂),
with a non-singular matrix T̂:

z = T̂⁻¹x
Â = T̂⁻¹AT̂
B̂ = T̂⁻¹B
Ĉ = CT̂
D̂ = D

where T̂ is constructed as

T̂ = [ T1  T2  T3  T4 ]

where the columns of [T1 T2] form a basis of R(C), the columns of T2 form
a basis of R(C) ∩ N(O), the columns of [T2 T4] form a basis of N(O), and
the columns of T3 are chosen so that T̂ is non-singular. By undergoing this
similarity transform, the system takes on the form

[ ẋco ]   [ Aco  0    A13  0   ] [ xco ]   [ Bco ]
[ ẋcō ] = [ A21  Acō  A23  A24 ] [ xcō ] + [ Bcō ] u
[ ẋc̄o ]   [ 0    0    Ac̄o  0   ] [ xc̄o ]   [ 0   ]
[ ẋc̄ō ]   [ 0    0    A43  Ac̄ō ] [ xc̄ō ]   [ 0   ]

y = [ Cco  0  Cc̄o  0 ] [ xco  xcō  xc̄o  xc̄ō ]ᵀ + Du

where the state is partitioned such that xco is controllable and observable, xcō
is controllable and unobservable, xc̄o is uncontrollable and observable, and xc̄ō
is uncontrollable and unobservable (an overbar denotes the absent property).
By utilizing the Kalman decomposition, the system model is reduced to a
minimal realization by truncating the system to the controllable and observable
partitions
ż = Aco z + Bco u
y = Cco z + Du
This is done without changing the input-output properties of the system.
2.4 Model Reduction with Balanced Realization
While uncontrollable and unobservable states have been removed, the dimen-
sions of the system can still potentially be further reduced with relatively little
loss of input-output properties. This can be done if low-gain modes can be
identified and removed from the system.
To accomplish this, we look at the infinite horizon controllability and ob-
servability grammians of the system, defined as
Gc(∞) = ∫₀^∞ e^(Aτ) BB* e^(A*τ) dτ

Go(∞) = ∫₀^∞ e^(A*τ) C*C e^(Aτ) dτ
Because Re(λi(A)) < 0 for i = 1, 2, ···, n, the MIMO system is asymptotically
stable and the infinite horizon controllability and observability grammians can
be obtained from the algebraic Lyapunov equations

A Gc(∞) + Gc(∞) A* = −BB*

A* Go(∞) + Go(∞) A = −C*C
The system is controllable if and only if the controllability grammian is positive
definite and is observable if and only if the observability grammian is positive
definite. With the uncontrollable and unobservable states previously removed,
we have Go(∞) > 0 and Gc(∞) > 0, but the grammians may still be
ill-conditioned. If so, the state directions associated with the small eigenvalues
contribute little energy to the output and are good candidates for removal. The
challenge becomes finding how to transform the system such that modes that are both
lightly observable and lightly controllable can be identified for removal.
Referring to the infinite horizon grammians as Gc and Go going forward,
the Hankel singular values are defined as the square roots of the eigenvalues of
the product of the grammians

σH,k ≡ λk(GoGc)^(1/2),  k = 1, 2, ···, n
For the purpose of identifying and removing modes that are both lightly ob-
servable and lightly controllable, it is desired to find a similarity transform T
such that a “balanced” realization is obtained where the grammians are equal
and diagonal

Gc = Go = Σ

Σ ≡ diag(σH,1, σH,2, ···, σH,n)

where the Hankel singular values have been ordered as

σH,1 ≥ σH,2 ≥ ··· ≥ σH,n ≥ 0
If the system can be transformed so that this is the case, the system modes will
be as controllable as they are observable and the system can be reduced while
altering the input-output properties in a quantifiable manner.
When undergoing a change of coordinates x = Tz, the grammians undergo
the transformation
Go(t) → T* Go(t) T

Gc(t) → T⁻¹ Gc(t) T⁻*

While a congruence transform will preserve the sign definiteness of the grammians,
in general it will not preserve their eigenvalues. However, the eigenvalues
of the product of the grammians, λ(GoGc), are invariant.
Assuming the system is minimal,

λ(GoGc) = λ(Gc^(1/2) Go Gc^(1/2))

Letting U ∈ C^(n×n) be a unitary matrix that diagonalizes Gc^(1/2) Go Gc^(1/2),

U* Gc^(1/2) Go Gc^(1/2) U = Σ²

then using the coordinate transformation

T = Gc^(1/2) U Σ^(−1/2)

will achieve the goal of balanced coordinates

Go → T* Go T = Σ

Gc → T⁻¹ Gc T⁻* = Σ
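The procedure above (solve the two Lyapunov equations, diagonalize Gc^(1/2) Go Gc^(1/2), and form T = Gc^(1/2) U Σ^(−1/2)) can be sketched with scipy. Here balance is our name for the sketch, and it assumes A is asymptotically stable and the realization minimal:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, sqrtm

def balance(A, B, C):
    # infinite-horizon grammians from the algebraic Lyapunov equations
    Gc = solve_continuous_lyapunov(A, -B @ B.T)
    Go = solve_continuous_lyapunov(A.T, -C.T @ C)
    R = np.real(sqrtm(Gc))                 # Gc^{1/2}
    lam, U = np.linalg.eigh(R @ Go @ R)    # eigenvalues are sigma^2
    order = np.argsort(lam)[::-1]          # sort descending
    lam, U = lam[order], U[:, order]
    sigma = np.sqrt(np.maximum(lam, 0.0))  # Hankel singular values
    T = R @ U @ np.diag(sigma ** -0.5)     # balancing transformation
    Tinv = np.linalg.inv(T)
    return Tinv @ A @ T, Tinv @ B, C @ T, sigma
```

In the balanced coordinates both grammians equal diag(σH,1, ···, σH,n), so the least important states are simply the trailing ones.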
Figure 5: Hankel Singular Values

Figure 5 shows the Hankel singular values for the system. It shows that, in
a balanced realization, the system has some modes that are both significantly
less controllable and less observable than others. In particular, there is a large
drop between σH,4 and σH,5. This suggests that the system can be further
reduced, perhaps to a four-state system associated with the four largest Hankel
singular values, with relatively little loss in the input-output properties of the
system.
Based on this analysis, it is desired to remove from the system matrices,
expressed in a balanced realization, the blocks corresponding to the smaller
Hankel singular values. The system in balanced coordinates can be partitioned
as
[ ż1 ]   [ A11  A12 ] [ z1 ]   [ B1 ]
[ ż2 ] = [ A21  A22 ] [ z2 ] + [ B2 ] u

y = [ C1  C2 ] [ z1  z2 ]ᵀ
where A11 ∈ C^(r×r), A12 ∈ C^(r×(n−r)), etc. If σH,r > σH,r+1, then A11 and A22
are asymptotically stable and the system can be reduced to

ż = A11 z + B1 u
y = C1 z
where the frequency response of this reduced system will approximate that of
the original one with a bound on the frequency response error between the two
systems
|H(jω) − Hr(jω)| ≤ 2(σH,r+1 + σH,r+2 + ··· + σH,n)  ∀ω ∈ R
In this case, it appears that selecting r = 4 and discarding the states associated
with σH,5, σH,6, and σH,7 will result in a relatively small bound on the frequency
response error.
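Once the realization is balanced, the reduction itself is just a truncation. A short sketch (truncate_balanced is our name) that also reports the error bound above:

```python
import numpy as np

def truncate_balanced(Ab, Bb, Cb, sigma, r):
    # keep the first r balanced states; the trailing states are the least
    # controllable/observable directions
    Ar, Br, Cr = Ab[:r, :r], Bb[:r, :], Cb[:, :r]
    bound = 2.0 * np.sum(sigma[r:])  # bound on the frequency response error
    return Ar, Br, Cr, bound
```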
Figures 6 - 9 compare the frequency response of the system model reduced
to different state sizes. Indeed, reducing the system by discarding the states
corresponding to the smaller Hankel singular values has relatively little effect
in terms of frequency response. The frequency responses of the seven state,
six state, five state, and four state systems are nearly indistinguishable from
one another. Reducing the system further to a three state or two state system
requires discarding states corresponding to much larger Hankel singular values.
This results in much larger deviations in the frequency response behavior as
seen in the figures.
Figure 6: Reduced Models of Varying Size 11 Channel
Figure 7: Reduced Models of Varying Size 12 Channel
Figure 8: Reduced Models of Varying Size 21 Channel
Figure 9: Reduced Models of Varying Size 22 Channel

(each figure shows magnitude, mag(V/V), and phase, phase(deg), versus frequency, rad/sec, for the 2- through 7-state reduced models)
Based on this analysis, it is determined that the system will be reduced to a
four state model. The error in frequency response between this reduced model
and the full-order model will be limited by the relatively small values of the
Hankel singular values corresponding to the discarded states, σH,5, σH,6, and
σH,7.
Figures 10 - 13 show the errors in frequency response that result from re-
ducing the system to a four state model from the original full-order model for
each of the channels. Figure 14 shows the total combined error for the MIMO
system. The four state model approximates the frequency response of the full
order model with good accuracy; the cost of reducing the system beyond a
minimal realization is relatively small.
Figure 10: Error Full Order Model vs Reduced Model 11 Channel
Figure 11: Error Full Order Model vs Reduced Model 12 Channel
Figure 12: Error Full Order Model vs Reduced Model 21 Channel
Figure 13: Error Full Order Model vs Reduced Model 22 Channel
Figure 14: Error Full Order Model vs Reduced Model MIMO System

(magnitude and phase errors versus frequency, rad/sec)
2.5 Final Reduced Model
The final result is a mathematical model of a four-state MIMO system that
closely models the actual system in terms of frequency response (Figures 15
- 18 ). The four state model maintains the high-pass filter, low-pass filter, and
resonance characteristics seen in the measured data. For the reduced system
λ(A) = {−7.599, −7.601, −3.645 ± j72.8042}, allowing us to confirm that the
reduced model maintains asymptotic stability. Moreover we see that
λ1,2 ≈ −ωl = −ωh
λ3,4 ≈ −ζωn ± jωn√(1 − ζ²)
from the original low-pass filter, high-pass filter, and oscillator transfer functions
used when fitting the data in the frequency domain, with very little loss in
accuracy. We also see that the system is indeed minimal, as we have λ(Gc) =
λ(Go) = {5.299, 4.825, 0.888, 0.423}.
Figure 15: Reduced Model 11 Channel
Figure 16: Reduced Model 12 Channel
Figure 17: Reduced Model 21 Channel
Figure 18: Reduced Model 22 Channel

(each figure shows magnitude, mag(V/V), and phase, phase(deg), versus frequency, rad/sec, comparing the measured data to the 4-state reduced model)
3 Model Estimation from Nonlinear Least-Squares Solution
3.1 Least-Squares Problem
3.1.1 Linear Least-Squares
It is often the case that we wish to estimate the values of a set of parameters,
x, that is linearly related to a set of measurements, y,
[ y1 ]   [ A11  A12  ···  A1n ] [ x1 ]
[ y2 ] = [ A21  A22  ···  A2n ] [ x2 ]
[ ⋮  ]   [ ⋮    ⋮          ⋮  ] [ ⋮  ]
[ ym ]   [ Am1  Am2  ···  Amn ] [ xn ]
but the system is overdetermined, i.e. there are m linear equations in n unknown
coefficients with m > n. In general, y = Ax has no solution because y ∉ R(A).
Because there is no solution, we wish to find an estimate that best models the
measured data according to some criterion. Defining the error between estimate
and measurement as
e = y − Ax
we see that we cannot obtain zero error, so we must find a way to minimize it
in some manner. The least-squares method results from using a cost function
that is the sum of the squared errors
J = ‖y − Ax‖₂²

The least-squares solution is then a matter of finding an estimate where this
cost function is minimized

xLS = arg min_x ‖y − Ax‖₂²
The linear least-squares problem has a unique, closed-form solution that can
be found by applying a necessary condition that the cost function, J, has a
stationary value at the optimum point. This gives us n gradient equations that
must be zero,

∂J/∂xi = 0,  i = 1, 2, ···, n

which are equivalent to the normal equations AᵀAxLS = Aᵀy; their solution
yields the vector xLS of the optimal parameter values.
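A short numpy illustration of the linear case, using synthetic data rather than anything from the report:

```python
import numpy as np

# overdetermined linear system: m = 6 measurements, n = 2 parameters
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 2))
x_true = np.array([1.0, -2.0])
y = A @ x_true

# least-squares estimate via numpy (SVD based)
x_ls, *_ = np.linalg.lstsq(A, y, rcond=None)

# equivalent normal-equations solution (A^T A) x = A^T y
x_ne = np.linalg.solve(A.T @ A, A.T @ y)
```

With noise-free data both solutions recover the true parameters exactly; with noisy data they minimize the sum of squared residuals.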
3.1.2 Extension to Nonlinear Least-Squares
Practically, it is often the case that we have a nonlinear relationship between
our set of measurements, y, and parameters, x,
y = f(x)
Thus, when the gradient vector of the cost function is set to zero, the resulting
equations become functions of both the independent variable and the parameters.
The gradient equations do not have a closed-form solution. Instead, we
approximate the model with a linear one, guess initial parameter values, and
refine the parameters iteratively. Finding the solution to a nonlinear
least-squares problem is thus much more difficult: there is no closed-form
solution, the problem may have multiple local minima, making it difficult to
find the global minimum, and convergence is not guaranteed.
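The linearize-and-refine idea can be illustrated with a Gauss-Newton sketch on a toy one-parameter exponential model y = e^(at); the model and data here are invented for illustration only:

```python
import numpy as np

# toy nonlinear model y = exp(a t); fit the scalar parameter a
t = np.linspace(0.0, 1.0, 20)
a_true = -1.5
y = np.exp(a_true * t)

a = 0.0                                         # initial guess
for _ in range(50):
    r = y - np.exp(a * t)                       # residual at the current estimate
    J = (t * np.exp(a * t)).reshape(-1, 1)      # Jacobian d(model)/da
    da, *_ = np.linalg.lstsq(J, r, rcond=None)  # linear least-squares step
    a += da[0]
```

Each pass solves a linear least-squares problem in the correction da; with a poor initial guess the iteration can stall in a local minimum or fail to converge, which is exactly the difficulty noted above.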
3.2 Application to Transfer Function Identification
We now consider estimation of a linear system model on the basis of frequency
domain data, utilizing a least-squares minimization between the frequency
response of the estimated transfer function model and the measured data.
3.2.1 Model Representation
It is desired to evaluate the coefficients, P = [a0, a1, ···, an, b1, b2, ···, bd]ᵀ, of
the rational transfer function model expressed as

H(s, P) = N(s, P)/D(s, P) = (Σ_{k=0}^{n} ak s^k) / (1 + Σ_{k=1}^{d} bk s^k)
such that it best models a discrete set of measured frequency response data
Hm(sk), k = 1, 2, ···, F. We wish to evaluate P by minimizing some cost
criterion related to the error between these transfer functions at all of the
experimental points in a least-squares sense.
3.2.2 Unweighted Nonlinear Least-Squares Problem
Defining the error at each measured point k as

ek = Hm(sk) − H(sk, P)

the unweighted least squares problem is

arg min_P Σ_{k=1}^{F} |Hm(sk) − H(sk, P)|² = arg min_P Σ_{k=1}^{F} |(Hm(sk)D(sk, P) − N(sk, P)) / D(sk, P)|²
We see that this is a nonlinear least-squares problem, making it difficult to
estimate the system parameters in an accurate and efficient manner. If we
had a priori knowledge of the system poles, the parameters in the denominator
would be known and the problem would reduce to a linear one. In practice,
this is often not a realistic scenario.
3.2.3 Levy’s Linearized Estimator
An early approximation of the nonlinear least-squares problem was obtained
by using the denominator, D(sk, P), as a weighting function [3]. The weighted
error at each point becomes

ek D(sk, P) = Hm(sk)D(sk, P) − N(sk, P)

and the least squares problem becomes

arg min_P Σ_{k=1}^{F} |Hm(sk)D(sk, P) − N(sk, P)|²
This reduces the difficult nonlinear least-squares problem to a linear
least-squares problem, from which a closed-form solution can be obtained. By
partially differentiating the new weighted cost function with respect to each of
the unknown polynomial coefficients and equating to zero, a set of linear
equations is obtained from which P is solved.
We see that this formulation reduces to the unweighted least-squares problem
when the weight 1/|D(sk, P)|² equals one across all frequencies. While this choice
of weighting function linearizes the problem, it clearly biases the least-squares
solution. In particular, if the frequency values in the problem span several
decades, the lower frequency values will have little influence and a good fit
cannot be obtained at these lower frequencies.
3.2.4 Sanathanan-Koerner Iteration
An iterative approach to minimize these issues is to modify the weighting function
to D(sk, PL)/D(sk, PL−1), where the subscript L is the iteration number [5]. The
weighted error at each point becomes

ek D(sk, PL)/D(sk, PL−1) = (Hm(sk)D(sk, PL) − N(sk, PL)) / D(sk, PL−1)

and the least squares problem at iteration L becomes

arg min_{PL} Σ_{k=1}^{F} |(Hm(sk)D(sk, PL) − N(sk, PL)) / D(sk, PL−1)|²
By using this modified weighting function, we see that the problem still reduces
to a linear one at each iteration because the denominator is not a function of
PL, but with the biasing issues now alleviated.
Several extensions to the Sanathanan-Koerner Iteration can be implemented
to aid in finding a solution that converges, obtaining smaller approximation er-
rors, and obtaining a stable model. In particular, unstable poles can be mirrored
to the left hand side of the complex plane in between iterations, and the cost
function can be modified by raising the denominator to a power, r, such that it
becomes
arg min_{PL} Σ_{k=1}^{F} |(Hm(sk)D(sk, PL) − N(sk, PL)) / D(sk, PL−1)^r|²,  r ∈ [0, ∞)
We see that when r = 0 this reduces to the problem from Levy's linear estimator,
and when r = 1 it reduces to the Sanathanan-Koerner iteration. Using powers
other than one can potentially reduce approximation errors, and relaxing the
power to r < 1 can help with obtaining a solution that converges.
By setting ∂J/∂PL equal to zero, the same set of linear equations obtained from
the formulation of Levy's linear estimator is obtained, with the only modification
that there is now an extra weighting term 1/|D(sk, PL−1)^r|² in all equations. This
yields the modified set of linear equations of the form
MP = C
M = [ λ0    0   −λ2  ···   T1   S2  −T3  ···
      0    λ2    0   ···  −S2   T3   S4  ···
      λ2    0   −λ4  ···   T3   S4  −T5  ···
      ⋮     ⋮    ⋮          ⋮    ⋮    ⋮
      T1  −S2  −T3   ···   U2    0  −U4  ···
      S2   T3  −S4   ···    0   U4    0  ···
      T3  −S4  −T5   ···   U4    0  −U6  ···
      ⋮     ⋮    ⋮          ⋮    ⋮    ⋮   ]

P = [ a0  a1  a2  ···  b1  b2  b3  ··· ]ᵀ

C = [ S0  T1  S2  ···  0  U2  0  ··· ]ᵀ
where
$$\lambda_i = \sum_{k=1}^{F} \frac{\omega_k^i}{|D(s_k, P_{L-1})^r|^2} \qquad
S_i = \sum_{k=1}^{F} \frac{\omega_k^i \,\mathrm{Re}(H_m(s_k))}{|D(s_k, P_{L-1})^r|^2}$$
$$T_i = \sum_{k=1}^{F} \frac{\omega_k^i \,\mathrm{Im}(H_m(s_k))}{|D(s_k, P_{L-1})^r|^2} \qquad
U_i = \sum_{k=1}^{F} \frac{\omega_k^i \left(\mathrm{Re}^2(H_m(s_k)) + \mathrm{Im}^2(H_m(s_k))\right)}{|D(s_k, P_{L-1})^r|^2}$$
The coefficients at each iteration, $L$, are evaluated by solving for the parameters, $P$, in the linearized equations. As these parameters are not known initially, an initial choice of $D(s_k, P_0) = 1$ is assumed for the first iteration. Subsequent iterations tend to converge rapidly, and the process is repeated until some convergence criterion is met, where
$$J_L \to J_\infty, \qquad P_L \to P_\infty \qquad (L \to \infty)$$
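As an illustrative sketch of the iteration (not the code used for the results in this report; the function name `sk_fit` and the normalization $b_0 = 1$ are assumptions), each pass solves the reweighted linear problem and updates the denominator weight:

```python
import numpy as np

def sk_fit(s, H, n, d, iters=20, r=1.0):
    """Sanathanan-Koerner iteration: fit H(s) ~ N(s)/D(s) with
    deg(N) = n and deg(D) = d, normalizing b0 = 1. Each pass solves
    the Levy-style linear problem reweighted by 1/|D(s, P_{L-1})|^r."""
    Sn = np.vander(s, n + 1, increasing=True)   # columns s^0 .. s^n
    Sd = np.vander(s, d + 1, increasing=True)   # columns s^0 .. s^d
    D_prev = np.ones(len(s), dtype=complex)
    for _ in range(iters):
        w = 1.0 / np.abs(D_prev) ** r
        # Residual H*D - N = 0 rearranged with b0 = 1:
        #   Sn @ a - (H * [s^1 .. s^d]) @ [b1 .. bd] = H
        A = np.hstack([Sn, -(H[:, None] * Sd[:, 1:])]) * w[:, None]
        rhs = H * w
        # Solve the complex least-squares problem as a stacked real one
        A_ri = np.vstack([A.real, A.imag])
        rhs_ri = np.concatenate([rhs.real, rhs.imag])
        p, *_ = np.linalg.lstsq(A_ri, rhs_ri, rcond=None)
        a, b = p[: n + 1], np.concatenate([[1.0], p[n + 1:]])
        D_prev = Sd @ b
    return a, b
```

With noise-free rational data the first pass already admits a zero-residual solution, so the true coefficients are recovered immediately; with noisy data, the reweighting is what removes the low-frequency bias of the plain linearized estimator.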
3.2.5 Extension to MIMO System Model Estimation
The same least-squares algorithm can be extended to evaluate the entire MIMO
system simultaneously with a single set of linear equations. All transfer func-
tions must share the same denominator and the set of linear equations for each
individual SISO system is then stacked together into one MIMO set of equations.
The equations become
MP = C
$$M = \begin{bmatrix}
\lambda_{0,11} & 0 & \cdots & & & & & & & & & & T_{1,11} & S_{2,11} & \cdots \\
0 & \lambda_{2,11} & \cdots & & & & & & & & & & -S_{2,11} & T_{3,11} & \cdots \\
\vdots & \vdots & & & & & & & & & & & \vdots & \vdots & \\
& & & \lambda_{0,12} & 0 & \cdots & & & & & & & T_{1,12} & S_{2,12} & \cdots \\
& & & 0 & \lambda_{2,12} & \cdots & & & & & & & -S_{2,12} & T_{3,12} & \cdots \\
& & & \vdots & \vdots & & & & & & & & \vdots & \vdots & \\
& & & & & & \lambda_{0,21} & 0 & \cdots & & & & T_{1,21} & S_{2,21} & \cdots \\
& & & & & & 0 & \lambda_{2,21} & \cdots & & & & -S_{2,21} & T_{3,21} & \cdots \\
& & & & & & \vdots & \vdots & & & & & \vdots & \vdots & \\
& & & & & & & & & \lambda_{0,22} & 0 & \cdots & T_{1,22} & S_{2,22} & \cdots \\
& & & & & & & & & 0 & \lambda_{2,22} & \cdots & -S_{2,22} & T_{3,22} & \cdots \\
& & & & & & & & & \vdots & \vdots & & \vdots & \vdots & \\
T_{1,11} & -S_{2,11} & \cdots & T_{1,12} & -S_{2,12} & \cdots & T_{1,21} & -S_{2,21} & \cdots & T_{1,22} & -S_{2,22} & \cdots & U_2 & 0 & \cdots \\
S_{2,11} & T_{3,11} & \cdots & S_{2,12} & T_{3,12} & \cdots & S_{2,21} & T_{3,21} & \cdots & S_{2,22} & T_{3,22} & \cdots & 0 & U_4 & \cdots \\
\vdots & \vdots & & \vdots & \vdots & & \vdots & \vdots & & \vdots & \vdots & & \vdots & \vdots &
\end{bmatrix}$$
(blank entries are zero, and each channel block follows the same pattern as in the SISO case)
$$P = \begin{bmatrix} a_{0,11} \\ a_{1,11} \\ \vdots \\ a_{0,12} \\ a_{1,12} \\ \vdots \\ a_{0,21} \\ a_{1,21} \\ \vdots \\ a_{0,22} \\ a_{1,22} \\ \vdots \\ b_1 \\ b_2 \\ \vdots \end{bmatrix}$$
$$C = \begin{bmatrix} S_{0,11} \\ T_{1,11} \\ \vdots \\ S_{0,12} \\ T_{1,12} \\ \vdots \\ S_{0,21} \\ T_{1,21} \\ \vdots \\ S_{0,22} \\ T_{1,22} \\ \vdots \\ 0 \\ U_2 \\ \vdots \end{bmatrix}$$
where the terms λi,q, Si,q, and Ti,q now apply specifically to channel q, while
the terms Ui now sum together across all channels. The coefficient parameters,
P, are then solved for in the same manner as was shown previously for the SISO
transfer functions, with the coefficients in the numerator a0,q, a1,q, a2,q, · · · now
unique to channel q and the coefficients in the denominator b1, b2, b3, · · · now
shared by each transfer function.
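A hedged sketch of this stacking for two channels that share a denominator (the function name `sk_common_den` and the $b_0 = 1$ normalization are assumptions; only the common-denominator structure described above is reproduced):

```python
import numpy as np

def sk_common_den(s, Hs, ns, d, iters=20, r=1.0):
    """One Sanathanan-Koerner pass per iteration over several channels
    sharing a denominator: each channel contributes its own numerator
    columns, while the denominator columns are common to all rows."""
    Sd = np.vander(s, d + 1, increasing=True)
    na = [n + 1 for n in ns]                    # numerator sizes per channel
    D_prev = np.ones(len(s), dtype=complex)
    for _ in range(iters):
        w = 1.0 / np.abs(D_prev) ** r
        rows, rhs = [], []
        for q, (H, n) in enumerate(zip(Hs, ns)):
            Sn = np.vander(s, n + 1, increasing=True)
            blk = np.zeros((len(s), sum(na)), dtype=complex)
            blk[:, sum(na[:q]):sum(na[:q + 1])] = Sn   # channel q's a-columns
            rows.append(np.hstack([blk, -(H[:, None] * Sd[:, 1:])]) * w[:, None])
            rhs.append(H * w)
        A = np.vstack(rows)
        c = np.concatenate(rhs)
        p, *_ = np.linalg.lstsq(np.vstack([A.real, A.imag]),
                                np.concatenate([c.real, c.imag]), rcond=None)
        b = np.concatenate([[1.0], p[sum(na):]])
        D_prev = Sd @ b
    a_list, off = [], 0
    for k in na:
        a_list.append(p[off:off + k])
        off += k
    return a_list, b
```

The shared denominator coefficients appear in the final block of the stacked parameter vector, exactly as in the $P$ vector above.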
3.2.6 Polynomial Order and Weighting Function Selection
Because this is an iterative algorithm approximating a nonlinear problem, there
is no guarantee that it will converge to a least-squares solution. It is also
possible for the algorithm to converge to a local minimum rather than the
desired global minimum. While these are possible outcomes, the method does
allow for freedom of choice regarding the orders of the polynomials, n and d, in
the transfer function being solved for as well as the weighting power, r, used.
By appropriately selecting and tuning these parameters, it is possible to steer
towards desirable results.
From examination of the frequency response of the different channels, it is
believed that the 11 channel has an excess of four poles over the number of
zeros, the 12 and 21 channels have an excess of three poles over the number of
zeros, and the 22 channel has an excess of two poles over the number of zeros.
With this in mind, the order of the numerator and denominator polynomials
in the transfer functions were selected through experimentation until adequate
solutions were achieved. A good fifth-order model was obtained with d = 5,
n11 = 1, n12 = 2, n21 = 2, n22 = 3, and r = 1.1. A fourth-order model, with larger approximation errors, was also obtained with d = 4, n11 = 0, n12 = 1, n21 = 1, n22 = 2, and r = 0.83.
3.3 Evolution of Iterations Towards Convergence
The least-squares algorithm is now applied to the entire MIMO system using the
fifth-order model and the refinement in error minimization with each iteration
is examined. Figure 19 shows the convergence process of the weighted errors
in the cost function as the entire system is being solved simultaneously; there
is naturally a single point of convergence across the system. The errors dive
down initially, level out, and then decrease again, before finally settling into the
minimum obtained from the final least-squares solution. Figures 20 - 23 show
the evolution of the first few iterations for each of the channels as they rapidly
converge to the frequency response seen in the empirical data. In this case, the
algorithm reaches the minimum at iteration L = 14. At this point JL ≈ J∞,
PL ≈ P∞.
[Figure: cost J (log scale) vs. iteration number, for the total cost and the 11, 12, 21, and 22 channels; fifth-order model, r = 1.1.]
Figure 19: Convergence of Least Squares Solution Error
[Figure: magnitude (V/V) and phase (deg) vs. frequency (rad/sec), 11 channel, comparing measured data with iterations 1, 3, 5, and 7.]
Figure 20: Evolution of Least Squares Iterations 11 Channel
[Figure: magnitude (V/V) and phase (deg) vs. frequency (rad/sec), 12 channel, comparing measured data with iterations 1, 3, 5, and 7.]
Figure 21: Evolution of Least Squares Iterations 12 Channel
[Figure: magnitude (V/V) and phase (deg) vs. frequency (rad/sec), 21 channel, comparing measured data with iterations 1, 3, 5, and 7.]
Figure 22: Evolution of Least Squares Iterations 21 Channel
[Figure: magnitude (V/V) and phase (deg) vs. frequency (rad/sec), 22 channel, comparing measured data with iterations 1, 3, 5, and 7.]
Figure 23: Evolution of Least Squares Iterations 22 Channel
3.4 Final Converged Least-Squares Solution Models
Two different models, a fourth-order MIMO transfer function and a fifth-order
MIMO transfer function, are now considered in their final converged form. Figures 24 - 27 compare the frequency response of the models with the measured data for each channel. The fourth-order transfer function captures some properties of the measured data but falls short of the fifth-order one, which models the measured data much more accurately; the cost of the added order appears to be a worthwhile trade-off in this case. If there is a particular shortcoming with the fifth-order model, it is that it has less damping at the natural frequency.
The asymptotic stability of both systems is confirmed by observing all poles
in the left-hand side of the complex plane (Figures 28 and 29). Moreover, the
dynamics of the actual measured system can be seen explicitly modelled quite
accurately in the structure of the fifth-order transfer function pole-zero map.
The complex conjugate pole pair models the resonant frequency. The zeros of
the 11 and 12 channels are nearly identical, with the 12 channel having an extra
zero that somewhat negates a pole contributing to extra low-pass filter features
in the 11 channel. Similarly, the zeros of the 21 channel and the 22 channel
are nearly identical, with the 22 channel having an extra zero that somewhat
negates a pole contributing to extra low-pass filter features in the 21 channel.
The pole-zero map of the fourth-order transfer function very roughly displays a similar phenomenon, but with the results much more smeared.
[Figure: magnitude (V/V) and phase (deg) vs. frequency (rad/sec), 11 channel, comparing measured data with the fourth-order (r = 0.83) and fifth-order (r = 1.1) models.]
Figure 24: Least Squares Solutions 11 Channel
[Figure: magnitude (V/V) and phase (deg) vs. frequency (rad/sec), 12 channel, comparing measured data with the fourth-order (r = 0.83) and fifth-order (r = 1.1) models.]
Figure 25: Least Squares Solutions 12 Channel
[Figure: magnitude (V/V) and phase (deg) vs. frequency (rad/sec), 21 channel, comparing measured data with the fourth-order (r = 0.83) and fifth-order (r = 1.1) models.]
Figure 26: Least Squares Solutions 21 Channel
[Figure: magnitude (V/V) and phase (deg) vs. frequency (rad/sec), 22 channel, comparing measured data with the fourth-order (r = 0.83) and fifth-order (r = 1.1) models.]
Figure 27: Least Squares Solutions 22 Channel
[Figure: pole-zero map, real vs. imaginary axis (seconds⁻¹), showing the zeros of each channel and the shared poles.]
Figure 28: Fifth-Order Model Pole Zero Map
[Figure: pole-zero map, real vs. imaginary axis (seconds⁻¹), showing the zeros of each channel and the shared poles.]
Figure 29: Fourth-Order Model Pole Zero Map
4 Subspace Realization from Impulse Response
We now consider system identification in the time domain given the response of a dynamic system to a unit input pulse in discrete time.
4.1 Discrete-Time State Space Model
A discrete-time model of a system in state space form can be represented by a
set of linear difference equations
xk+1 = Axk + Buk
yk = Cxk + Duk
where the integer $k$ is discrete time, $x_k$ is the state vector, $u_k$ is the input vector, and $y_k$ is the output vector. In general, $A \in \mathbb{C}^{n \times n}$, $B \in \mathbb{C}^{n \times m}$, $C \in \mathbb{C}^{p \times n}$, and $D \in \mathbb{C}^{p \times m}$ for an $m$-input/$p$-output system. Here we will consider a one-input/three-output system, $m = 1$ and $p = 3$. The objective is to identify the time-invariant system matrices $(A, B, C, D)$.
4.2 Markov Parameters
The output of the system, $y_k$, can be represented as a sequence of weighted inputs. Assuming a zero initial state, $x_0 = 0$, the output can be obtained through repeated substitution of the state-space equations as
$$y_k = \sum_{i=1}^{k} CA^{i-1}Bu_{k-i} + Du_k$$
This sequence depends only on inputs; it is independent of any state measurements.

The Markov parameter sequence for a state space model is obtained from the impulse response of the system. Given the unit input pulse
$$u_k = \begin{cases} 1 & k = 0 \\ 0 & k = 1, 2, 3, \cdots \end{cases}$$
it follows that the response of the state space model, with an assumed zero initial state, is
$$h_k = \begin{cases} D & k = 0 \\ CA^{k-1}B & k = 1, 2, 3, \cdots \end{cases}$$
where $h_k \in \mathbb{R}^{p \times m}$. The impulse response terms $h_k$ for $k = 0, 1, 2, \cdots$ are the Markov parameters of the state space model. Markov parameters are invariant to state transformations. They are also unique for a given system because the parameters are the pulse response of the system. We can see that $(A, B, C, D)$ is a realization of $\{h_k\}_{k=0}^{\infty}$.
4.3 Measured Impulse Response
The Markov parameters can be constructed from the measured impulse response
without explicit knowledge of the system matrices. In this case, we have a one-input/three-output system, set up according to the diagram in Figure 30. An 8 V impulse is input to the system at t = 1 s, and the three system outputs, y1, y2, and y3, are sampled at 100 Hz.
[Figure: block diagram, u → High-Pass Filter → y1 → Low-Pass Filter → y2 → Oscillator → y3.]
Figure 30: System Block Diagram
Figures 31 - 33 show the measured impulse responses for the three outputs.
y1 measures the input u having passed through the high-pass filter; y2 measures
the input u having passed through the high-pass filter and low-pass filter; y3
measures the input u having passed through the high-pass filter, low-pass filter,
and oscillator. For the purpose of constructing a unit impulse response sequence in the manner previously described, the outputs will be manipulated: they will be calibrated by subtracting out the averaged measurements prior to the impulse and normalized to the response of a unit impulse.
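A minimal sketch of this calibration step (the helper name `calibrate` and the exact averaging window are assumptions; the 8 V amplitude and 1 s pulse time come from the setup described above):

```python
import numpy as np

def calibrate(y, t, t_pulse=1.0, amplitude=8.0):
    """Remove the pre-pulse DC offset (averaged over samples before the
    pulse) and rescale the measured response to that of a unit pulse."""
    bias = y[t < t_pulse].mean()
    return (y - bias) / amplitude
```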
[Figure: measured output 1 (V) vs. time (s).]
Figure 31: Measured Impulse Response Output 1
[Figure: measured output 2 (V) vs. time (s).]
Figure 32: Measured Impulse Response Output 2
[Figure: measured output 3 (V) vs. time (s).]
Figure 33: Measured Impulse Response Output 3
4.4 Hankel Matrix
The Hankel matrices of the system are now constructed from the Markov parameters of the measured impulse response, $h_k$, according to
$$H(k-1) = \begin{bmatrix}
h_k & h_{k+1} & h_{k+2} & \cdots & h_{k+J-1} \\
h_{k+1} & h_{k+2} & h_{k+3} & \cdots & h_{k+J} \\
h_{k+2} & h_{k+3} & h_{k+4} & \cdots & h_{k+J+1} \\
\vdots & \vdots & \vdots & & \vdots \\
h_{k+L-1} & h_{k+L} & h_{k+L+1} & \cdots & h_{k+L+J-2}
\end{bmatrix}$$
where $H(k-1) \in \mathbb{R}^{pL \times mJ}$. Of particular interest are the Hankel matrices for the cases where $k = 1$ and $k = 2$:
$$H(0) = \begin{bmatrix}
h_1 & h_2 & h_3 & \cdots & h_J \\
h_2 & h_3 & h_4 & \cdots & h_{J+1} \\
h_3 & h_4 & h_5 & \cdots & h_{J+2} \\
\vdots & \vdots & \vdots & & \vdots \\
h_L & h_{L+1} & h_{L+2} & \cdots & h_{L+J-1}
\end{bmatrix} \qquad
H(1) = \begin{bmatrix}
h_2 & h_3 & h_4 & \cdots & h_{J+1} \\
h_3 & h_4 & h_5 & \cdots & h_{J+2} \\
h_4 & h_5 & h_6 & \cdots & h_{J+3} \\
\vdots & \vdots & \vdots & & \vdots \\
h_{L+1} & h_{L+2} & h_{L+3} & \cdots & h_{L+J}
\end{bmatrix}$$
We determine how many data points are necessary in the construction of the Hankel matrices in order to adequately capture the characteristics of the system. If the minimal dimension of the system is not immediately apparent, a large number of Markov parameters is used to ensure that the dimension is not underestimated. If it is known or guessed that the system is of dimension no larger than some $n$, then a minimum of $2n$ elements of the output sequence is required. While nine seconds of measurement from the time of impulse at 100 Hz were collected in this case, we will assume less measured data will suffice and instead consider $J = 300$, $L = 300$.
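The block-Hankel construction itself is mechanical; a hedged sketch (the helper name `block_hankel` is an assumption) that builds $H(0)$ and $H(1)$ from a list of Markov parameters:

```python
import numpy as np

def block_hankel(h, L, J, shift=0):
    """Block Hankel matrix H(shift) built from Markov parameters
    h[1], h[2], ... (h[0] = D is not used); block (i, j) is
    h[i + j + 1 + shift], giving an (p*L) x (m*J) matrix."""
    return np.block([[np.atleast_2d(h[i + j + 1 + shift]) for j in range(J)]
                     for i in range(L)])
```

Calling `block_hankel(h, L, J)` gives $H(0)$ and `block_hankel(h, L, J, shift=1)` gives $H(1)$.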
It can be seen that these Hankel matrices are closely related to the controllability matrix, $C_J$, and the observability matrix, $O_L$, given as
$$C_J = \begin{bmatrix} B & AB & A^2B & \cdots & A^{J-1}B \end{bmatrix} \qquad
O_L = \begin{bmatrix} C \\ CA \\ CA^2 \\ \vdots \\ CA^{L-1} \end{bmatrix}$$
where $C_J \in \mathbb{R}^{n \times mJ}$ and $O_L \in \mathbb{R}^{pL \times n}$. The Hankel matrices $H(0)$ and $H(1)$ are related to the controllability and observability matrices by the factorizations
$$H(0) = O_L C_J \qquad H(1) = O_L A C_J$$
In addition, the first $m$ columns of $H(0)$ and the first $p$ rows of $H(0)$ can be factored according to
$$H(0)\begin{bmatrix} I_m \\ 0 \\ \vdots \\ 0 \end{bmatrix} = O_L B \qquad
\begin{bmatrix} I_p & 0 & \cdots & 0 \end{bmatrix} H(0) = C C_J$$
It is clear that the Hankel matrices contain information about the system matrices $(A, B, C)$. We would like to first obtain the controllability and observability matrices from a suitable factorization of $H(0)$. It would then follow that the system matrices can be estimated from factorizations of the Hankel matrices according to
$$A = (O_L^* O_L)^{-1} O_L^* H(1) C_J^* (C_J C_J^*)^{-1}$$
$$B = (O_L^* O_L)^{-1} O_L^* H(0) \begin{bmatrix} I_m \\ 0 \\ \vdots \\ 0 \end{bmatrix} \qquad
C = \begin{bmatrix} I_p & 0 & \cdots & 0 \end{bmatrix} H(0) C_J^* (C_J C_J^*)^{-1}$$
We note that, separately, we can always obtain $D = h_0$.
4.5 Model Estimation from Singular Value Decomposition
It is now desired to obtain the controllability and observability matrices from a factorization of the Hankel matrix $H(0)$. By using a singular value decomposition (SVD), the matrix $H(0)$ is factored as
$$H(0) = O_L C_J = U\Sigma V^*$$
Several choices are considered regarding how the results of the SVD will be factored into the product of the controllability and observability matrices. One choice is to use an internally balanced factorization where
$$C_J = \Sigma^{1/2} V^* \qquad O_L = U\Sigma^{1/2}$$
This will be particularly helpful for later reduction of the system model. It then follows that the system matrix $A$ can be estimated from $H(1)$ as
$$A = \Sigma^{-1/2} U^* H(1) V \Sigma^{-1/2}$$
and the system matrices $B$ and $C$ can be estimated from the first $m$ columns of $H(0)$ and first $p$ rows of $H(0)$, respectively, as
$$B = \Sigma^{-1/2} U^* H(0) \begin{bmatrix} I_m \\ 0 \\ \vdots \\ 0 \end{bmatrix} \qquad
C = \begin{bmatrix} I_p & 0 & \cdots & 0 \end{bmatrix} H(0) V \Sigma^{-1/2}$$
It can be seen from the results of the SVD that, while $\Sigma$ is unique, $U$ and $V$ are not. This means that the estimated system matrices $(A, B, C)$ are not unique, because they depend on $U$ and $V$.
4.6 Full-Order Model
Having obtained (A, B, C, D) for the estimated system model, its characteristics
are now compared with those obtained from the measured data. By having
solved for these system matrices with Hankel matrices constructed with J = 300
and L = 300, the system model has 300 states. Figures 34 - 36 compare the
impulse response of the estimated system model with the measured impulse
response data and show the identified model closely approximating the actual
one.
Figures 37 - 39 show the errors in the impulse response of the estimated
model. They show that construction of the Hankel matrices from 3s of measured
data was indeed enough to adequately capture characteristics of the systems;
there is relatively little error even after 3s. While the estimated system models
the first 3 s of impulse response very accurately, this does mean that it also reproduces any noise in the measured data and so perhaps does not accurately approximate the “true” noise-free system.
Additionally, the estimated model can be compared with the measured data
in the frequency domain. This is possible because the measured data is from an
impulse response. Taking the z-transform of the state space equations converts
the system to a transfer function in the frequency domain, which is equivalent
to taking the discrete Fourier transform of the impulse response. Figures 40
- 42 compare the frequency response of the estimated system model with the
actual system, obtained by taking the fast Fourier transform of the measured
impulse response data. The model closely approximates the measured data in
the frequency domain, with the presence of noise particularly noticeable at lower
magnitudes.
[Figure: output 1 impulse response (V) vs. time (s), measured vs. estimated full-order model.]
Figure 34: Full-Order Model Impulse Response Output 1
[Figure: output 2 impulse response (V) vs. time (s), measured vs. estimated full-order model.]
Figure 35: Full-Order Model Impulse Response Output 2
[Figure: output 3 impulse response (V) vs. time (s), measured vs. estimated full-order model.]
Figure 36: Full-Order Model Impulse Response Output 3
[Figure: output 1 impulse response error (V, on the order of 10⁻⁴) vs. time (s).]
Figure 37: Full-Order Model Impulse Response Error Output 1
[Figure: output 2 impulse response error (V, on the order of 10⁻⁴) vs. time (s).]
Figure 38: Full-Order Model Impulse Response Error Output 2
[Figure: output 3 impulse response error (V, on the order of 10⁻⁴) vs. time (s).]
Figure 39: Full-Order Model Impulse Response Error Output 3
[Figure: channel 1 magnitude (V/V) and phase (deg) vs. frequency (rad/sec), measured vs. estimated full-order model.]
Figure 40: Full-Order Model Frequency Response Output 1
[Figure: channel 2 magnitude (V/V) and phase (deg) vs. frequency (rad/sec), measured vs. estimated full-order model.]
Figure 41: Full-Order Model Frequency Response Output 2
[Figure: channel 3 magnitude (V/V) and phase (deg) vs. frequency (rad/sec), measured vs. estimated full-order model.]
Figure 42: Full-Order Model Frequency Response Output 3
Figure 43 shows the 300 eigenvalues of the identified system model; most are very slow ones close to the unit circle. For the system to be asymptotically stable, it requires $|\lambda_i| < 1$, $i = 1, 2, \cdots, 300$, but there are actually 18 eigenvalues of magnitude larger than one. Because these eigenvalues are so slow, the instability was not seen in the previous 5 second impulse response plots. Expanding the impulse response data out to 100 seconds confirms the instability of the system model due to a handful of very slow eigenvalues (Figure 44). While it initially appeared that the obtained system realization modelled the actual system very accurately, it is actually inadequate due to its lack of stability.
[Figure: the 300 eigenvalues plotted in the complex plane against the unit circle.]
Figure 43: Eigenvalues for Full-Order System
[Figure: impulse responses of outputs 1-3 (V) over 100 seconds, showing slowly growing oscillations.]
Figure 44: Instability of Full-Order System Model shown in Impulse Response
4.7 Model Reduction
It is now desired to compute an asymptotically stable system realization of reduced order whose impulse response best fits the measured one. Figure 45 shows the singular values of the Hankel matrix (constructed with $J = 300$, $L = 300$) from which the 300-state system model was constructed. With perfect, noise-free data, the minimal realization can easily be obtained by keeping only the non-zero singular values from the SVD of $H(0)$. However, with experimentally obtained data riddled with noise, the Hankel matrix will usually have full rank, as is the case here. As a result, the system realization may be of much larger dimension than the dynamic system actually warrants, motivating the desire to reduce the system. While the system model is composed of 300 states, it is clear that several singular values are much larger than the others. This leads to the belief that the system can be represented by a realization of much smaller order with relatively little loss in system input-output properties.
[Figure: the 300 singular values of H(0), log scale.]
Figure 45: Singular Values of H(0)
The discrete-time controllability and observability grammians for the system are defined as
$$G_{c,J} = C_J C_J^* = \sum_{k=0}^{J-1} A^k B B^* A^{*k} \qquad
G_{o,L} = O_L^* O_L = \sum_{k=0}^{L-1} A^{*k} C^* C A^k$$
Having chosen an internally balanced realization where
$$C_J = \Sigma^{1/2} V^* \qquad O_L = U\Sigma^{1/2}$$
we see that our controllability and observability grammians satisfy
$$G_{c,J} = G_{o,L} = \Sigma$$
Thus, the system is in a coordinate system in which it is as controllable as it is observable. This makes it a convenient one from which to remove low-gain system modes.
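This balanced property is easy to verify numerically; the following sketch (the small hand-picked Hankel matrix is assumed purely for illustration) confirms that both finite-horizon grammians equal $\Sigma$:

```python
import numpy as np

# With C_J = S^{1/2} V* and O_L = U S^{1/2} from a (truncated) SVD
# H(0) = U S V*, the grammians C_J C_J^* and O_L^* O_L both equal S,
# since U and V have orthonormal columns.
H0 = np.array([[1.0, 2.0, 3.0],
               [2.0, 3.0, 4.0],
               [3.0, 4.0, 5.0]])        # any Hankel-structured example
U, sv, Vt = np.linalg.svd(H0)
keep = sv > 1e-12 * sv[0]               # drop numerically zero directions
U, sv, Vt = U[:, keep], sv[keep], Vt[keep, :]
S = np.diag(sv)
CJ = np.diag(np.sqrt(sv)) @ Vt          # controllability factor
OL = U @ np.diag(np.sqrt(sv))           # observability factor
Gc = CJ @ CJ.T
Go = OL.T @ OL
```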
In order to reduce the order of the system realization, we wish to approximate the original Hankel matrix $H(0)$ of rank $n$ by another matrix $\hat{H}(0)$ of rank $r < n$, such that we minimize
$$\left\| H(0) - \hat{H}(0) \right\|$$
This desired matrix $\hat{H}(0)$ is obtained by taking the SVD of $H(0)$, setting the smallest non-zero singular values to zero, and multiplying the matrices back together. That is,
$$H(0) = U\Sigma V^*, \qquad \Sigma = \begin{bmatrix}
\sigma_1 & 0 & \cdots & 0 \\
0 & \sigma_2 & & \vdots \\
\vdots & & \ddots & 0 \\
0 & \cdots & 0 & \sigma_n
\end{bmatrix}$$
while
$$\hat{H}(0) = U\hat{\Sigma}V^*, \qquad \hat{\Sigma} = \begin{bmatrix}
\sigma_1 & & & & & \\
& \ddots & & & & \\
& & \sigma_r & & & \\
& & & 0 & & \\
& & & & \ddots & \\
& & & & & 0
\end{bmatrix}$$
where $\hat{H}(0)$ is the closest matrix of rank $r$ to $H(0)$.
Having obtained $\hat{H}(0) = U\hat{\Sigma}V^*$ of reduced rank $r$ in this manner, we seek to determine the corresponding system matrix $\hat{A}$, such that we minimize
$$\left\| U\Sigma^{1/2} A \Sigma^{1/2} V^* - U\hat{\Sigma}^{1/2}\hat{A}\hat{\Sigma}^{1/2}V^* \right\| = \left\| U^* H(1) V - \hat{\Sigma}^{1/2}\hat{A}\hat{\Sigma}^{1/2} \right\|$$
By noting that $\hat{\Sigma}^{1/2}\hat{A}\hat{\Sigma}^{1/2}$ is zero in the last $n-r$ rows and columns (based on our construction of $\hat{\Sigma}$), we can see that this minimum will occur if $\hat{\Sigma}^{1/2}\hat{A}\hat{\Sigma}^{1/2} = U^* H(1) V$ in the upper $r \times r$ submatrix. We are left with a freedom of choice in the selection of the remaining elements of $\hat{A}$ in the last $n-r$ rows and columns; this selection will have no effect on $\hat{\Sigma}^{1/2}\hat{A}\hat{\Sigma}^{1/2}$. Thus, in the interest of obtaining a system matrix of the smallest dimension, the last $n-r$ rows and columns are set equal to zero.
This allows us to obtain the following system matrices
$$\hat{A} = \left(\hat{\Sigma}^{1/2}\right)^{\dagger} U^* H(1) V \left(\hat{\Sigma}^{1/2}\right)^{\dagger} \qquad
\hat{B} = \hat{\Sigma}^{-1/2} U^* H(0) \begin{bmatrix} I_m \\ 0 \\ \vdots \\ 0 \end{bmatrix} \qquad
\hat{C} = \begin{bmatrix} I_p & 0 & \cdots & 0 \end{bmatrix} H(0) V \hat{\Sigma}^{-1/2}$$
but with $\hat{A}$ constructed with zeros in the last $n-r$ rows and columns, $\hat{B}$ constructed with zeros in the last $n-r$ rows, and $\hat{C}$ constructed with zeros in the last $n-r$ columns. By taking the upper $r \times r$ submatrix of $\hat{A}$, the first $r$ rows of $\hat{B}$, and the first $r$ columns of $\hat{C}$, we obtain a smaller system of rank $r$ from $H(0)$ of rank $n$ that approximates the original full-order model.
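A hedged sketch of the rank-$r$ truncation (the function name `reduced_realization` is an assumption); keeping only the leading $r$ singular directions is equivalent to the zeroing-and-submatrix procedure described above:

```python
import numpy as np

def reduced_realization(h, L, J, p, m, r):
    """Rank-r truncation: keep the r largest singular values of H(0)
    and form (A_r, B_r, C_r) directly from the truncated factors."""
    H0 = np.block([[h[i + j + 1] for j in range(J)] for i in range(L)])
    H1 = np.block([[h[i + j + 2] for j in range(J)] for i in range(L)])
    U, sv, Vt = np.linalg.svd(H0, full_matrices=False)
    U, sv, Vt = U[:, :r], sv[:r], Vt[:r, :]   # leading r singular directions
    S_inv = np.diag(1.0 / np.sqrt(sv))
    Ar = S_inv @ U.T @ H1 @ Vt.T @ S_inv
    Br = (S_inv @ U.T @ H0)[:, :m]
    Cr = (H0 @ Vt.T @ S_inv)[:p, :]
    return Ar, Br, Cr
```

Choosing $r$ equal to the number of dominant singular values reproduces the impulse response closely, while a smaller $r$ discards significant directions and the error grows, mirroring the trade-off examined below.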
We now consider just how small we can make the rank of our Hankel matrix
and the corresponding state size of our system model, while still best modeling
the full-order system. Figure 46 shows the fifteen largest singular values of
H(0). The Hankel matrix has seven singular values in particular that are much
larger than the remaining singular values. This motivates the belief that the
system can perhaps be reduced to a seven state model with relatively little loss
in system input-output properties.
[Figure: the fifteen largest singular values of H(0), log scale.]
Figure 46: Fifteen Largest Singular Values
Figures 47 - 49 show the errors in impulse response that result from reducing the full-order model to various sizes. We see that, indeed, reducing the model to anything less than a seven-state model introduces relatively large errors; this corresponds to removing one of the relatively larger singular values. By keeping the seven largest singular values in our seven-state model, we have relatively little error. We also see that there is much less benefit in increasing our system model to an eight-state one; this corresponds to keeping an extra singular value that is significantly smaller. The difference in impulse responses between the seven-state model and the eight-state one is relatively small, as the seven-state model already has little error.
[Figure: output 1 impulse response errors (V) vs. time (s) for 3- through 8-state reduced models.]
Figure 47: Output 1 Impulse Response Errors for Reduced Systems
[Figure: output 2 impulse response errors (V) vs. time (s) for 3- through 8-state reduced models.]
Figure 48: Output 2 Impulse Response Errors for Reduced Systems
[Figure: output 3 impulse response errors (V) vs. time (s) for 3- through 8-state reduced models.]
Figure 49: Output 3 Impulse Response Errors for Reduced Systems
4.8 Final Reduced Model
The final result is an asymptotically stable reduced seven-state, one-input/three-
output system that models the actual system from which the measured data was
collected. Figures 50 - 52 compare the impulse response of the reduced-order
model with the measured data, while figures 53 - 55 compare the frequency
response. The reduced-order system closely approximates the actual system,
maintaining the high-pass filter, low-pass filter, and resonant characteristics.
Asymptotic stability is confirmed by noting that |λi| < 1, i = 1, 2, · · · , r in
Figure 56. The complex conjugate pair contributing to the resonant frequency is
seen at 0.7203±j0.6404, while the remaining eigenvalues closer to the imaginary
axis contribute primarily to the low-pass filter and high-pass filter features.
Figures 57 - 59 show the error in impulse response for the reduced-order model
compared to the measured data. It can be argued that this is noise that was
removed from the measured data in construction of a better model of the “true”
system.
[Figure: output 1 impulse response (V) vs. time (s), measured vs. 7-state reduced model.]
Figure 50: Reduced-Order Model Impulse Response Output 1
[Figure: output 2 impulse response (V) vs. time (s), measured vs. 7-state reduced model.]
Figure 51: Reduced-Order Model Impulse Response Output 2
[Figure: output 3 impulse response (V) vs. time (s), measured vs. 7-state reduced model.]
Figure 52: Reduced-Order Model Impulse Response Output 3
[Figure: channel 1 magnitude (V/V) and phase (deg) vs. frequency (rad/sec), measured vs. 7-state reduced model.]
Figure 53: Reduced-Order Model Frequency Response Output 1
[Figure: channel 2 magnitude (V/V) and phase (deg) vs. frequency (rad/sec), measured vs. 7-state reduced model.]
Figure 54: Reduced-Order Model Frequency Response Output 2
[Figure: channel 3 magnitude (V/V) and phase (deg) vs. frequency (rad/sec), measured vs. 7-state reduced model.]
Figure 55: Reduced-Order Model Frequency Response Output 3
[Figure: the seven eigenvalues plotted in the complex plane against the unit circle.]
Figure 56: Eigenvalues for Reduced-Order Model
[Figure: output 1 impulse response error (V, on the order of 10⁻⁴) vs. time (s).]
Figure 57: Reduced-Order Model Impulse Response Error Output 1
[Figure: output 2 impulse response error (V, on the order of 10⁻⁴) vs. time (s).]
Figure 58: Reduced-Order Model Impulse Response Error Output 2
[Figure: output 3 impulse response error (V, on the order of 10⁻⁴) vs. time (s).]
Figure 59: Reduced-Order Model Impulse Response Error Output 3
5 Conclusion
Through the various methods presented, a four-state continuous-time state-
space model, a fifth-order transfer function, and a seven-state discrete-time
state-space model were identified from noisy input-output data to model more
complex MIMO systems with relatively good accuracy. Consideration was taken
to arrive at these specific system sizes; it was shown that using smaller models
adversely affected the system input-output properties, while using larger models
provided little added benefit. Depending on their intended use, these reduced
models could provide adequate representations of the actual systems, without
the cost associated with larger models. It was also shown that it is possible
that the reduced models remove noise from larger models and actually provide
a more accurate representation of the true system. The same principles can be
applied to much larger and more complex systems, where there is likely much more benefit in utilizing a model of reduced complexity. It was also demonstrated that asymptotic stability is not necessarily guaranteed for the methods presented, and special consideration must be taken to ensure that it is achieved. Lastly, it
should be noted that the methods presented are limited to linear time-invariant
system models.
Bibliography
[1] Panos J. Antsaklis and Anthony N. Michel. A Linear Systems Primer. Birkhäuser, 2007.
[2] Gene F. Franklin, J. David Powell, and Michael L. Workman. Digital Control
of Dynamic Systems. Ellis-Kagle Press, 2007.
[3] E. C. Levy. Complex-curve fitting. IRE Transactions on Automatic Control,
AC-4(1):37–43, May 1959.
[4] R. Pintelon, P. Guillaume, Y. Rolain, J. Schoukens, and H. Van hamme.
Parametric identification of transfer functions in the frequency domain -
a survey. IEEE Transactions on Automatic Control, 39(11):2245–2260,
November 1994.
[5] C. K. Sanathanan and J. Koerner. Transfer function synthesis as a ratio
of two complex polynomials. IEEE Transactions on Automatic Control,
8(1):56–58, January 1963.
[6] Jason L. Speyer and Walter H. Chung. Stochastic Processes, Estimation,
and Control. SIAM, 2008.
[7] H. Zeiger and A. McEwen. Approximate linear realizations of given dimen-
sion via Ho's algorithm. IEEE Transactions on Automatic Control, 19(2):153,
April 1974.
E299

  • 1. System Identification, Estimation, and Modeling Peter Schneider University of California, Los Angeles
  • 2. Abstract System identification methods are used to model multi-input/multi- output systems from measured input-output data in the frequency do- main, with a nonlinear least-squares algorithm, and in the time domain, with Hankel matrices constructed from discrete-time impulse response data. Model reduction is performed, by utilizing a balanced realization and the singular value decomposition, to identify and discard low-gain states from already minimal realizations. The trade-offs for reducing sys- tem models to different sizes is examined and it is demonstrated that by appropriately selecting a reduced size, the lower-order systems model the much more complex ones with relatively little loss in accuracy. i
  • 3. Contents 1 Introduction 1 2 Model Reduction with Balanced Realization 1 2.1 Model Estimation from Frequency Response Features . . . . . . . 1 2.2 MIMO State Space Model Representation . . . . . . . . . . . . . 5 2.3 Minimal Realization . . . . . . . . . . . . . . . . . . . . . . . . . 5 2.4 Model Reduction with Balanced Realization . . . . . . . . . . . . 7 2.5 Final Reduced Model . . . . . . . . . . . . . . . . . . . . . . . . . 16 3 Model Estimation from Nonlinear Least-Squares Solution 19 3.1 Least-Squares Problem . . . . . . . . . . . . . . . . . . . . . . . . 19 3.1.1 Linear Least-Squares . . . . . . . . . . . . . . . . . . . . . 19 3.1.2 Extension to Nonlinear Least-Squares . . . . . . . . . . . 19 3.2 Application to Transfer Function Identification . . . . . . . . . . 20 3.2.1 Model Representation . . . . . . . . . . . . . . . . . . . . 20 3.2.2 Unweighted Nonlinear Least-Squares Problem . . . . . . . 20 3.2.3 Levy’s Linearized Estimator . . . . . . . . . . . . . . . . . 21 3.2.4 Sanathanan-Koerner Iteration . . . . . . . . . . . . . . . . 21 3.2.5 Extension to MIMO System Model Estimation . . . . . . 23 3.2.6 Polynomial Order and Weighting Function Selection . . . 25 3.3 Evolution of Iterations Towards Convergence . . . . . . . . . . . 26 3.4 Final Converged Least-Squares Solution Models . . . . . . . . . . 29 4 Subspace Realization from Impulse Response 33 4.1 Discrete-Time State Space Model . . . . . . . . . . . . . . . . . . 33 4.2 Markov Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . 33 4.3 Measured Impulse Response . . . . . . . . . . . . . . . . . . . . . 34 4.4 Hankel Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36 4.5 Model Estimation from Singular Value Decomposition . . . . . . 37 4.6 Full-Order Model . . . . . . . . . . . . . . . . . . . . . . . . . . . 38 4.7 Model Reduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 45 4.8 Final Reduced Model . . . . . . . . . . . . . . . . . . . . . . . . . 
51 5 Conclusion 57 Bibliography 58 ii
  • 4. List of Figures 1 Full Order Model 11 Channel . . . . . . . . . . . . . . . . . . . . 3 2 Full Order Model 12 Channel . . . . . . . . . . . . . . . . . . . . 3 3 Full Order Model 21 Channel . . . . . . . . . . . . . . . . . . . . 4 4 Full Order Model 22 Channel . . . . . . . . . . . . . . . . . . . . 4 5 Hankel Singular Values . . . . . . . . . . . . . . . . . . . . . . . . 9 6 Reduced Models of Varying Size 11 Channel . . . . . . . . . . . . 10 7 Reduced Models of Varying Size 12 Channel . . . . . . . . . . . . 11 8 Reduced Models of Varying Size 21 Channel . . . . . . . . . . . . 11 9 Reduced Models of Varying Size 22 Channel . . . . . . . . . . . . 12 10 Error Full Order Model vs Reduced Model 11 Channel . . . . . . 13 11 Error Full Order Model vs Reduced Model 12 Channel . . . . . . 14 12 Error Full Order Model vs Reduced Model 21 Channel . . . . . . 14 13 Error Full Order Model vs Reduced Model 22 Channel . . . . . . 15 14 Error Full Order Model vs Reduced Model MIMO System . . . . 15 15 Reduced Model 11 Channel . . . . . . . . . . . . . . . . . . . . . 16 16 Reduced Model 12 Channel . . . . . . . . . . . . . . . . . . . . . 17 17 Reduced Model 21 Channel . . . . . . . . . . . . . . . . . . . . . 17 18 Reduced Model 22 Channel . . . . . . . . . . . . . . . . . . . . . 18 19 Convergence of Least Squares Solution Error . . . . . . . . . . . 26 20 Evolution of Least Squares Iterations 11 Channel . . . . . . . . . 27 21 Evolution of Least Squares Iterations 12 Channel . . . . . . . . . 27 22 Evolution of Least Squares Iterations 21 Channel . . . . . . . . . 28 23 Evolution of Least Squares Iterations 22 Channel . . . . . . . . . 28 24 Least Squares Solutions 11 Channel . . . . . . . . . . . . . . . . 29 25 Least Squares Solutions 12 Channel . . . . . . . . . . . . . . . . 30 26 Least Squares Solutions 21 Channel . . . . . . . . . . . . . . . . 30 27 Least Squares Solutions 22 Channel . . . . . . . . . . . . . . . . 31 28 Fifth-Order Model Pole Zero Map . . 
. . . . . . . . . . . . . . . 31 29 Fourth-Order Model Pole Zero Map . . . . . . . . . . . . . . . . 32 30 System Block Diagram . . . . . . . . . . . . . . . . . . . . . . . . 34 31 Measured Impulse Response Output 1 . . . . . . . . . . . . . . . 34 32 Measured Impulse Response Output 2 . . . . . . . . . . . . . . . 35 33 Measured Impulse Response Output 3 . . . . . . . . . . . . . . . 35 34 Full-Order Model Impulse Response Output 1 . . . . . . . . . . . 39 35 Full-Order Model Impulse Response Output 2 . . . . . . . . . . . 39 36 Full-Order Model Impulse Response Output 3 . . . . . . . . . . . 40 37 Full-Order Model Impulse Response Error Output 1 . . . . . . . 40 38 Full-Order Model Impulse Response Error Output 2 . . . . . . . 41 39 Full-Order Model Impulse Response Error Output 3 . . . . . . . 41 40 Full-Order Model Frequency Response Output 1 . . . . . . . . . 42 41 Full-Order Model Frequency Response Output 2 . . . . . . . . . 42 42 Full-Order Model Frequency Response Output 3 . . . . . . . . . 43 43 Eigenvalues for Full-Order System . . . . . . . . . . . . . . . . . 44 44 Instability of Full-Order system Model shown in Impulse Response 45 iii
  • 5. 45 Singular Values of H(0) . . . . . . . . . . . . . . . . . . . . . . . 46 46 Fifteen Largest Singular Values . . . . . . . . . . . . . . . . . . . 48 47 Output 1 Impulse Response Errors for Reduced Systems . . . . . 49 48 Output 2 Impulse Response Errors for Reduced Systems . . . . . 49 49 Output 3 Impulse Response Errors for Reduced Systems . . . . . 50 50 Reduced-Order Model Impulse Response Output 1 . . . . . . . . 51 51 Reduced-Order Model Impulse Response Output 2 . . . . . . . . 52 52 Reduced-Order Model Impulse Response Output 3 . . . . . . . . 52 53 Reduced-Order Model Frequency Response Output 1 . . . . . . . 53 54 Reduced-Order Model Frequency Response Output 2 . . . . . . . 53 55 Reduced-Order Model Frequency Response Output 3 . . . . . . . 54 56 Eigenvalues for Reduced-Order Model . . . . . . . . . . . . . . . 54 57 Reduced-Order Model Impulse Response Error Output 1 . . . . . 55 58 Reduced-Order Model Impulse Response Error Output 2 . . . . . 55 59 Reduced-Order Model Impulse Response Error Output 3 . . . . . 56 iv
  • 6. 1 Introduction System identification refers to the process of extracting information about a system from measured input-output data and using that information to build mathematical models capable of explaining the data. Models typically describe the behavior of a system either in the frequency domain, with transfer functions, or in the time domain, with state-space models. Each system representation is derived from differential equations that approximate complex naturally occur- ring phenomena. System identification also includes optimization of model design such as model reduction. This involves projecting a higher-order model onto a lower- order model having properties similar to the original one. While naturally oc- curring phenomena are arguably of infinite order, if their characteristics can be adequately represented in a much simpler model, they can be analyzed with less computational time, less storage, and less cost than the original problem. This can be advantageous for such purposes as direct simulation of dynamic systems, where as more detail is included, the dimensions of the simulations can increase to prohibitive levels. System identification methods have applications in a wide range of indus- tries, including nearly all areas of engineering. Methods are commonly used to model mechanical, aerospace, electrical, chemical, and biological systems as well as economic and social systems. Their applications include use in simulation, control design, analysis, and prediction of complex systems. This report covers several different methods for identifying and modeling a system from measured data. In section two, we consider model reduction from data collected in the frequency domain. In section three, we consider identifi- cation a system model in the frequency domain with a least-squares algorithm. In section four, we consider subspace realization from impulse response data collected in discrete-time and subsequent model reduction. 
2 Model Reduction with Balanced Realization 2.1 Model Estimation from Frequency Response Features It is first desired to fit asymptotically stable transfer functions to empirical frequency response data collected from a multi-input/multi-output (MIMO) system. The system in this case is two-input/two-output, comprised of four individual single-input/single-output (SISO) channels, each of which exhibits simple high-pass filter features, low-pass filter features, and resonant frequency features. By observing these frequency response features, a transfer function for each SISO channel is approximated by combining the transfer functions for simple first order low-pass filters, first order high-pass filters, and oscillators, respec- tively. These transfer functions are given as Hlp(s) = ωl s + ωl 1
  • 7. Hhp(s) = s s + ωh Hosc(s) = ω2 n 1 + 2ζωn + ω2 n where ωl and ωh are cutoff frequencies for the low-pass filter and high-pass filter, respectively; ωn is the natural frequency and ζ is the damping ratio for the oscillator transfer function. By appropriately combining these transfer functions and adjusting their pa- rameters, an asymptotically stable transfer function for each SISO channel is estimated, with frequency response closely fitting the empirical data (Figures 1 - 4). For this case, the transfer functions for each channel was constructed with the following combination H11(s) = Hlp(s)Hlp(s)Hosc(s) H12(s) = Hlp(s)Hosc(s) H21(s) = Hlp(s)Hhp(s)Hosc(s) H22(s) = Hhp(s)Hosc(s) Common features in the empirical data, such as resonant frequencies and cutoff frequencies, were kept the same across all channels when possible. This en- sures that the different transfer functions share some identical features, making it easier to reduce the final system model without affecting the input-output properties. The estimated transfer function for the MIMO system is made up of the 4 SISO transfer functions stacked together. H(s) = H11(s) H12(s) H21(s) H22(s) Y (s) = H(s)U(s) 2
  • 8. 10 0 10 1 10 2 10 −4 10 −2 10 0 10 2 rad/sec mag(V/V) 11 Channel Magnitude Measured Full Order Model 10 0 10 1 10 2 −200 −100 0 100 200 rad/sec phase(deg) 11 Channel Phase Figure 1: Full Order Model 11 Channel 10 0 10 1 10 2 10 −4 10 −2 10 0 10 2 rad/sec mag(V/V) 12 Channel Magnitude Measured Full Order Model 10 0 10 1 10 2 −200 −100 0 100 200 rad/sec phase(deg) 12 Channel Phase Figure 2: Full Order Model 12 Channel 3
  • 9. 10 0 10 1 10 2 10 −4 10 −2 10 0 10 2 rad/sec mag(V/V) 21 Channel Magnitude Measured Full Order Model 10 0 10 1 10 2 −200 −100 0 100 200 rad/sec phase(deg) 11 Channel Phase Figure 3: Full Order Model 21 Channel 10 0 10 1 10 2 10 −4 10 −2 10 0 10 2 rad/sec mag(V/V) 22 Channel Magnitude Measured Full Order Model 10 0 10 1 10 2 −200 −100 0 100 200 rad/sec phase(deg) 22 Channel Phase Figure 4: Full Order Model 22 Channel 4
  • 10. 2.2 MIMO State Space Model Representation While it was convenient to fit the frequency response of the system models to then empirical data in the frequency domain, the transfer functions can now be converted to state space form. The transfer function for each of the four SISO channels is converted to the form ˙x = Ax + Bu y = Cx + Du where x is the state vector, y is the output vector, u is the input vector, and (A, B, C, D) are the system matrices. The relationship between the transfer functions in the frequency domain and the state-space models in the time domain can be seen by taking the Laplace transform of the state space equations. This relationship is H(s) = Y (s) U(s) = C(sI − A)−1 B + D The state space representation for the four individual SISO channels can then be stacked together into one MIMO state space representation for the entire two-input/two-output system with ˙x =       A11 0 · · · 0 0 A12 ... ... ... ... A21 0 0 · · · 0 A22       x +     B11 0 0 B12 B21 0 0 B22     u y = C11 C12 0 0 0 0 C21 C22 x + D11 D12 D21 D22 u where in general, A ∈ Cn×n , B ∈ Cn×m , C ∈ Cp×n , and D ∈ Cp×m for an m-input/p-output system. Here we have m = 2, p = 2. We are left with a single system model for the entire MIMO system with 14 states. 2.3 Minimal Realization It is now desired to reduce the system to a minimal realization by removing any uncontrollable and unobservable modes from the system. These modes can be removed without affecting the input-output properties of the system. The controllability and observability properties of the system can be de- termined from the ranks of the controllability and observability matrices given as C = B AB A2 B · · · An−1 B 5
\[
\mathcal{O} = \begin{bmatrix} C \\ CA \\ CA^{2} \\ \vdots \\ CA^{n-1} \end{bmatrix}
\]

A system is controllable if and only if the controllability matrix is full rank, and observable if and only if the observability matrix is full rank. For this system, we find that rank(C) < n and rank(O) < n, allowing us to conclude that the system is both uncontrollable and unobservable.

The state space representation of the system is not unique; given the transfer function, there are an infinite number of possible representations related by similarity transformations. In the interest of reducing the system to a minimal realization, we wish to rearrange the system modes via a similarity transformation such that the modes that are controllable and observable, controllable and unobservable, uncontrollable and observable, and uncontrollable and unobservable are partitioned. This is accomplished through a Kalman decomposition, where the system coordinates are transformed x → z, (A, B, C, D) → (Â, B̂, Ĉ, D̂), with a non-singular matrix T̂:

\[
z = \hat{T}^{-1}x, \qquad \hat{A} = \hat{T}^{-1}A\hat{T}, \qquad \hat{B} = \hat{T}^{-1}B, \qquad \hat{C} = C\hat{T}, \qquad \hat{D} = D
\]

where T̂ is constructed as

\[
\hat{T} = \begin{bmatrix} T_1 & T_2 & T_3 & T_4 \end{bmatrix}
\]

where the columns of [T1 T2] form a basis of R(C), the columns of T2 form a basis of R(C) ∩ N(O), the columns of [T2 T4] form a basis of N(O), and the columns of T3 are chosen so that T̂ is non-singular. Under this similarity transform, the system takes on the form

\[
\begin{bmatrix} \dot{x}_{co} \\ \dot{x}_{c\bar{o}} \\ \dot{x}_{\bar{c}o} \\ \dot{x}_{\bar{c}\bar{o}} \end{bmatrix} =
\begin{bmatrix}
A_{co} & 0 & A_{13} & 0 \\
A_{21} & A_{c\bar{o}} & A_{23} & A_{24} \\
0 & 0 & A_{\bar{c}o} & 0 \\
0 & 0 & A_{43} & A_{\bar{c}\bar{o}}
\end{bmatrix}
\begin{bmatrix} x_{co} \\ x_{c\bar{o}} \\ x_{\bar{c}o} \\ x_{\bar{c}\bar{o}} \end{bmatrix} +
\begin{bmatrix} B_{co} \\ B_{c\bar{o}} \\ 0 \\ 0 \end{bmatrix} u
\]

\[
y = \begin{bmatrix} C_{co} & 0 & C_{\bar{c}o} & 0 \end{bmatrix}
\begin{bmatrix} x_{co} \\ x_{c\bar{o}} \\ x_{\bar{c}o} \\ x_{\bar{c}\bar{o}} \end{bmatrix} + Du
\]
where the state is partitioned such that x_{co} is controllable and observable, x_{c\bar{o}} is controllable and unobservable, x_{\bar{c}o} is uncontrollable and observable, and x_{\bar{c}\bar{o}} is uncontrollable and unobservable. By utilizing the Kalman decomposition, the system model is reduced to a minimal realization by truncating the system to the controllable and observable partition:

\[
\dot{z} = A_{co}z + B_{co}u, \qquad y = C_{co}z + Du
\]

This is done without changing the input-output properties of the system.

2.4 Model Reduction with Balanced Realization

While uncontrollable and unobservable states have been removed, the dimension of the system can still potentially be further reduced with relatively little loss in input-output behavior. This can be done if low-gain modes can be identified and removed from the system.

To accomplish this, we look at the infinite-horizon controllability and observability grammians of the system, defined as

\[
G_c(\infty) = \int_0^{\infty} e^{A\tau}BB^{*}e^{A^{*}\tau}\,d\tau, \qquad
G_o(\infty) = \int_0^{\infty} e^{A^{*}\tau}C^{*}Ce^{A\tau}\,d\tau
\]

Because Re(λ_i(A)) < 0 for i = 1, 2, ..., n, the MIMO system is asymptotically stable and the infinite-horizon controllability and observability grammians can be obtained from the algebraic Lyapunov equations

\[
AG_c(\infty) + G_c(\infty)A^{*} = -BB^{*}, \qquad
A^{*}G_o(\infty) + G_o(\infty)A = -C^{*}C
\]

The system is controllable if and only if the controllability grammian is positive definite, and observable if and only if the observability grammian is positive definite. With the uncontrollable and unobservable states previously removed, we have G_o(∞) > 0 and G_c(∞) > 0, but the grammians may still be ill-conditioned. If so, the state directions associated with the smaller eigenvalues contribute less energy to the input-output behavior and are good candidates for removal. The challenge becomes finding how to transform the system such that modes that are both lightly observable and lightly controllable can be identified for removal.
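The Lyapunov equations above can be solved numerically. The following is a minimal sketch, assuming SciPy is available; the system matrices here are small illustrative values, not the paper's identified 7-state model.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical asymptotically stable system (illustrative values only)
A = np.array([[-1.0, 0.0],
              [0.0, -2.0]])
B = np.array([[1.0],
              [1.0]])
C = np.array([[1.0, 0.5]])

# Infinite-horizon grammians from the algebraic Lyapunov equations:
#   A Gc + Gc A* = -B B*   and   A* Go + Go A = -C* C
Gc = solve_continuous_lyapunov(A, -B @ B.T)
Go = solve_continuous_lyapunov(A.T, -C.T @ C)
```

For a minimal realization both grammians come out positive definite; ill-conditioning of Gc or Go (a wide eigenvalue spread) flags the low-gain directions discussed above.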
Referring to the infinite-horizon grammians as G_c and G_o going forward, the Hankel singular values are defined as the square roots of the eigenvalues of the product of the grammians:

\[
\sigma_{H,k} \equiv \sqrt{\lambda_k(G_o G_c)}, \qquad k = 1, 2, \ldots, n
\]

For the purpose of identifying and removing modes that are both lightly observable and lightly controllable, it is desired to find a similarity transform T
such that a "balanced" realization is obtained, where the grammians are equal and diagonal:

\[
G_c = G_o = \Sigma, \qquad
\Sigma \equiv \begin{bmatrix}
\sigma_{H,1} & 0 & \cdots & 0 \\
0 & \sigma_{H,2} & & \vdots \\
\vdots & & \ddots & 0 \\
0 & \cdots & 0 & \sigma_{H,n}
\end{bmatrix}
\]

where the Hankel singular values have been ordered as

\[
\sigma_{H,1} \ge \sigma_{H,2} \ge \cdots \ge \sigma_{H,n} \ge 0
\]

If the system can be transformed so that this is the case, the system modes will be as controllable as they are observable, and the system can be reduced while altering the input-output properties in a quantifiable manner.

Under a change of coordinates x = Tz, the grammians undergo the transformation

\[
G_o \to T^{*}G_oT, \qquad G_c \to T^{-1}G_cT^{-*}
\]

While a congruence transform preserves the sign definiteness of the grammians, in general it does not preserve their eigenvalues. However, the eigenvalues of the product of the grammians, λ(G_oG_c), as well as λ(G_o^2G_c^2), are invariant. Assuming the system is minimal,

\[
\lambda(G_oG_c) = \lambda(G_c^{1/2}G_oG_c^{1/2})
\]

Letting U ∈ C^{n×n} be a unitary matrix that diagonalizes G_c^{1/2}G_oG_c^{1/2},

\[
U^{*}G_c^{1/2}G_oG_c^{1/2}U = \Sigma^{2}
\]

the coordinate transformation

\[
T = G_c^{1/2}U\Sigma^{-1/2}
\]

achieves the goal of balanced coordinates:

\[
G_o \to T^{*}G_oT = \Sigma, \qquad G_c \to T^{-1}G_cT^{-*} = \Sigma
\]

Figure 5 shows the Hankel singular values for the system. It shows that in a balanced realization, some modes of the system are significantly less controllable and observable than others. In particular, there is a large drop between σ_{H,4} and σ_{H,5}. This suggests that the system can be further reduced,
[Figure 5: Hankel Singular Values]

perhaps to a four-state system associated with the four larger Hankel singular values, with relatively little loss in the input-output properties of the system.

Based on this analysis, it is desired to remove from the system matrices, expressed in a balanced realization, the blocks corresponding to the smaller Hankel singular values. The system in balanced coordinates can be partitioned as

\[
\begin{bmatrix} \dot{z}_1 \\ \dot{z}_2 \end{bmatrix} =
\begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}
\begin{bmatrix} z_1 \\ z_2 \end{bmatrix} +
\begin{bmatrix} B_1 \\ B_2 \end{bmatrix} u, \qquad
y = \begin{bmatrix} C_1 & C_2 \end{bmatrix} \begin{bmatrix} z_1 \\ z_2 \end{bmatrix}
\]

where A_{11} ∈ C^{r×r}, A_{12} ∈ C^{r×(n−r)}, etc. If σ_{H,r} > σ_{H,r+1}, then A_{11} and A_{22} are asymptotically stable and the system can be reduced to

\[
\dot{z} = A_{11}z + B_1u, \qquad y = C_1z
\]

where the frequency response of this reduced system approximates that of the original one, with a bound on the frequency response error between the two systems:

\[
|H(j\omega) - H_r(j\omega)| \le 2(\sigma_{H,r+1} + \sigma_{H,r+2} + \cdots + \sigma_{H,n}) \quad \forall \omega \in \mathbb{R}
\]

In this case, it appears that selecting r = 4 and discarding the states associated with σ_{H,5}, σ_{H,6}, and σ_{H,7} will result in a relatively small bound on the frequency response error.
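The whole balance-and-truncate procedure can be sketched end to end. This is a minimal illustration, assuming SciPy is available; the 3-state system and the function name `balance` are hypothetical, not the paper's model, and the final check is the error bound stated above.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, sqrtm

def balance(A, B, C):
    """Transform (A, B, C) to balanced coordinates where Gc = Go = diag(sigma_H)."""
    Gc = solve_continuous_lyapunov(A, -B @ B.T)
    Go = solve_continuous_lyapunov(A.T, -C.T @ C)
    Gc_half = sqrtm(Gc).real
    lam, U = np.linalg.eigh(Gc_half @ Go @ Gc_half)   # eigenvalues = sigma_H^2
    idx = np.argsort(lam)[::-1]                        # descending order
    sigma = np.sqrt(lam[idx])                          # Hankel singular values
    T = Gc_half @ U[:, idx] @ np.diag(sigma ** -0.5)   # T = Gc^{1/2} U Sigma^{-1/2}
    Tinv = np.linalg.inv(T)
    return Tinv @ A @ T, Tinv @ B, C @ T, sigma

def freq_resp(A, B, C, w):
    """Frequency response C (jwI - A)^{-1} B over a frequency grid."""
    n = A.shape[0]
    return np.array([C @ np.linalg.solve(1j * wk * np.eye(n) - A, B) for wk in w])

# Hypothetical stable 3-state SISO system (illustrative values only)
A = np.array([[-1.0, 0.5, 0.0], [0.0, -3.0, 1.0], [0.0, 0.0, -10.0]])
B = np.array([[1.0], [1.0], [0.2]])
C = np.array([[1.0, 0.5, 0.1]])

Ab, Bb, Cb, sigma = balance(A, B, C)
r = 2                                        # keep the r dominant states
Ar, Br, Cr = Ab[:r, :r], Bb[:r], Cb[:, :r]

w = np.logspace(-2, 3, 200)
err = np.abs(freq_resp(A, B, C, w) - freq_resp(Ar, Br, Cr, w)).max()
bound = 2 * sigma[r:].sum()                  # balanced-truncation error bound
```

Here `err` stays below `bound`, mirroring the r = 4 selection made for the 7-state model in the text.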
Figures 6 - 9 compare the frequency response of the system model reduced to different state sizes. Indeed, reducing the system by discarding the states corresponding to the smaller Hankel singular values has relatively little effect on the frequency response. The frequency responses of the seven-state, six-state, five-state, and four-state systems are nearly indistinguishable from one another. Reducing the system further to a three-state or two-state system requires discarding states corresponding to much larger Hankel singular values, which results in much larger deviations in the frequency response behavior, as seen in the figures.

[Figure 6: Reduced Models of Varying Size 11 Channel]
[Figure 7: Reduced Models of Varying Size 12 Channel]

[Figure 8: Reduced Models of Varying Size 21 Channel]
[Figure 9: Reduced Models of Varying Size 22 Channel]
Based on this analysis, it is determined that the system will be reduced to a four-state model. The error in frequency response between this reduced model and the full-order model will be limited by the relatively small values of the Hankel singular values corresponding to the discarded states, σ_{H,5}, σ_{H,6}, and σ_{H,7}.

Figures 10 - 13 show the errors in frequency response that result from reducing the system from the original full-order model to a four-state model for each of the channels. Figure 14 shows the total combined error for the MIMO system. The four-state model approximates the frequency response of the full-order model with good accuracy; the cost of reducing the system beyond a minimal realization is relatively small.

[Figure 10: Error Full Order Model vs Reduced Model 11 Channel]
[Figure 11: Error Full Order Model vs Reduced Model 12 Channel]

[Figure 12: Error Full Order Model vs Reduced Model 21 Channel]
[Figure 13: Error Full Order Model vs Reduced Model 22 Channel]

[Figure 14: Error Full Order Model vs Reduced Model MIMO System]
2.5 Final Reduced Model

The final result is a mathematical model of a four-state MIMO system that closely models the actual system in terms of frequency response (Figures 15 - 18). The four-state model maintains the high-pass filter, low-pass filter, and resonance characteristics seen in the measured data. For the reduced system, λ(A) = {−7.599, −7.601, −3.645 ± j72.8042}, allowing us to confirm that the reduced model maintains asymptotic stability. Moreover, we see that

\[
\lambda_{1,2} \approx -\omega_l = -\omega_h, \qquad
\lambda_{3,4} \approx -\zeta\omega_n \pm j\omega_n\sqrt{1 - \zeta^2}
\]

from the original low-pass filter, high-pass filter, and oscillator transfer functions used when fitting the data in the frequency domain, with very little loss in accuracy. We also see that the system is indeed balanced and minimal, as we have λ(G_c) = λ(G_o) = {5.299, 4.825, 0.888, 0.423}.

[Figure 15: Reduced Model 11 Channel]
[Figure 16: Reduced Model 12 Channel]

[Figure 17: Reduced Model 21 Channel]
[Figure 18: Reduced Model 22 Channel]
3 Model Estimation from Nonlinear Least-Squares Solution

3.1 Least-Squares Problem

3.1.1 Linear Least-Squares

It is often the case that we wish to estimate the values of a set of parameters, x, that is linearly related to a set of measurements, y,

\[
\begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_m \end{bmatrix} =
\begin{bmatrix}
A_{11} & A_{12} & \cdots & A_{1n} \\
A_{21} & A_{22} & \cdots & A_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
A_{m1} & A_{m2} & \cdots & A_{mn}
\end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}
\]

but the system is overdetermined, i.e. there are m linear equations in n unknown coefficients with m > n. In general, y = Ax has no solution because in general y ∉ R(A). Because there is no solution, we wish to find an estimate that best models the measured data according to some criterion. Defining the error between estimate and measurement as

\[
e = y - Ax
\]

we see that we cannot obtain zero error, so we must minimize it in some manner. The least-squares method results from using a cost function that is the sum of the squared errors:

\[
J = \|y - Ax\|_2^2
\]

The least-squares solution is then a matter of finding an estimate where this cost function is minimized:

\[
x_{LS} = \arg\min_x \|y - Ax\|_2^2
\]

The linear least-squares problem has a unique, closed-form solution that can be found by applying the necessary condition that the cost function, J, has a stationary value at the optimum point. This gives us n gradient equations that must be zero,

\[
\frac{\partial J}{\partial x_i} = 0, \qquad i = 1, 2, \ldots, n
\]

the solution of which yields the vector x_{LS} of optimal parameter values.

3.1.2 Extension to Nonlinear Least-Squares

Practically, it is often the case that we have a nonlinear relationship between our set of measurements, y, and parameters, x,

\[
y = f(x)
\]
Thus, when the gradient vector of the cost function is set to zero, the resulting equations become functions of both the independent variable and the parameters. The gradient equations do not have a closed-form solution. Instead, we approximate the model with a linear one and refine the parameters iteratively, with initial values for the parameters guessed. Finding the solution to a nonlinear least-squares problem is thus much more difficult: there is no closed-form solution, the problem may have multiple local minima, making it difficult to find the global minimum, and convergence is not guaranteed.

3.2 Application to Transfer Function Identification

We now consider estimation of a linear system model on the basis of frequency domain data, utilizing a least-squares minimization of the error between the frequency response of the estimated transfer function model and the measured data.

3.2.1 Model Representation

It is desired to evaluate the coefficients, P = [a_0, a_1, ..., a_n, b_1, b_2, ..., b_d]^T, of the rational transfer function model expressed as

\[
H(s, P) = \frac{N(s, P)}{D(s, P)} = \frac{\sum_{k=0}^{n} a_k s^k}{1 + \sum_{k=1}^{d} b_k s^k}
\]

such that it best models a discrete set of measured frequency response data H_m(s_k), k = 1, 2, ..., F. We wish to evaluate P by minimizing some cost criterion related to the error between these transfer functions at all of the experimental points in a least-squares sense.

3.2.2 Unweighted Nonlinear Least-Squares Problem

Defining the error at each measured point k as

\[
e_k = H_m(s_k) - H(s_k, P)
\]

the unweighted least-squares problem is

\[
\arg\min_P \sum_{k=1}^{F} |H_m(s_k) - H(s_k, P)|^2 =
\arg\min_P \sum_{k=1}^{F} \left| \frac{H_m(s_k)D(s_k, P) - N(s_k, P)}{D(s_k, P)} \right|^2
\]

This is a nonlinear least-squares problem, making it difficult to estimate the system parameters in an accurate and efficient manner. If we had a priori knowledge of the system poles, the parameters in the denominator would be known and the problem would reduce to a linear one; in practice, this is often not a realistic scenario.
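The unweighted nonlinear problem can be attacked directly with a general-purpose iterative solver. The following is a minimal sketch, assuming SciPy is available; the first-order model and the synthetic "measured" data are purely illustrative, and as the text notes, the result depends on the initial guess.

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic measured response of a hypothetical model H(s) = a0 / (1 + b1 s)
w = np.logspace(-1, 2, 50)
s = 1j * w
Hm = 2.0 / (1.0 + 0.5 * s)

def residuals(P):
    """Stacked real/imag parts of e_k = Hm(s_k) - H(s_k, P), with P = [a0, b1]."""
    a0, b1 = P
    e = Hm - a0 / (1.0 + b1 * s)
    return np.concatenate([e.real, e.imag])

# Iterative refinement from a guessed starting point; convergence to the
# global minimum is not guaranteed in general.
sol = least_squares(residuals, x0=[1.0, 1.0])
```

Splitting the complex error into real and imaginary parts keeps the residual vector real, which is what the solver expects.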
3.2.3 Levy's Linearized Estimator

An early approximation of the nonlinear least-squares problem was obtained by using the denominator, D(s_k, P), as a weighting function [3]. The weighted error at each point becomes

\[
e_k D(s_k, P) = H_m(s_k)D(s_k, P) - N(s_k, P)
\]

and the least-squares problem becomes

\[
\arg\min_P \sum_{k=1}^{F} |H_m(s_k)D(s_k, P) - N(s_k, P)|^2
\]

This reduces the difficult nonlinear least-squares problem to a linear least-squares problem, from which a closed-form solution can be obtained. By partially differentiating the new weighted cost function with respect to each of the unknown polynomial coefficients and setting the results equal to zero, a set of linear equations is obtained from which P is solved. We see that this formulation reduces to the unweighted least-squares problem when 1/|D(s_k, P)|^2 equals one across all frequencies.

While this choice of weighting function linearizes the problem, it clearly biases the least-squares solution. In particular, if the frequency values in the problem span several decades, the lower frequency values will have little influence and a good fit cannot be obtained at these lower frequencies.

3.2.4 Sanathanan-Koerner Iteration

An iterative approach to mitigate these issues is to modify the weighting function to D(s_k, P_L)/D(s_k, P_{L-1}), where the subscript L is the iteration number [5]. The weighted error at each point becomes

\[
e_k \frac{D(s_k, P_L)}{D(s_k, P_{L-1})} = \frac{H_m(s_k)D(s_k, P_L) - N(s_k, P_L)}{D(s_k, P_{L-1})}
\]

and the least-squares problem at iteration L becomes

\[
\arg\min_{P_L} \sum_{k=1}^{F} \left| \frac{H_m(s_k)D(s_k, P_L) - N(s_k, P_L)}{D(s_k, P_{L-1})} \right|^2
\]

By using this modified weighting function, the problem still reduces to a linear one at each iteration, because the denominator weight is not a function of P_L, while the biasing issues are now alleviated.

Several extensions to the Sanathanan-Koerner iteration can be implemented to aid in finding a solution that converges, obtaining smaller approximation errors, and obtaining a stable model.
In particular, unstable poles can be mirrored to the left half of the complex plane between iterations, and the cost
function can be modified by raising the denominator to a power, r, such that it becomes

\[
\arg\min_{P_L} \sum_{k=1}^{F} \left| \frac{H_m(s_k)D(s_k, P_L) - N(s_k, P_L)}{D(s_k, P_{L-1})^r} \right|^2, \qquad r \in [0, \infty)
\]

We see that when r = 0 this reduces to the problem from Levy's linear estimator, and when r = 1 it reduces to the Sanathanan-Koerner iteration. Using powers different from one can potentially reduce approximation errors, and relaxing the power to r < 1 can help with obtaining a solution that converges.

By setting ∂J/∂P_L equal to zero, the same set of linear equations obtained from the formulation of Levy's linear estimator is obtained, with the only modification being the extra weighting term 1/|D(s_k, P_{L-1})^r|^2 in all equations. This yields the modified set of linear equations of the form

\[
MP = C
\]

\[
M =
\begin{bmatrix}
\lambda_0 & 0 & -\lambda_2 & \cdots & T_1 & S_2 & -T_3 & \cdots \\
0 & \lambda_2 & 0 & \cdots & -S_2 & T_3 & S_4 & \cdots \\
\lambda_2 & 0 & -\lambda_4 & \cdots & T_3 & S_4 & -T_5 & \cdots \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots \\
T_1 & -S_2 & -T_3 & \cdots & U_2 & 0 & -U_4 & \cdots \\
S_2 & T_3 & -S_4 & \cdots & 0 & U_4 & 0 & \cdots \\
T_3 & -S_4 & -T_6 & \cdots & U_4 & 0 & -U_6 & \cdots \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots
\end{bmatrix},
\qquad
P =
\begin{bmatrix}
a_0 \\ a_1 \\ a_2 \\ \vdots \\ b_1 \\ b_2 \\ b_3 \\ \vdots
\end{bmatrix},
\qquad
C =
\begin{bmatrix}
S_0 \\ T_1 \\ S_2 \\ \vdots \\ 0 \\ U_2 \\ 0 \\ \vdots
\end{bmatrix}
\]
where

\[
\lambda_i = \sum_{k=1}^{F} \frac{\omega_k^i}{|D(s_k, P_{L-1})^r|^2}, \qquad
S_i = \sum_{k=1}^{F} \frac{\omega_k^i\,\mathrm{Re}(H_m(s_k))}{|D(s_k, P_{L-1})^r|^2}
\]

\[
T_i = \sum_{k=1}^{F} \frac{\omega_k^i\,\mathrm{Im}(H_m(s_k))}{|D(s_k, P_{L-1})^r|^2}, \qquad
U_i = \sum_{k=1}^{F} \frac{\omega_k^i\left(\mathrm{Re}^2(H_m(s_k)) + \mathrm{Im}^2(H_m(s_k))\right)}{|D(s_k, P_{L-1})^r|^2}
\]

The coefficients at each iteration, L, are evaluated by solving for the parameters, P, in the linearized equations. As these parameters are not known initially, an initial choice of D(s_k, P_0) = 1 is assumed for the first iteration. Subsequent iterations tend to converge rapidly and the process is repeated until some convergence criterion is met, where

\[
J_L \to J_\infty, \qquad P_L \to P_\infty \qquad (L \to \infty)
\]

3.2.5 Extension to MIMO System Model Estimation

The same least-squares algorithm can be extended to evaluate the entire MIMO system simultaneously with a single set of linear equations. All transfer functions must share the same denominator, and the sets of linear equations for the individual SISO channels are then stacked together into one MIMO set of equations. The equations become MP = C with
\[
M =
\begin{bmatrix}
\lambda_{0,11} & 0 & \cdots & & & & T_{1,11} & S_{2,11} & \cdots \\
0 & \lambda_{2,11} & \cdots & & & & -S_{2,11} & T_{3,11} & \cdots \\
\vdots & \vdots & \ddots & & & & \vdots & \vdots & \\
& & & \lambda_{0,22} & 0 & \cdots & T_{1,22} & S_{2,22} & \cdots \\
& & & 0 & \lambda_{2,22} & \cdots & -S_{2,22} & T_{3,22} & \cdots \\
& & & \vdots & \vdots & \ddots & \vdots & \vdots & \\
T_{1,11} & -S_{2,11} & \cdots & T_{1,22} & -S_{2,22} & \cdots & U_2 & 0 & \cdots \\
S_{2,11} & T_{3,11} & \cdots & S_{2,22} & T_{3,22} & \cdots & 0 & U_4 & \cdots \\
\vdots & \vdots & & \vdots & \vdots & & \vdots & \vdots & \ddots
\end{bmatrix}
\]

where the blank entries are zero and the 12 and 21 channels contribute diagonal blocks of the same pattern (elided above) between the 11 and 22 blocks, and

\[
P =
\begin{bmatrix}
a_{0,11} \\ a_{1,11} \\ \vdots \\ a_{0,12} \\ a_{1,12} \\ \vdots \\ a_{0,21} \\ a_{1,21} \\ \vdots \\ a_{0,22} \\ a_{1,22} \\ \vdots \\ b_1 \\ b_2 \\ \vdots
\end{bmatrix}
\]
\[
C =
\begin{bmatrix}
S_{0,11} \\ T_{1,11} \\ \vdots \\ S_{0,12} \\ T_{1,12} \\ \vdots \\ S_{0,21} \\ T_{1,21} \\ \vdots \\ S_{0,22} \\ T_{1,22} \\ \vdots \\ 0 \\ U_2 \\ \vdots
\end{bmatrix}
\]

where the terms λ_{i,q}, S_{i,q}, and T_{i,q} now apply specifically to channel q, while the terms U_i sum together across all channels. The coefficient parameters, P, are then solved for in the same manner as shown previously for the SISO transfer functions, with the numerator coefficients a_{0,q}, a_{1,q}, a_{2,q}, ... now unique to channel q and the denominator coefficients b_1, b_2, b_3, ... shared by each transfer function.

3.2.6 Polynomial Order and Weighting Function Selection

Because this is an iterative algorithm approximating a nonlinear problem, there is no guarantee that it will converge to a least-squares solution. It is also possible for the algorithm to converge to a local minimum rather than the desired global minimum. While these are possible outcomes, the method does allow freedom of choice regarding the orders of the polynomials, n and d, in the transfer function being solved for, as well as the weighting power, r. By appropriately selecting and tuning these parameters, it is possible to steer towards desirable results.

From examination of the frequency response of the different channels, it is believed that the 11 channel has an excess of four poles over the number of zeros, the 12 and 21 channels have an excess of three poles, and the 22 channel has an excess of two poles. With this in mind, the orders of the numerator and denominator polynomials in the transfer functions were selected through experimentation until adequate solutions were achieved. A good fifth-order model was obtained with d = 5, n_{11} = 1, n_{12} = 2, n_{21} = 2, n_{22} = 3, and r = 1.1.
A fourth-order model, with larger approximation errors, was also obtained with d = 4, n_{11} = 0, n_{12} = 1, n_{21} = 1, n_{22} = 2, and r = 0.83.
3.3 Evolution of Iterations Towards Convergence

The least-squares algorithm is now applied to the entire MIMO system using the fifth-order model, and the refinement in error minimization with each iteration is examined. Figure 19 shows the convergence of the weighted errors in the cost function as the entire system is solved simultaneously; there is naturally a single point of convergence across the system. The errors dive down initially, level out, and then decrease again before finally settling into the minimum obtained from the final least-squares solution. Figures 20 - 23 show the evolution of the first few iterations for each of the channels as they rapidly converge to the frequency response seen in the empirical data. In this case, the algorithm reaches the minimum at iteration L = 14, at which point J_L ≈ J_∞ and P_L ≈ P_∞.

[Figure 19: Convergence of Least Squares Solution Error, Fifth Order Model, r = 1.1]
[Figure 20: Evolution of Least Squares Iterations 11 Channel]

[Figure 21: Evolution of Least Squares Iterations 12 Channel]
[Figure 22: Evolution of Least Squares Iterations 21 Channel]

[Figure 23: Evolution of Least Squares Iterations 22 Channel]
3.4 Final Converged Least-Squares Solution Models

Two different models, a fourth-order MIMO transfer function and a fifth-order MIMO transfer function, are now considered in their final converged form. Figures 24 - 27 compare the frequency response of the models with the measured data for each channel. The fourth-order transfer function does model some properties of the measured data, but is lacking compared to the fifth-order transfer function. The fifth-order transfer function models the measured data much more accurately; the cost of the added order in the system model appears to be a worthwhile trade-off in this case. If there is a particular shortcoming with the model, it is that it has less damping at the natural frequency.

The asymptotic stability of both systems is confirmed by observing that all poles lie in the left-hand side of the complex plane (Figures 28 and 29). Moreover, the dynamics of the actual measured system can be seen explicitly and quite accurately modelled in the structure of the fifth-order transfer function pole-zero map. The complex conjugate pole pair models the resonant frequency. The zeros of the 11 and 12 channels are nearly identical, with the 12 channel having an extra zero that somewhat negates a pole contributing extra low-pass filter features to the 11 channel. Similarly, the zeros of the 21 and 22 channels are nearly identical, with the 22 channel having an extra zero that somewhat negates a pole contributing extra low-pass filter features to the 21 channel. The pole-zero map of the fourth-order transfer function very roughly displays a similar phenomenon, but with the results much more smeared.

[Figure 24: Least Squares Solutions 11 Channel]
[Figure 25: Least Squares Solutions 12 Channel]

[Figure 26: Least Squares Solutions 21 Channel]
[Figure 27: Least Squares Solutions 22 Channel]

[Figure 28: Fifth-Order Model Pole Zero Map]
[Figure 29: Fourth-Order Model Pole Zero Map]
4 Subspace Realization from Impulse Response

We now consider system identification in the time domain, given the response of a dynamic system to a unit input pulse in discrete time.

4.1 Discrete-Time State Space Model

A discrete-time model of a system in state space form can be represented by a set of linear difference equations

\[
x_{k+1} = Ax_k + Bu_k, \qquad y_k = Cx_k + Du_k
\]

where the integer k is discrete time, x_k is the state vector, u_k is the input vector, and y_k is the output vector. In general, A ∈ C^{n×n}, B ∈ C^{n×m}, C ∈ C^{p×n}, and D ∈ C^{p×m} for an m-input/p-output system. Here we will consider a one-input/three-output system, m = 1 and p = 3. The objective is to identify the time-invariant system matrices (A, B, C, D).

4.2 Markov Parameters

The output of the system, y_k, can be represented as a sequence of weighted inputs. Assuming a zero initial state, x_0 = 0, the output can be obtained through repeated substitution of the state-space equations as

\[
y_k = \sum_{i=1}^{k} CA^{i-1}Bu_{k-i} + Du_k
\]

This sequence depends only on inputs; it is independent of any state measurements. The Markov parameter sequence for a state space model is obtained from the impulse response of the system. Given the unit input pulse

\[
u_k = \begin{cases} 1 & k = 0 \\ 0 & k = 1, 2, 3, \ldots \end{cases}
\]

it follows that the response of the state space model, with an assumed zero initial state, is

\[
h_k = \begin{cases} D & k = 0 \\ CA^{k-1}B & k = 1, 2, 3, \ldots \end{cases}
\]

where h_k ∈ R^{p×m}. The impulse response terms CA^kB for k = 0, 1, 2, ... are the Markov parameters of the state space model. Markov parameters are invariant to state transformations. They are also unique for a given system, because the parameters are the pulse response of the system. We can see that (A, B, C, D) is a realization of {h_k}_{k=0}^∞.
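The equivalence between the Markov parameters and the pulse response can be checked directly. This is a minimal sketch, assuming NumPy is available; the 2-state single-input/single-output system is hypothetical (the paper's system is one-input/three-output), and the function name is illustrative.

```python
import numpy as np

def markov_parameters(A, B, C, D, N):
    """First N Markov parameters: h_0 = D, h_k = C A^{k-1} B for k >= 1."""
    h = [np.atleast_2d(D)]
    AkB = B                      # holds A^{k-1} B
    for _ in range(1, N):
        h.append(C @ AkB)
        AkB = A @ AkB
    return h

# Hypothetical stable discrete-time system (illustrative values only)
A = np.array([[0.5, 0.1],
              [0.0, 0.3]])
B = np.array([[1.0],
              [0.5]])
C = np.array([[1.0, 1.0]])
D = np.array([[0.0]])
h = markov_parameters(A, B, C, D, 6)
```

Simulating the difference equations with u_0 = 1, u_k = 0 for k ≥ 1 and x_0 = 0 reproduces exactly this sequence, which is the fact the identification method exploits.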
4.3 Measured Impulse Response

The Markov parameters can be constructed from the measured impulse response without explicit knowledge of the system matrices. In this case, we have a one-input/three-output system, set up according to the diagram in Figure 30. An 8V impulse is input to the system at t = 1s and the three system outputs, y_1, y_2, and y_3, are sampled at 100Hz.

[Figure 30: System Block Diagram — the input u passes through a high-pass filter (output y_1), then a low-pass filter (output y_2), then an oscillator (output y_3)]

Figures 31 - 33 show the measured impulse responses for the three outputs: y_1 measures the input u having passed through the high-pass filter; y_2 measures the input u having passed through the high-pass filter and low-pass filter; y_3 measures the input u having passed through the high-pass filter, low-pass filter, and oscillator. For the purpose of constructing a unit impulse response sequence in the manner previously described, the outputs are manipulated: they are calibrated by subtracting out the averaged measurements prior to the impulse, and normalized to the response from a unit impulse.

[Figure 31: Measured Impulse Response Output 1]
[Figure 32: Measured Impulse Response Output 2]

[Figure 33: Measured Impulse Response Output 3]
4.4 Hankel Matrix

The Hankel matrices of the system are now constructed from the Markov parameters of the measured impulse response, h_k, according to

\[
H(k-1) =
\begin{bmatrix}
h_k & h_{k+1} & h_{k+2} & \cdots & h_{k+J-1} \\
h_{k+1} & h_{k+2} & h_{k+3} & \cdots & h_{k+J} \\
h_{k+2} & h_{k+3} & h_{k+4} & \cdots & h_{k+J+1} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
h_{k+L-1} & h_{k+L} & h_{k+L+1} & \cdots & h_{k+L+J-2}
\end{bmatrix}
\]

where H(k−1) ∈ R^{pL×mJ}. Of particular interest are the Hankel matrices for the cases k = 1 and k = 2:

\[
H(0) =
\begin{bmatrix}
h_1 & h_2 & h_3 & \cdots & h_J \\
h_2 & h_3 & h_4 & \cdots & h_{J+1} \\
h_3 & h_4 & h_5 & \cdots & h_{J+2} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
h_L & h_{L+1} & h_{L+2} & \cdots & h_{L+J-1}
\end{bmatrix},
\qquad
H(1) =
\begin{bmatrix}
h_2 & h_3 & h_4 & \cdots & h_{J+1} \\
h_3 & h_4 & h_5 & \cdots & h_{J+2} \\
h_4 & h_5 & h_6 & \cdots & h_{J+3} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
h_{L+1} & h_{L+2} & h_{L+3} & \cdots & h_{L+J}
\end{bmatrix}
\]

We must determine how many data points are necessary in the construction of the Hankel matrices in order to adequately capture the characteristics of the system. If the minimal dimension of the system is not immediately apparent, a large number of Markov parameters is used to ensure that the dimension is not underestimated. If it is known or guessed that the system is of dimension no larger than some n, then a minimum of 2n elements of the output sequence is required. While nine seconds of measurement from the time of impulse at 100Hz was collected in this case, we will assume less measured data will suffice and instead consider J = 300, L = 300.

It can be seen that these Hankel matrices are closely related to the controllability matrix, C_J, and the observability matrix, O_L, given as

\[
\mathcal{C}_J = \begin{bmatrix} B & AB & A^{2}B & \cdots & A^{J-1}B \end{bmatrix}, \qquad
\mathcal{O}_L = \begin{bmatrix} C \\ CA \\ CA^{2} \\ \vdots \\ CA^{L-1} \end{bmatrix}
\]
where C_J \in \mathbb{R}^{n \times mJ} and O_L \in \mathbb{R}^{pL \times n}. The Hankel matrices H(0) and H(1) are related to the controllability and observability matrices by the factorizations

\[
H(0) = O_L C_J \qquad H(1) = O_L A C_J
\]

In addition, the first m columns of H(0) and the first p rows of H(0) can be factored according to

\[
H(0) \begin{bmatrix} I_m \\ 0 \\ \vdots \\ 0 \end{bmatrix} = O_L B
\qquad
\begin{bmatrix} I_p & 0 & \cdots & 0 \end{bmatrix} H(0) = C\, C_J
\]

It is clear that the Hankel matrices contain information about the system matrices (A, B, C). We would like to first obtain the controllability and observability matrices from a suitable factorization of H(0). It would then follow that the system matrices can be estimated from factorizations of the Hankel matrices according to

\[
A = (O_L^* O_L)^{-1} O_L^* H(1) C_J^* (C_J C_J^*)^{-1}
\]
\[
B = (O_L^* O_L)^{-1} O_L^* H(0) \begin{bmatrix} I_m \\ 0 \\ \vdots \\ 0 \end{bmatrix}
\qquad
C = \begin{bmatrix} I_p & 0 & \cdots & 0 \end{bmatrix} H(0) C_J^* (C_J C_J^*)^{-1}
\]

We note that, separately, we can always obtain D = h_0.

4.5 Model Estimation from Singular Value Decomposition

It is now desired to obtain the controllability and observability matrices from a factorization of the Hankel matrix H(0). By using a singular value decomposition (SVD), the matrix H(0) is factored as

\[
H(0) = O_L C_J = U \Sigma V^*
\]

Several choices are considered regarding how the results of the SVD will be factored into the product of the controllability and observability matrices. One choice is to use an internally balanced factorization where

\[
C_J = \Sigma^{1/2} V^* \qquad O_L = U \Sigma^{1/2}
\]
This will be particularly helpful for later reduction of the system model. It then follows that the system matrix A can be estimated from H(1) as

\[
A = \Sigma^{-1/2} U^* H(1) V \Sigma^{-1/2}
\]

and the system matrices B and C can be estimated from the first m columns of H(0) and the first p rows of H(0), respectively, as

\[
B = \Sigma^{-1/2} U^* H(0) \begin{bmatrix} I_m \\ 0 \\ \vdots \\ 0 \end{bmatrix}
\qquad
C = \begin{bmatrix} I_p & 0 & \cdots & 0 \end{bmatrix} H(0) V \Sigma^{-1/2}
\]

It can be seen from the results of the SVD that, while \Sigma is unique, U and V are not. This means that the estimated system matrices (A, B, C) are not unique, because they depend on U and V.

4.6 Full-Order Model

Having obtained (A, B, C, D) for the estimated system model, its characteristics are now compared with those obtained from the measured data. Because these system matrices were solved for with Hankel matrices constructed with J = 300 and L = 300, the system model has 300 states. Figures 34 - 36 compare the impulse response of the estimated system model with the measured impulse response data and show the identified model closely approximating the actual one. Figures 37 - 39 show the errors in the impulse response of the estimated model. They show that construction of the Hankel matrices from 3s of measured data was indeed enough to adequately capture the characteristics of the system; there is relatively little error even after 3s. While the estimated system models the first 3s of impulse response very accurately with essentially no error, this also means that it includes any noise in the measured data, and so it perhaps does not accurately approximate the "true" noise-free system.

Additionally, the estimated model can be compared with the measured data in the frequency domain. This is possible because the measured data is from an impulse response. Taking the z-transform of the state-space equations converts the system to a transfer function in the frequency domain, which is equivalent to taking the discrete Fourier transform of the impulse response.
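The stated equivalence between the z-transform of the state-space model and the DFT of the impulse response can be checked numerically. The 1-state system below is an illustrative assumption, not the measured system; the check relies on the impulse response having decayed within the record length:

```python
import numpy as np

# Illustrative 1-state discrete-time model (A, B, C, D).
A, B, C, D = 0.8, 1.0, 1.0, 0.5

# Impulse response: h_0 = D, h_k = C A^{k-1} B for k >= 1.
N = 256
h = np.array([D] + [C * A ** (k - 1) * B for k in range(1, N)])

# (1) DFT of the impulse response.
H_fft = np.fft.fft(h)

# (2) Transfer function H(z) = C (z - A)^{-1} B + D evaluated
#     at the N-th roots of unity on the unit circle.
z = np.exp(2j * np.pi * np.arange(N) / N)
H_tf = C * B / (z - A) + D
```

Since |A| < 1, the truncated response beyond N samples is negligible and the two frequency responses agree to machine precision; this is the comparison behind Figures 40 - 42.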
Figures 40 - 42 compare the frequency response of the estimated system model with the actual system, obtained by taking the fast Fourier transform of the measured impulse response data. The model closely approximates the measured data in the frequency domain, with the presence of noise particularly noticeable at lower magnitudes.
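Putting Sections 4.4 - 4.6 together, the balanced estimation of (A, B, C) can be sanity-checked end-to-end on a small synthetic, noise-free system. The 2-state example is an illustrative assumption, not the measured system:

```python
import numpy as np

# Illustrative 2-state, single-input/single-output "true" system.
A_true = np.array([[0.9, 0.2], [0.0, 0.5]])
B_true = np.array([[1.0], [1.0]])
C_true = np.array([[1.0, 0.0]])

# Markov parameters h_{k+1} = C A^k B from the (noise-free) impulse response.
L = J = 10
h = [(C_true @ np.linalg.matrix_power(A_true, k) @ B_true)[0, 0]
     for k in range(2 * J)]

H0 = np.array([[h[i + j] for j in range(J)] for i in range(L)])
H1 = np.array([[h[i + j + 1] for j in range(J)] for i in range(L)])

# Internally balanced factorization H(0) = (U S^{1/2})(S^{1/2} V*) via the SVD.
U, s, Vh = np.linalg.svd(H0)
n = int(np.sum(s > 1e-10 * s[0]))     # numerical rank = model order
U, s, Vh = U[:, :n], s[:n], Vh[:n, :]
Sih = np.diag(1.0 / np.sqrt(s))       # Sigma^{-1/2}

A = Sih @ U.T @ H1 @ Vh.T @ Sih       # Sigma^{-1/2} U* H(1) V Sigma^{-1/2}
B = Sih @ U.T @ H0[:, :1]             # first m columns of H(0)
C = H0[:1, :] @ Vh.T @ Sih            # first p rows of H(0)
```

On noise-free data the recovered realization reproduces the Markov parameters exactly, even though (A, B, C) differ from (A_true, B_true, C_true) by a similarity transformation, reflecting the non-uniqueness of U and V noted above.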
[Figure 34: Full-Order Model Impulse Response Output 1]

[Figure 35: Full-Order Model Impulse Response Output 2]
[Figure 36: Full-Order Model Impulse Response Output 3]

[Figure 37: Full-Order Model Impulse Response Error Output 1]
[Figure 38: Full-Order Model Impulse Response Error Output 2]

[Figure 39: Full-Order Model Impulse Response Error Output 3]
[Figure 40: Full-Order Model Frequency Response Output 1]

[Figure 41: Full-Order Model Frequency Response Output 2]
[Figure 42: Full-Order Model Frequency Response Output 3]
Figure 43 shows the 300 eigenvalues of the identified system model, most of which are very slow ones lying close to the unit circle. For the system to be asymptotically stable, it requires |\lambda_i| < 1, i = 1, 2, \dots, 300, but there are actually 18 eigenvalues of magnitude larger than one. Because these eigenvalues are so slow, the instability was not seen in the previous 5-second impulse response plots. Expanding the impulse response data out to 100 seconds confirms the instability of the system model due to a handful of very slow eigenvalues (Figure 44). While it initially appeared that the obtained system realization modeled the actual system very accurately, it is actually inadequate due to its lack of stability.

[Figure 43: Eigenvalues for Full-Order System]
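Screening an identified model for the |\lambda_i| < 1 condition is straightforward. The helper name and the 3-state example with one slow unstable mode are illustrative assumptions:

```python
import numpy as np

def unstable_eigs(A):
    """Return the eigenvalues of the discrete-time system matrix A
    that violate asymptotic stability, i.e. those with |lambda| >= 1."""
    lam = np.linalg.eigvals(A)
    return lam[np.abs(lam) >= 1.0]

# A model with one very slow unstable mode, of the kind found in the
# full-order fit above:
A = np.diag([0.95, 0.6, 1.001])
bad = unstable_eigs(A)
```

A mode with |\lambda| only slightly above 1 grows very slowly at first, which is why a short impulse-response window can hide the instability and a longer simulation (as in Figure 44) is needed to expose it.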
[Figure 44: Instability of Full-Order System Model shown in Impulse Response]

4.7 Model Reduction

It is now desired to compute an asymptotically stable system realization of reduced order whose impulse response best fits the measured one. Figure 45 shows the singular values of the Hankel matrix (constructed with J = 300, L = 300) from which the 300-state system model was constructed. With perfect, noise-free data, the minimal realization can easily be obtained by keeping only the non-zero singular values from the SVD of H(0). However, with experimentally obtained data corrupted by noise, the Hankel matrix will usually be of full rank rather than the order of the underlying system, as is the case here. As a result, the system realization may be of much larger dimension than the dynamic system actually warrants, motivating the desire to reduce the system. While the system model is composed of 300 states, it is clear that several singular values are much larger than the rest. This leads to the belief that this system can be represented by a realization of much smaller order with relatively little loss in system input-output properties.
[Figure 45: Singular Values of H(0)]

The discrete-time controllability and observability grammians for the system are defined as

\[
G_{c,J} = C_J C_J^* = \sum_{k=0}^{J-1} A^k B B^* A^{*k}
\qquad
G_{o,L} = O_L^* O_L = \sum_{k=0}^{L-1} A^{*k} C^* C A^k
\]

Having chosen an internally balanced realization where

\[
C_J = \Sigma^{1/2} V^* \qquad O_L = U \Sigma^{1/2}
\]

we see that our controllability and observability grammians satisfy

\[
G_{c,J} = G_{o,L} = \Sigma
\]

Thus, the system is in a coordinate system in which it is as controllable as it is observable. This makes it a convenient one from which to remove low-gain system modes.

In order to reduce the order of the system realization, we wish to approximate the original Hankel matrix H(0) of rank n by another matrix \hat{H}(0) of rank r < n, such that we minimize

\[
\| H(0) - \hat{H}(0) \|
\]
This desired matrix \hat{H}(0) is obtained by taking the SVD of H(0), setting the smallest non-zero singular values to zero, and multiplying the matrices back together. That is,

\[
H(0) = U \Sigma V^*, \qquad
\Sigma = \begin{bmatrix}
\sigma_1 & & & \\
& \sigma_2 & & \\
& & \ddots & \\
& & & \sigma_n
\end{bmatrix}
\]

while

\[
\hat{H}(0) = U \hat{\Sigma} V^*, \qquad
\hat{\Sigma} = \begin{bmatrix}
\sigma_1 & & & & & \\
& \ddots & & & & \\
& & \sigma_r & & & \\
& & & 0 & & \\
& & & & \ddots & \\
& & & & & 0
\end{bmatrix}
\]

where \hat{H}(0) is the closest matrix of rank r to H(0). Having obtained \hat{H}(0) = U \hat{\Sigma} V^* of reduced rank r in this manner, we seek to determine the corresponding system matrix \hat{A}, such that we minimize

\[
\| U \Sigma^{1/2} A \Sigma^{1/2} V^* - U \hat{\Sigma}^{1/2} \hat{A} \hat{\Sigma}^{1/2} V^* \|
= \| U^* H(1) V - \hat{\Sigma}^{1/2} \hat{A} \hat{\Sigma}^{1/2} \|
\]

By noting that \hat{\Sigma}^{1/2} \hat{A} \hat{\Sigma}^{1/2} is zero in the last n - r rows and columns (based on our construction of \hat{\Sigma}), we can see that this minimum will occur if

\[
\hat{\Sigma}^{1/2} \hat{A} \hat{\Sigma}^{1/2} = U^* H(1) V
\]

in the upper r x r submatrix. We are left with a freedom of choice in the selection of the remaining elements of \hat{A} in the last n - r rows and columns; this selection has no effect on \hat{\Sigma}^{1/2} \hat{A} \hat{\Sigma}^{1/2}. Thus, in the interest of obtaining a system matrix of the smallest dimension, the last n - r rows and columns are set equal to zero. This allows us to obtain the following system matrices, where (\cdot)^\dagger denotes the pseudoinverse (so that \hat{\Sigma}^{1/2\dagger} inverts the r retained singular values and is zero elsewhere):

\[
\hat{A} = \hat{\Sigma}^{1/2\dagger} U^* H(1) V \hat{\Sigma}^{1/2\dagger}
\qquad
\hat{B} = \hat{\Sigma}^{1/2\dagger} U^* H(0) \begin{bmatrix} I_m \\ 0 \\ \vdots \\ 0 \end{bmatrix}
\]
\[
\hat{C} = \begin{bmatrix} I_p & 0 & \cdots & 0 \end{bmatrix} H(0) V \hat{\Sigma}^{1/2\dagger}
\]

but with \hat{A} constructed with zeros in the last n - r rows and columns, \hat{B} constructed with zeros in the last n - r rows, and \hat{C} constructed with zeros in the last n - r columns. By taking the upper r x r submatrix of \hat{A}, the first r rows of \hat{B}, and the first r columns of \hat{C}, we calculate a smaller system of rank r from H(0) of rank n that approximates the original full-order model.

We now consider just how small we can make the rank of our Hankel matrix, and the corresponding state size of our system model, while still closely modeling the full-order system. Figure 46 shows the fifteen largest singular values of H(0). The Hankel matrix has seven singular values in particular that are much larger than the remaining ones. This motivates the belief that the system can perhaps be reduced to a seven-state model with relatively little loss in system input-output properties.

[Figure 46: Fifteen Largest Singular Values]

Figures 47 - 49 show the errors in impulse response that result from reducing the full-order model to various sizes. We see that, indeed, reducing the model to anything less than a seven-state model introduces relatively large errors; this corresponds to removing one of the relatively larger singular values. By keeping the seven largest singular values in our seven-state model, we retain relatively little error. We also see that there is much less benefit in increasing our system model to an eight-state one; this corresponds to keeping an extra singular value that is significantly smaller. The difference in impulse responses between the seven-state model and the eight-state one is relatively small; the seven-state model already has little error.
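The truncation described above can be sketched as a short routine. The function name, the 3-state toy system with one weakly coupled mode, and the tolerance used are illustrative assumptions, not the report's data:

```python
import numpy as np

def reduced_realization(H0, H1, r, m=1, p=1):
    """Rank-r truncation of the balanced realization: keep the r largest
    singular values of H(0) and form (A_r, B_r, C_r) as the surviving
    upper-left blocks of the pseudoinverse formulas above."""
    U, s, Vh = np.linalg.svd(H0, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
    Sih = np.diag(1.0 / np.sqrt(s))
    Ar = Sih @ U.T @ H1 @ Vh.T @ Sih
    Br = Sih @ U.T @ H0[:, :m]
    Cr = H0[:p, :] @ Vh.T @ Sih
    return Ar, Br, Cr

# Toy 3-state system whose third mode is weakly coupled, so its
# Hankel singular value is small and can be discarded.
A3 = np.diag([0.9, 0.5, 0.05])
B3 = np.array([[1.0], [1.0], [0.01]])
C3 = np.array([[1.0, 1.0, 0.01]])
h = [(C3 @ np.linalg.matrix_power(A3, k) @ B3)[0, 0] for k in range(16)]
H0 = np.array([[h[i + j] for j in range(8)] for i in range(8)])
H1 = np.array([[h[i + j + 1] for j in range(8)] for i in range(8)])
Ar, Br, Cr = reduced_realization(H0, H1, 2)
```

Here the 2-state model reproduces the impulse response to within roughly the size of the discarded singular value, mirroring the seven-versus-eight-state trade-off: keeping the weak third state would buy very little additional accuracy.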
[Figure 47: Output 1 Impulse Response Errors for Reduced Systems]

[Figure 48: Output 2 Impulse Response Errors for Reduced Systems]
[Figure 49: Output 3 Impulse Response Errors for Reduced Systems]
4.8 Final Reduced Model

The final result is an asymptotically stable, reduced seven-state, one-input/three-output system that models the actual system from which the measured data was collected. Figures 50 - 52 compare the impulse response of the reduced-order model with the measured data, while Figures 53 - 55 compare the frequency response. The reduced-order system closely approximates the actual system, maintaining the high-pass filter, low-pass filter, and resonant characteristics. Asymptotic stability is confirmed by noting that |\lambda_i| < 1, i = 1, 2, \dots, r in Figure 56. The complex conjugate pair contributing to the resonant frequency is seen at 0.7203 \pm j0.6404, while the remaining eigenvalues closer to the imaginary axis contribute primarily to the low-pass filter and high-pass filter features. Figures 57 - 59 show the error in impulse response for the reduced-order model compared to the measured data. It can be argued that this is noise that was removed from the measured data in the construction of a better model of the "true" system.

[Figure 50: Reduced-Order Model Impulse Response Output 1]
[Figure 51: Reduced-Order Model Impulse Response Output 2]

[Figure 52: Reduced-Order Model Impulse Response Output 3]
[Figure 53: Reduced-Order Model Frequency Response Output 1]

[Figure 54: Reduced-Order Model Frequency Response Output 2]
[Figure 55: Reduced-Order Model Frequency Response Output 3]

[Figure 56: Eigenvalues for Reduced-Order Model]
[Figure 57: Reduced-Order Model Impulse Response Error Output 1]

[Figure 58: Reduced-Order Model Impulse Response Error Output 2]
[Figure 59: Reduced-Order Model Impulse Response Error Output 3]
5 Conclusion

Through the various methods presented, a four-state continuous-time state-space model, a fifth-order transfer function, and a seven-state discrete-time state-space model were identified from noisy input-output data to model more complex MIMO systems with relatively good accuracy. Consideration was taken to arrive at these specific system sizes; it was shown that using smaller models adversely affected the system input-output properties, while using larger models provided little added benefit. Depending on their intended use, these reduced models could provide adequate representations of the actual systems without the cost associated with larger models. It was also shown that the reduced models can remove noise present in the larger models and may actually provide a more accurate representation of the true system. The same principles can be applied to much larger and more complex systems, where there is likely much more benefit in utilizing a model of reduced complexity. It was also demonstrated that asymptotic stability is not necessarily guaranteed by the methods presented, and special consideration needs to be taken to ensure it is achieved. Lastly, it should be noted that the methods presented are limited to linear time-invariant system models.
Bibliography

[1] Panos J. Antsaklis and Anthony N. Michel. A Linear Systems Primer. Birkhäuser, 2007.

[2] Gene F. Franklin, J. David Powell, and Michael L. Workman. Digital Control of Dynamic Systems. Ellis-Kagle Press, 2007.

[3] E. C. Levy. Complex-curve fitting. IRE Transactions on Automatic Control, AC-4(1):37-43, May 1959.

[4] R. Pintelon, P. Guillaume, Y. Rolain, J. Schoukens, and H. Van hamme. Parametric identification of transfer functions in the frequency domain: a survey. IEEE Transactions on Automatic Control, 39(11):2245-2260, November 1994.

[5] C. K. Sanathanan and J. Koerner. Transfer function synthesis as a ratio of two complex polynomials. IEEE Transactions on Automatic Control, 8(1):56-58, January 1963.

[6] Jason L. Speyer and Walter H. Chung. Stochastic Processes, Estimation, and Control. SIAM, 2008.

[7] H. Zeiger and A. McEwen. Approximate linear realizations of given dimension via Ho's algorithm. IEEE Transactions on Automatic Control, 19(2):153, April 1974.