Sequential MCMC Methods for Parameter Estimation
of LTI Systems Subjected to Non-stationary
Earthquake Excitations
A report submitted in partial fulfillment of the requirements for the degree of
Bachelor Of Technology
in
Civil Engineering
Submitted by
Anshul Goyal
(10010410)
Under the supervision of
Dr. Arunasis Chakraborty
DEPARTMENT OF CIVIL ENGINEERING
INDIAN INSTITUTE OF TECHNOLOGY GUWAHATI
April, 2014
Certificate
It is certified that the work contained in the project report entitled “Sequential MCMC
Methods for Parameter Estimation of LTI Systems Subjected to Non-stationary
Earthquake Excitations”, by Anshul Goyal (10010410) has been carried out under my
supervision and that this work has not been submitted elsewhere for the award of a degree
or diploma.
Date:
Dr. Arunasis Chakraborty
Associate Professor
Department of Civil Engineering
Indian Institute of Technology Guwahati
Acknowledgements
I would like to express my sincere thanks and gratitude to my project supervisor Dr. Aruna-
sis Chakraborty for his guidance, motivation and support throughout the course of the
project work. The thoughts and suggestions have been very useful to shape my current work
in the best possible way. Moreover, the regular talks and discussions have helped me to
establish my future goals. Throughout the project work he was always approachable and I
enjoyed working under his able guidance. Next, I would like to thank Prof. Anjan Dutta
and Prof. S. K. Deb for providing the necessary field data for carrying out the simulation.
I am also thankful to the HOD (Prof. Arup Kumar Sharma) of the Department of Civil
Engineering IIT Guwahati for providing the opportunity and the facilities to complete my
project work. I would also like to thank Mr. Swarup Mahato who is currently pursuing his
PhD under Dr. Chakraborty for all his help and discussions during the project work. My
friends and colleagues have always supported me during the entire project work. Finally, I
thank my family for their kind support and co-operation.
Date:
Anshul Goyal
(10010410)
IIT Guwahati, India
Abstract
In this report, sequential Markov Chain Monte Carlo (MCMC) simulation based algorithms
(a.k.a. particle filters) are used for parameter estimation of a linear second order dynamical
system. In comparison to Kalman filters, they are more general and applicable to systems
where the model and measurement equations are highly nonlinear. The present study mainly
focuses on the implementation of Sequential Importance Sampling (SIS), Sequential Importance
Resampling (SIR) and the Bootstrap Filter (BF) for identifying the parameters of a three storied
shear building model and a fixed base multi storied RC framed building at IIT Guwahati,
referred to as the BRNS building. All the three algorithms have been implemented for synthetic
as well as field measurement data. The synthetic study has been carried out using the three
storied model whereas the field data is used for the BRNS building. Using these measurements,
the parameters identified are the stiffness and damping at all the degrees of freedom. Initially
random values (i.e. particles) of these parameters are generated from a pre-selected proba-
bility distribution function (e.g. uniform distribution). Each particle is then passed through
the model equation and the state is updated using the measurement at every time step. A
weight is then assigned to each particle by evaluating their likelihood to the measurement.
Once the likelihoods for all the particles are evaluated, the new sample for the next iteration
is drawn from the simulated initial pool of particles. All the three filters, SIS,SIR and BF
have been compared on the basis of identified natural frequency of the structures in all the
modes as well as iterative steps required for convergence of parameter values.Furthermore,
four different traditional re-sampling strategies (e.g. multinomial, wheel, systematic and
stratified) are used to test their relative performance while using the resampling step in BF.
The performances of the re-sampling algorithms are compared on the basis of number of
convergence steps and the accuracy of the identified parameters as well as natural frequen-
cies. It is observed that systematic and stratified re-samplings are superior in comparison to
other re-sampling algorithms. Issues like degeneracy and sample impoverishment have been
explained with the help of a SDOF oscillator example.
Contents
Certificate i
Acknowledgements ii
Abstract iii
Contents iv
List of Figures vi
List of Tables viii
List of Symbols and Abbreviations ix
1 Introduction 1
1.1 Literature review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.3 Organization of report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2 Dynamic State Estimation 9
2.1 Bayesian Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.1.1 Bayesian Model Updating . . . . . . . . . . . . . . . . . . . . . . . . 10
2.2 Monte Carlo Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.2.1 Perfect Sampling & Sequential Importance Sampling (SIS) . . . . . . 14
2.2.2 Sequential Importance Resampling (SIR) & Bootstrap Filter . . . . . 19
2.3 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3 Parameter Estimation of LTI Systems 31
3.1 System Identification of Linear Time Invariant (LTI) Systems . . . . . . . . 31
3.1.1 Synthetic Experiment . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
3.1.2 BRNS Building . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
4 Conclusion 57
4.1 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
4.2 Future work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
References 61
Appendix 62
List of Figures
1.1 Dynamic System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Schematic flowchart of system identification (Source:Soderstrom (2001)) . . . 8
2.1 Schematic diagram of SDOF system . . . . . . . . . . . . . . . . . . . . . . . 25
2.2 Ground excitation due to Elcentro earthquake . . . . . . . . . . . . . . . . . 25
2.3 Response of oscillator to ground excitation . . . . . . . . . . . . . . . . . . . 26
2.4 Estimation of ratio of identified stiffness to original stiffness as function of time 26
2.5 Evolution of weights of particles over time . . . . . . . . . . . . . . . . . . . 27
2.6 Evolution of posterior density with time . . . . . . . . . . . . . . . . . . . . 27
2.7 States estimation from the original and the identified system . . . . . . . . . 28
2.8 Posterior evolution of distribution at iteration number a) initial, b) interme-
diate (100) and c) final . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.9 Mean and Standard Deviation of the identified stiffness parameter . . . . . . 29
2.10 Expected value of stiffness by addition of noise (2% expected value) . . . . . 29
2.11 Convergence of the stiffness due to addition of noise (2% expected value) . . 30
3.1 Plan and Elevation of Synthetic Model . . . . . . . . . . . . . . . . . . . . . 39
3.2 Plan and Elevation of BRNS Building . . . . . . . . . . . . . . . . . . . . . . 40
3.3 Ground motion excitations a) Elcentro, b)Lomaprieta, c) Chichi, d) Kobe and
e) Parkfield . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.4 Response of model: Lomaprieta earthquake . . . . . . . . . . . . . . . . . . . 41
3.5 Response of model: Elcentro earthquake . . . . . . . . . . . . . . . . . . . . 42
3.6 Ratio of identified to original parameters:SIS Filter: Elcentro earthquake . . 42
3.7 Ratio of identified to original parameters:SIR Filter: Elcentro earthquake . . 43
3.8 Ratio of identified stiffness to original stiffness: Elcentro earthquake . . . . . 43
3.9 Standard deviation of stiffness: Elcentro earthquake . . . . . . . . . . . . . . 44
3.10 Ratio of identified stiffness to original stiffness: Lomaprieta earthquake . . . 44
3.11 Standard deviation of stiffness: Lomaprieta earthquake . . . . . . . . . . . . 45
3.12 Original and estimated states of model: El-Centro earthquake . . . . . . . . 45
3.13 Mode shape of the original and identified structure . . . . . . . . . . . . . . 46
3.14 Response of the synthetic model due to addition of noise for El-Centro and
Lomaprieta earthquake . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
3.15 Ground excitation due to recorded earthquakes on 03/09/2009 . . . . . . . . 47
3.16 Ground excitation due to recorded earthquakes on 21/09/2009 . . . . . . . . 47
3.17 Response of BRNS building to multicomponent earthquake recorded on 03/09/2009 48
3.18 Response of BRNS building to multicomponent earthquake recorded on 21/09/2009 48
3.19 Ratio of identified stiffness to original stiffness at all the floor levels . . . . . 49
3.20 Coefficients α and β & the convergence of damping coefficients . . . . . . . . 50
3.21 First four true modes and estimated modes of BRNS building . . . . . . . . 50
3.22 Original and estimated states of BRNS building a) first storey x direction, b)
First story y direction, c) top storey x direction and d) top storey y direction 51
List of Tables
3.1 Parameter values for solving the forward problem . . . . . . . . . . . . . . . 50
3.2 Comparison of SIS, SIR and Bootstrap filter on the basis of identified frequency
in three modes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
3.3 Comparison of SIS, SIR and Bootstrap filter on the basis of number of conver-
gence steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
3.4 Comparison of different resampling algorithms on the basis of identified value
of natural frequency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3.5 Ratio of identified value of parameters to original value and comparison on
the basis of convergence steps . . . . . . . . . . . . . . . . . . . . . . . . . . 53
3.6 Ratio of identified value of parameters to original value and comparison on
the basis of convergence steps . . . . . . . . . . . . . . . . . . . . . . . . . . 53
3.7 Sensitivity analysis due to addition of noise with different SNR . . . . . . . . 54
3.8 Original parameters of the BRNS building . . . . . . . . . . . . . . . . . . . 54
3.9 Comparison of SIS, SIR and Bootstrap filter on the basis of identified values
of natural frequency in eight modes . . . . . . . . . . . . . . . . . . . . . . . 55
3.10 Identified frequency for BRNS building in all the eight modes using the all
the resampling algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3.11 Comparison of resampling algorithms on the basis of %error in identified nat-
ural frequency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
List of Symbols and Abbreviations
Symbol Description
p Probability of occurrence of event
pdf Probability density function
X(t) Vector denoting the state of the system
q(.) Function that relate the input and output
wk White noise process denoting the model noise
vk White noise process denoting the measurement noise
hk Non-linear function that relates the measurements to system states at time k
Yk Measurement at time k
Mk Vector comprising the set of measurement till time k
p(Xk|Mk) Pdf of system state conditioned on measurements till time k
p(Xk|Xk−1) Pdf of system’s state at k, conditioned on system’s state at k-1
δ(.) Delta function
ϕ System parameter vector to be identified
M Mass Matrix
K Stiffness Matrix
C Damping Matrix
u Nodal displacement
˙u Nodal velocity
¨u Nodal acceleration
τ(.) Functional form which relates the state of the system to its first derivative with respect to time
µ Mean value
σ Standard deviation value
E Expectation operator
δ Dirac delta function
Eq Expectation operation with samples drawn from distribution q(.)
˜wk Normalized weights
¨ug Ground motion excitations
Z(t) State of the vibrating system at time t
α, β Damping coefficients considering the Rayleigh damping
ζi Modal damping ratio in the ith mode
ωi Natural frequency in the ith mode
LTI Linear time invariant
PGA Peak ground acceleration
RC Reinforced concrete
MCMC Markov chain Monte Carlo
BRNS Board of research in nuclear sciences
SDOF Single degree of freedom
EKF Extended Kalman Filter
SIS Sequential importance sampling
SIR Sequential importance resampling
BF Bootstrap filter
SNR Signal to noise ratio
Chapter 1
Introduction
System identification is the field of mathematical modeling of the inverse problem from the
experimental data. It has acquired widespread applications in several areas like controls and
systems engineering where system identification methods are used to get appropriate models
for synthesis of a regulator, design of prediction algorithm and in signal processing appli-
cations (such as in communications, geophysical engineering and mechanical engineering).
Models obtained by system identification are used for spectral analysis, fault detection, pat-
tern recognition, adaptive filtering, linear prediction and other purposes. These techniques
are also successfully used in other fields such as biology, environmental sciences and econo-
metrics to develop models for increasing scientific knowledge on the identified object, or for
prediction and control. A dynamic system can be conceptually described in Fig 1.1. The
system is driven by user controlled input variables u(t) while disturbances v(t) cannot be
controlled. The output y(t) provides useful information about the system.
Figure 1.1: Dynamic System
There are several kinds of mathematical models used for solving the inverse problem which
are mostly governed by the underlying differential equations. The mathematical models can
be segregated into two paradigms
• Modeling, which refers to derivation of models from the basic laws of physics. Often,
one uses fundamental balance equations for a range of variables like energy, force, mass
etc.
• Identification, which refers to the determination of the model parameters from the ex-
perimental data. It includes the set up of identification experiment i.e data acquisition
and determination of a suitable form of the model which is fitted to the recorded data
by assigning suitable numerical values to its parameters.
Though system identification methods are useful for large and complex structures where it
is difficult to obtain the mathematical models directly, it has some limitations. They have
a limited validity i.e they are valid for a certain working point, a certain type of input, a
certain process,etc. Identification is not a foolproof methodology that can be used without
interaction from the user. The reasons for this are
• An appropriate model structure must be found. This can be a difficult problem, par-
ticularly if the dynamics of the structure are non-linear.
• Real life recorded data are never perfect, as they are always disturbed by noise.
• The process may vary with time, which can cause problems if an attempt is made to
describe it with a time invariant model.
How to apply System Identification
In general terms an identification experiment is performed by exciting the system and ob-
serving its output over an interval of time. These signals are normally recorded in a computer
mass storage for subsequent information processing. We then try to fit a parametric model
of the process to the recorded input and output sequences. The first step is to determine
an appropriate form of the model (typically a differential equation of certain order). In the
second step, several statistical approaches are used to estimate the unknown parameters of
the model. This estimation is often done iteratively. The model obtained is then tested to
see whether it is an appropriate representation of the system. If this is not the case, some
more complex model structure is considered, its parameters estimated and validated again.
Fig 1.2 shows the schematic of the steps used in system identification.
Following the above discussion, there are two main purposes of model updating or system
identification of a structural system. The common goal is to identify the physical parameters
(e.g. stiffness) of a structural element. These identified parameters can further be used
as indicators of the status of the system. For example, the stiffness parameter of a structural
member can be monitored from time to time, and an abnormal reduction indicates possible
damage of the member. But this reduction may also be simply due to statistical uncertainty.
Hence the quantification of uncertainty becomes important. Another purpose of model up-
dating can be to obtain a mathematical model to represent the underlying system for future
prediction. This is broadly known as Structural Health Monitoring. Another important area
of application of system identification is structural vibration control which has received great
attention in the last several decades (Housner et al., 1997).
1.1 Literature review
System identification has remained an active area of research over the last two decades. Many
researchers have come up with various methods and have solved several problems ranging
from experimental models to real life large scale structures. The general approaches can be
divided into the following categories.
• Conventional model-based approaches
• Time domain identification methods
• Biologically inspired approaches such as neural network and genetic algorithm
• Time-frequency based approaches using Wavelets and the Hilbert Transform
• Chaos theory
Conventional model-based approaches for system identification typically use a computer
model of the structure, such as a Finite-Element Method (FEM) model, to identify structural
parameters primarily from field or laboratory test data. Damage identification in beams is a
common theme in system identification. (Kim and Stubbs, 2002) studied damage identification
of a two-span continuous beam using modal information. (Lee and Shin, 2002) detected
the changes in the stiffness of beams based on a frequency-based response function. Model-based
system identification methods cannot be used effectively for large and complicated
real-world structures with nonlinear behavior. For such cases, biologically-inspired or soft
computing techniques such as Neural Networks, Genetic Algorithms (GA), or particle swarm
optimization have been proposed as a more effective approach. (Franco et al., 2004) used an
evolutionary algorithm to identify the structural parameters of a 10-DOF shear frame. (Raich
and Liszkai, 2007) used a genetic algorithm to identify the stiffness changes in a steel beam and
a 3-story, 3-bay frame. In the past two decades, because of their ability to retain both time
and frequency information, wavelets have been used increasingly to solve complicated time
series pattern recognition problems in different areas. (Liew and Wang, 1998) used wavelets
to identify cracks in simply supported beams. (Bao et al., 2009) employed the Hilbert-Huang
transform for system identification of concrete-steel composite beams. A few researchers
have employed chaos theory to model the complicated structural dynamics for system
identification.
However, in this report the main focus of the literature review is on the time domain dy-
namic state estimation methods. The dynamic state estimation methods derive their origin
from the Bayesian Methods. Bayesian theory was originally discovered by the British researcher
Thomas Bayes in a publication (Bayes, 1763), and the methods have been widely used in many
areas due to this pioneering work. The modern form of the theory was rediscovered by the French
mathematician Pierre-Simon de Laplace in his Théorie analytique des probabilités. One of the
earliest researches on iterative Bayesian estimation can be found in (Ho and Lee, Oct. 1964).
(Spragins, 1965) discussed the iterative application of Bayes' rule to sequential parameter
estimation and called it “Bayesian learning”.
The methods for dynamic state estimation can be categorized into two groups. The first
includes the well-known Kalman filter (Kalman, 1960) & its variants and the other is the
Monte Carlo simulation based algorithms known as particle filters (Gordon et al., 1993).
The Kalman filter provides an exact solution to the problem of state estimation for linear
Gaussian state space models. The most popular variant of the Kalman filter is the Extended
Kalman Filter (EKF), where linearization of the process equation is done to provide a Gaussian
approximation of what really is a non-Gaussian quantity (Hoshiya and Saito, 1984).
(Ghanem and Shinozuka, 1995) provided a review of methods of system identification by
application to experimental data obtained on three and five-story steel building structures
subjected to seismic loading, including the EKF, maximum-likelihood technique, recursive
least-squares, and recursive instrumental-variable method.
(Moaveni et al., 2011) examined six variations of the model-based approach, including data-
driven stochastic subspace identification, frequency domain decomposition, observer/Kalman
filter identification, and general realization algorithm for system identification of a full-
scale 7-story RC building structure subjected to shake table loading, and concluded that
probabilistic system identification methods in connection with FE model updating provide
the most desirable results.
The second group of methods, known as the Monte Carlo methods, begins by considering
the exact form of the recursive integral equations that govern the evolution of the filtering
pdf and employs Monte Carlo simulation procedures to solve these equations in a recursive
manner by approximating the complex integrals. Monte Carlo (MC) methods are stochastic
computational algorithms and are efficient for simulating highly complex systems.
The MC approach was conceived by Ulam (1945), developed by Ulam and von Neumann
(1947), and named by Metropolis (1949) (Candy, 2007). The technique evolved during the
Manhattan project in the 1940s, when scientists were investigating the calculations related
to atomic weapon designs. MC methods have a wide variety of applications in engineering
and finance, and offer an alternative approach to solving numerical integration and optimization
problems. The approach has been used in the next chapter where a detailed formulation of
the methods is presented.
There are several variants of the particle filters available in the literature as well (Chen, 2003).
These methods have been widely used in robotics and for solving the tracking problems
(Thrun, 2002). The application of these methods to problems in structural mechanics is not
yet widely explored. The following passage describes the work done by the researchers on
implementation of these methods to structural mechanics.
(Ching et al., 2006) compared the performance of the Extended Kalman filter and the particle
filter by applying these methods to a planar four-story shear building with time-varying
system parameters and to a non-linear hysteretic damping system with unknown system parameters.
The mass of the shear building is assumed to be time invariant with m1 = m2 =
m3 = m4 = 250,000 kg, whereas the stiffness and damping at each of the floor levels change
with time. Synthetic data is generated which is contaminated with noise. The non-linear
model considered is a single degree of freedom (SDOF) Bouc-Wen hysteretic damping system.
They concluded that the particle filter is the better one to use, since the EKF can sometimes
create misleading results. Also, the EKF is not suitable for highly non-linear models.
(Manohar and Roy, 2006) identified the parameters of nonlinear structures using dynamic
state estimation techniques. They considered two single degree of freedom nonlinear oscillators,
namely, the Duffing oscillator and one with Coulomb friction damping. In particular,
identification of the parameters α and µ was done using noisy observations with the
density based Monte Carlo filter, the bootstrap filter and the sequential importance sampling filter.
The basic objective of the study has been to construct the posterior pdf of the augmented
state vector based on all available information.
(Nasrellah and Manohar, 2011) carried out a combined computational and experimental study
using multiple test and sensor data for structural system identification. They considered the
problem of identification of parameters of a beam with spatially varying density and flexural
rigidity, as well as the identification of parameters of a rigidly jointed truss. It was concluded
that various factors affect the accuracy of identification, such as the number of particles used in
filtering, closeness of the initial guess on system parameters to the true values, number
of global iterations, noise levels in the measurements, model imperfections, the number of
parameters to be identified and the sensitivity of measurements with respect to the parameters
being identified.
Similar studies were carried out by (Namdeo and Manohar, 2007), (Ghosh et al., 2008) and
(Sajeeb and Roy, 2007). The particle filter algorithm has also been used for identification of
fatigue cracks in vibrating beams (Rangaraj, 2012).
1.2 Motivation
State estimation is the process of using dynamic data from a system to estimate quantities
that give a complete description of the state according to some representative model of it.
The methods have wide application in various areas of study. It can be used in structural
health monitoring to detect the changes in the dynamical properties of structural systems
during earthquakes. Apart from this, the methods are applicable to better understand the
nonlinear behavior of structures during seismic loading. The ability to estimate the system
state in real time is useful for efficient control of structures. Models of physical systems
always have uncertainties associated with them. These may be due to the approximations
made while modeling the system or due to the noisy, corrupted measurements from the sensors. Hence,
obtaining the parameters of the system optimally from the limited noise corrupted data
is a challenge. The present study is different from the earlier ones since it implements the
most common variants of the particle filter, i.e. SIS, SIR and BF, to both synthetic as well as field
data. Moreover, the present work attempts to identify the parameters of a real life structure
subjected to multi-component non-stationary ground motion excitation using all the three
algorithms. Such a study has not been conducted in the past. The algorithm is very well able
to identify the natural frequency even in higher modes, where signal processing techniques
are generally not capable of doing so. Implementation for the online health monitoring of a large
scale structure is also one of the applications.
1.3 Organization of report
The report mainly focuses on formulation and implementation of dynamic state estimation
techniques for the identification problems in structural mechanics.
• Chapter 1 deals with the introduction to system identification methods and its appli-
cations to several other fields of study. A brief literature review is provided where the
methods used in system identification particularly dynamic state estimation have been
discussed. The chapter concludes with a motivation paragraph where the importance
of the present work has been described in light of the uniqueness of the work.
• Chapter 2 deals with developing the mathematical framework of identification meth-
ods in time domain. A general background of Bayesian model is given which is followed
by Monte Carlo methods used for approximating the complex integrals in Bayesian
theory. Up to this point, a general background is provided and the mathematical equations
are derived. A single degree of freedom (SDOF) oscillator is taken as an example to explain
the implementation of the SIS and BF
algorithms. Issues like degeneracy and sample impoverishment have been discussed.
• Chapter 3 deals with implementing the methods developed in Chapter 2 to solve
the system identification problem. Here the focus is on implementing the Bootstrap filter for
two different classes of structures. The first one is a synthetic study done on a three
storied shear building model whereas the other one is using the field data for a fixed
base RC framed multi-storied building. The building is subjected to multi-component
non-stationary earthquake ground motions with sensors placed at the top and first
story. Overall a set of 10 parameters were identified. A comparison study among the
three algorithms as well as the comparison study of the resampling algorithms have
been done based on the number of iterative steps for convergence as well as the values
of the natural frequency identified by using the algorithms.
• Chapter 4 gives the conclusions and the future work. The strong and weak points in
simulations have been discussed suggesting further ways to improve the present study.
• Appendix contains the MATLAB codes used in simulation.
Figure 1.2: Schematic flowchart of system identification (Source:Soderstrom (2001))
Chapter 2
Dynamic State Estimation
2.1 Bayesian Methods
In many of the engineering problems, modeling of uncertain parameters is necessary for
various purposes. In this context, Bayes' theorem offers a framework for modeling and
inferring uncertain models from the measurements. These methods have been applied
to many different disciplines of natural sciences, social sciences and engineering, especially
in statistical physics, engineering hydrology, econometrics, archeology, information sciences,
medical sciences, forensic sciences, marketing, mechanical engineering, computer science,
engineering geology, aerospace engineering, finance, population migration, and many other
areas. In Structural Engineering, these methods have been used in system reliability, prediction
of concrete strength, structural dynamics and system identification. Bayesian inference
is very important for Structural engineering applications because of the wide variety of uncer-
tainty associated with the structures. The examples of such uncertainties can be earthquake
ground motion or complete time varying description of the wind pressure, material proper-
ties which are difficult to determine for heterogenous materials like concrete and the number
and size of cracks present in concrete. Not only this, the modeling errors and uncertain-
ties are also associated with the joints. Therefore, Bayesian statistics has wide application
in Structural engineering as well. In many scenarios, the solutions gained through Bayesian
inference are viewed as “optimal”.
2.1.1 Bayesian Model Updating
Many real world data analysis tasks involve estimating the unknown quantities from some
given observations. In such type of problems, generally the prior knowledge about the
phenomenon to be modeled is available. This knowledge can be used to formulate the
Bayesian models where prior knowledge about the state is updated using the likelihood
function to generate a posterior distribution. Often the measurements arrive sequentially,
and it is possible to carry out both offline and online inference. Thus, one
of the most important steps of this process is to update the states recursively once the
measurements are available. The focus of dynamic state estimation techniques is to estimate
the state of the system using the measurement data. The governing equation can be written
as:
X(t) = q(P(t), t) (2.1)
where X(t) is the response of the structure when an input force P(t) is given to the system
and q(.) relates the input to the output.
Since the measurements are available at discrete time steps, it becomes obvious to discretize
the above model equation as
Xk+1 = qk(Xk, wk) (2.2)
where Xk represents the state of the system at time t = k ; Xk+1 represents predicted state
at time t = k + 1 and wk represents the white noise.
The discretized measurement equation can be written as
Yk = hk(Xk, vk) (2.3)
where Yk is the measurement at time t = k corresponding to the state Xk and vk is the
measurement noise similar to the model noise. However the model as well the measurement
noise has been assumed as uncorrelated.
The measurements from the sensors are sampled at a particular rate and can be denoted as
a vector
Mk = [Y1, Y2, . . . , Yk] (2.4)
The objective of this formulation is to estimate the current state Xk based on the measure-
ment Yk. As the model and the measurements are corrupted with noise, the problem of state
estimation reduces to estimating the probability density function p(Xk|Mk).
Since estimating p(Xk|Mk) directly is not easy, a simpler problem is to determine
the moments of Xk. Mathematically, this can be written as:
µ = ∫ Xk p(Xk|Mk) dXk (2.5)
σ = ∫ (Xk − µ)^T (Xk − µ) p(Xk|Mk) dXk (2.6)
where µ and σ are the first moment or mean and the second moment or variance of the pdf
p(Xk|Mk) respectively.
In the following, a detailed derivation of the recursive Bayesian Estimation is presented,
which underlines the principles of sequential Bayesian filter. Two assumptions are used to
derive the recursive Bayesian Filter.
• The states follow a first order Markov process
p(Xk|X0:k−1) = p(Xk|Xk−1); (2.7)
• The observations are conditionally independent given the states.
At any time t, the posterior is given by the Bayes theorem as
p(X0:t|Y1:t) = p(Y1:t|X0:t) p(X0:t) / ∫ p(Y1:t|X0:t) p(X0:t) dX0:t (2.8)
The recursive equation can be obtained as
p(X0:t+1|Y1:t+1) = p(X0:t|Y1:t) p(Yt+1|Xt+1) p(Xt+1|Xt) / p(Yt+1|Y1:t) (2.9)
The following recursive relations are used for prediction and updating
The prediction equation is given by
p(Xt|Y1:t−1) = ∫ p(Xt|Xt−1) p(Xt−1|Y1:t−1) dXt−1 (2.10)
Based on this prediction the model updating equation is
p(Xt|Y1:t) = p(Yt|Xt) p(Xt|Y1:t−1) / ∫ p(Yt|Xt) p(Xt|Y1:t−1) dXt (2.11)
It is however difficult to compute the normalizing constant p(Y1:t) and the marginal of the
posterior p(X0:t|Y1:t) as it requires evaluation of complex high dimensional integrals. The
above expressions are modified in the following way when the system and the model noise
are also present.
Adopting the notation, p(Xk|Mk−1) is the estimate of the state at time k based on the
measurements up to time k − 1, and p(Xk|Mk) denotes the pdf of the state at time k based on the
measurements up to time k. Therefore, the first one is the a priori pdf or the prediction, while the
latter is the a posteriori pdf or the correction to the state once the measurements are available at
time k.
It is also assumed that
p(X1|M0) = p(X1) (2.12)
is known. The prediction equation can be expressed as:
p(Xk|Mk−1) = ∫ p(Xk|Xk−1) p(Xk−1|Mk−1) dXk−1 (2.13)
Here p(Xk|Xk−1) can be derived from the Eq 2.2. The conditional density can be used to
write the following expressions.
p(Xk|Xk−1) = ∫ p(Xk|Xk−1, wk−1) p(wk−1|Xk−1) dwk−1 (2.14)
Since wk is independent of the state, it can be written that
p(wk−1|Xk−1) ≡ p(wk−1) (2.15)
It can be clearly seen from the process equation that if Xk−1 and wk−1 are known, then
Xk can be obtained deterministically from the process equation 2.2. Therefore the pdf of
p(Xk|Xk−1, wk−1) can be mathematically written as
p(Xk|Xk−1, wk−1) ≡ δ(Xk − fk−1(Xk−1, wk−1)) (2.16)
where δ(.) is the Dirac-Delta function. Substituting this in the above Eq 2.14, we get
p(Xk|Xk−1) = ∫ δ(Xk − fk−1(Xk−1, wk−1)) p(wk−1|Xk−1) dwk−1 (2.17)
The above expression can be substituted in the prediction equation Eq 2.13.
As soon as the measurement Yk is available at the time step k the prediction can be updated
using the Bayesian relation
p(Xk|Mk) = p(Yk|Xk) p(Xk|Mk−1) / p(Yk|Mk−1) (2.18)
where the normalizing denominator is given by
p(Yk|Mk−1) = ∫ p(Yk|Xk) p(Xk|Mk−1) dXk (2.19)
The only unknown in the Eq 2.18 is p(Yk|Xk) which can be obtained as:
p(Yk|Xk) = ∫ p(Yk|Xk, vk) p(vk) dvk (2.20)
which again takes the form of the Dirac-Delta function if Xk and vk are known. The mea-
surement Yk is obtained from the measurement Eq 2.3. Thus the above equations form the
basis of the recursive Bayesian Model updating.
If the functions f(.) and h(.) are linear and the noise terms wk and vk are Gaussian, then
closed form expressions for the above integrals are available and this leads to the well-known
Kalman Filter (Kalman, 1960). However, if f(.) and h(.) are non-linear, then several other
methods have been prescribed in the literature, such as the EKF (Hoshiya and Saito, 1984).
However, the most recent interest is to exploit cheap and fast computational facilities to develop
methods based on Monte Carlo simulations for approximating the integrals in the above
equations.
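As a simple illustration of how the prediction equation (Eq 2.13) and the update equation (Eq 2.18) can be evaluated numerically, the following MATLAB sketch applies the recursion on a discrete grid for a scalar state with an assumed random-walk model and Gaussian noise; the noise levels, grid and measurement values are illustrative assumptions and not part of the formulation above.

% Grid-based recursive Bayesian filter for a scalar random-walk state (illustrative sketch)
gauss = @(x, m, s) exp(-(x - m).^2 ./ (2*s^2)) ./ (sqrt(2*pi)*s);
xgrid = linspace(-10, 10, 401);  dx = xgrid(2) - xgrid(1);
sig_w = 0.5;  sig_v = 1.0;                      % assumed model and measurement noise levels
prior = gauss(xgrid, 0, 3);  prior = prior/(sum(prior)*dx);   % p(X0)
Y = [1.2, 1.9, 2.4];                            % assumed measurements Y1, Y2, Y3
for k = 1:numel(Y)
    trans = gauss(xgrid.' - xgrid, 0, sig_w);   % p(Xk|Xk-1) evaluated on the grid
    pred  = (trans * prior.') * dx;             % prediction, Eq 2.13
    like  = gauss(Y(k), xgrid.', sig_v);        % likelihood p(Yk|Xk)
    post  = like .* pred;  post = post/(sum(post)*dx);   % update and normalise, Eq 2.18
    prior = post.';                             % the posterior becomes the next prior
end

The same predict-update cycle is what the Monte Carlo methods of the next section approximate with random samples instead of a grid.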
2.2 Monte Carlo Methods
The underlying principle of the MC methods is that they utilize Markov chain theory.
The resulting empirical distribution converges to the desired posterior distribution through
random sampling. The method is widely used in signal processing where one is interested
in determining the moment of the stochastic signal f(X) with respect to some underlying
probabilistic distribution p(X). However the similar concept is used in system identification
problem where one is interested to estimate the expected values of the system parameters.
The methods have the great advantage that they are not subject to constraints of linearity
and Gaussianity, and they also have appealing convergence properties. Several
variants of MC methods are available in the literature. These include perfect Monte Carlo
sampling, Sequential Importance Sampling, Sequential Importance Resampling and the Bootstrap
particle filter. The following section presents the mathematical formulation of each of
these methods. The concept has been illustrated by solving a single degree of freedom oscillator
at the end of the chapter.
2.2.1 Perfect Sampling & Sequential Importance Sampling (SIS)
Monte Carlo methods use statistical sampling and estimation techniques to evaluate the
solutions to mathematical problems. The underlying mathematical concept of Monte Carlo
approximation is simple. Consider the statistical problem of estimating the expected value
E[f(X)] with respect to some probability distribution p(X):
E[f(X)] = ∫ f(X) p(X) dX (2.21)
Here the motivation is to evaluate the above expression using stochastic sampling techniques
rather than numerical integration techniques. Such a practice is useful for estimating
complex integrals where it is difficult to obtain a closed form solution. In the MC approach, the
required distribution is represented by random samples rather than an analytic function. The
approximation becomes better and more exact as the number of such random
samples increases. Thus, MC integration evaluates Eq 2.21 by drawing samples X(i) from
p(X). Assuming perfect sampling, the empirical distribution is given by
p(X) = (1/N) Σ_{i=1}^{N} δ(X − X(i)) (2.22)
The above equation can be substituted to give
E[f(X)] = ∫ f(X) p(X) dX ≃ (1/N) Σ_{i=1}^{N} f(X(i)) (2.23)
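For illustration, the approximation in Eq 2.23 can be reproduced in MATLAB in a few lines; the choice f(X) = X² with X drawn from a standard normal distribution is an assumption made only for this example, and the exact value of the expectation in that case is 1.

% Plain Monte Carlo estimate of E[f(X)] with X ~ N(0,1) and f(X) = X.^2 (assumed example)
N  = 1e5;                 % number of random samples
X  = randn(N, 1);         % samples X(i) drawn from p(X)
Ef = mean(X.^2);          % Eq 2.23: (1/N)*sum of f(X(i)), close to the exact value 1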
Generalization of this approach is known as Importance sampling where the integral is writ-
ten as
I = ∫ p(X) dX = ∫ [p(X)/q(X)] q(X) dX (2.24)
given
∫ q(X) dX = 1 (2.25)
Here q(X) is known as the importance sampling distribution since it samples p(X) non-
uniformly, giving more importance to some values of p(X). Eq 2.24 can then be written
as
I = Eq[p(X)/q(X)] = (1/N) Σ_{i=1}^{N} p(X(i))/q(X(i)) (2.26)
where X(i) are drawn from the importance distribution q(.).
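The idea of Eq 2.26 is sketched below in MATLAB for an assumed target p(X) = N(0,1), an assumed proposal q(X) = N(0,4) and f(X) = X²; the weights p/q correct for the fact that the samples are drawn from q rather than from p.

% Importance sampling estimate of E_p[f(X)] using samples from a broader proposal (assumed densities)
N  = 1e5;
Xq = 2*randn(N, 1);                          % samples from q(X) = N(0, 2^2)
p  = @(x) exp(-x.^2/2)/sqrt(2*pi);           % target density p(X)
q  = @(x) exp(-x.^2/8)/sqrt(2*pi*4);         % proposal density q(X)
w  = p(Xq)./q(Xq);                           % importance ratios p(X(i))/q(X(i))
Ef = sum(w.*Xq.^2)/sum(w);                   % self-normalised estimate of E_p[X^2] (close to 1)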
The central theme of importance sampling is to choose an importance distribution q(.) which
approximates the target distribution p(.) as closely as possible. Using the concept of
importance sampling, it is possible to approximate the posterior distribution. Since it is
generally not easy to sample from the posterior, we use importance sampling coupled with
an easy to sample proposal distribution q(Xt|Yt). This is one of the most important steps
of the Bayesian importance sampling methodology. Using the importance sampling concept
the mean of f(Xt) can be estimated as follows:
E[f(Xt)] = ∫ f(Xt) p(Xt|Yt) dXt (2.27)
where p(Xt|Yt) is the posterior distribution. Here, we insert the importance proposal density
function q(Xt|Yt) such that the estimate becomes
F(t) = E[f(Xt)] = ∫ f(Xt) [p(Xt|Yt)/q(Xt|Yt)] q(Xt|Yt) dXt (2.28)
Now applying Eq 2.18 (Bayes' rule) to the posterior distribution and defining the weighting
function as
W̃(t) = p(Xt|Yt)/q(Xt|Yt) = p(Yt|Xt) p(Xt) / [p(Yt) q(Xt|Yt)] (2.29)
Calculation of W̃(t) requires the knowledge of the normalizing constant p(Yt), which is given
by
p(Yt) = ∫ p(Yt|Xt) p(Xt) dXt (2.30)
This normalizing constant is generally not available and hence the new weight W(t) can be
defined by substituting Eq 2.29 into Eq 2.28.
F(t) = (1/p(Yt)) ∫ f(Xt) [p(Yt|Xt) p(Xt) / q(Xt|Yt)] q(Xt|Yt) dXt
     = (1/p(Yt)) ∫ W(t) f(Xt) q(Xt|Yt) dXt
     = (1/p(Yt)) Eq[W(t) f(Xt)] (2.31)
The above equation can also be written as:
W(t)q(Xt|Yt) = p(Yt|Xt)p(Xt) (2.32)
Thus, the normalizing constant in Eq 2.30 can be replaced by 2.32
F(t) = Eq[W(t) f(Xt)] / p(Yt) = Eq[W(t) f(Xt)] / ∫ W(t) q(Xt|Yt) dXt = Eq[W(t) f(Xt)] / Eq[W(t)] (2.33)
Now, if the samples are drawn from the distribution q(Xt|Yt), from the perfect sampling
distribution we have
q̃ = (1/N) Σ_{i=1}^{N} δ(X − X(i)) (2.34)
and therefore, the normalized weight w̃^i of the ith sample can be written as
w̃^i = W^i(t) / Σ_{i=1}^{N} W^i(t) (2.35)
where
W^i(t) = p(Yt|X^i_t) p(X^i_t) / [p(Yt) q(X^i_t|Yt)] (2.36)
Therefore the final estimate of Eq 2.28 becomes
F(t) ≈ Σ_{i=1}^{N} w̃^i f(X^i_t) (2.37)
As the number of samples (N → ∞), the approximation of posterior becomes
p(Xt|Yt) ≈ Σ_{i=1}^{N} w̃^i δ(Xt − X^i_t) (2.38)
With the above mathematical framework in place, we can derive the expressions for sequen-
tial interfacing of measurement data available at time instant t = k. One can write the
approximation of the posterior as:
p(Xk|Y1:k) ≈ Σ_{i=1}^{N} w̃^i_k δ(Xk − X^i_k) (2.39)
where δ(.) is the Dirac delta function and w̃^i_k is the normalized weight of the ith particle at
time k.
p(X0:k|Y1:k) ∝ p(Yk|X0:k, Y1:k−1) p(X0:k|Y1:k−1)
= p(Yk|Xk) p(Xk|X0:k−1, Y1:k−1) p(X0:k−1|Y1:k−1)
= p(Yk|Xk) p(Xk|Xk−1) p(X0:k−1|Y1:k−1) (2.40)
We could now construct an importance distribution X^i_{0:k} ∼ q(X0:k|Y1:k) and compute the
corresponding (normalized) importance weights as
w̃^i_k ∝ p(Yk|X^i_k) p(X^i_k|X^i_{k−1}) p(X^i_{0:k−1}|Y1:k−1) / q(X^i_{0:k}|Y1:k) (2.41)
The recursive form of the importance distribution can be written as:
q(X0:k|Y1:k) = q(Xk|X0:k−1, Y1:k)q(X0:k−1|Y1:k−1) (2.42)
Substituting Eq 2.42 in Eq 2.41 we obtain the following expression
w̃^i_k = p(Yk|X^i_k) p(X^i_k|X^i_{k−1}) p(X^i_{0:k−1}|Y1:k−1) / [q(X^i_k|X^i_{0:k−1}, Y1:k) q(X^i_{0:k−1}|Y1:k−1)] (2.43)
Thus the recursive weight can be given as:
w̃^i_k ∝ [p(Yk|X^i_k) p(X^i_k|X^i_{k−1}) / q(X^i_k|X^i_{0:k−1}, Y1:k)] w̃^i_{k−1} (2.44)
So, the algorithm works in the following way:
• Initialization: Draw N samples X^i_0 from the prior
X^i_0 ∼ p(X0) (2.45)
• Prediction: Draw N new samples X^i_k from the importance distribution
X^i_k ∼ q(Xk|X^i_{0:k−1}, Y1:k) (2.46)
• Update: Calculate the new weights according to Eq 2.44. Once the weights are updated,
the posterior can be calculated using Eq 2.39.
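The three steps above translate almost directly into MATLAB. In the sketch below the importance density is taken to be the transition prior, and f_model (the discretized process equation including process noise) and loglik (the logarithm of p(Yk|Xk)) are user-supplied placeholder functions, i.e. assumptions of this sketch rather than quantities defined in the report.

% Sequential importance sampling (SIS) with the transition prior as importance density (sketch)
N = 500;                                   % number of particles
X = 1e4 + 8e4*rand(N, 1);                  % initialisation: samples from an assumed uniform prior
w = ones(N, 1)/N;                          % uniform initial weights
for k = 1:numel(Y)                         % Y holds the measurement record
    X    = f_model(X, k);                  % prediction: propagate every particle
    logw = log(w) + loglik(Y(k), X);       % weight update, Eq 2.44
    logw = logw - max(logw);               % guard against numerical underflow
    w    = exp(logw);  w = w/sum(w);       % normalised weights, Eq 2.35
end
xhat = sum(w.*X);                          % posterior mean of the identified quantity, Eq 2.37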
One of the major problems associated with the SIS filter is degeneracy, where all the particles
have negligible weight except one particle after a few iterations. The variance of the importance
weights increases with time and it becomes impossible to control the degeneracy phenomenon.
A suitable measure of the degeneracy of the algorithm is the effective sample size (Gordon
et al., 1993), Neff, which can be defined as
Neff = Ns / (1 + Var(w*^i_k)) (2.47)
where w*^i_k can be obtained from Eq 2.29. The estimate of Neff is given by the following
relation
Ñeff = 1 / Σ_{i=1}^{N} (w̃^i_k)² (2.48)
where w̃^i_k is the normalized weight obtained using Eq 2.44.
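In code, the estimate of Eq 2.48 and the resulting resampling decision take only a few lines; the threshold N/2 used below is a common choice in the literature but is an assumption here, and resample_particles stands for any of the schemes described in the next section.

% Effective sample size from the normalised weights w (column vector of length N)
Neff = 1/sum(w.^2);                 % estimate of Eq 2.48
if Neff < N/2                       % assumed threshold for triggering resampling
    idx = resample_particles(w);    % placeholder for any resampling scheme of Section 2.2.2
    X   = X(idx);                   % replace the particle set
    w   = ones(N, 1)/N;             % reset the weights to 1/N
end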
When Neff becomes less than N, it implies degeneracy, and a small Neff indicates severe
degeneracy. Therefore, to counter this, (Arulampalam et al., 2002) suggested two ways:
• Good choice of importance density: This involves choosing the importance density
such that Var(w*^i_k) is reduced and hence the value of Neff increases.
• Resampling: This is another important step which differentiates the SIR filter from the SIS
filter and has been discussed in detail in the following section.
Both of the above issues form the basis of “Sequential Importance Resampling”, also known
as “Adaptive Particle Filters”, and are discussed in the following section.
2.2.2 Sequential Importance Resampling (SIR) & Bootstrap Fil-
ter
The SIR filter is an MC method which can be applied to recursive Bayesian filtering
problems. To use SIR algorithm, both the state dynamics Eq 2.1 as well as the measurement
equations Eq 2.3 must be known. Further it is required to be able to sample from the noise
distribution of the process as well as from the prior. A likelihood function p(Yk|Xk) needs
to be known for computing the particle weights. The SIR algorithm is very similar to the SIS
filter except for the choice of an optimal importance density and the resampling step included
in the SIR algorithm.
The SIR algorithm can be easily derived from SIS algorithm by appropriate choice of the
importance density. The optimal importance density used in SIR is
q(X^i_k|X^i_{0:k−1}, Y1:k) = p(X^i_k|X^i_{k−1}, Yk) (2.49)
By substituting Eq 2.49 in Eq 2.44 the updated weight becomes
w^i_k ∝ w^i_{k−1} p(Yk|X^i_{k−1}) (2.50)
This optimal importance distribution can be used when the state space is finite. The present
report also uses a similar assumption for the importance density. However, the report deals
with the problem of system identification, where we are more interested in identifying the
system parameters rather than tracking the state vector.
The algorithm can be implemented in the following manner:
• Draw particles X^i_k from the importance distribution
X^i_k ∼ q(Xk|X^i_{0:k−1}, Y1:k), i = 1, . . . , N (2.51)
• Calculate the new weights from Eq 2.44 for all the particles and normalize them to unity.
• If Neff calculated from Eq 2.48 becomes too low, perform the resampling step:
• Interpret each weight w^i_k as the probability of obtaining the sample index i in the set
X^i_k for i = 1, . . . , N.
• Draw N samples from that discrete distribution and replace the old sample set with
this new one.
• Set all weights to the constant value w^i_k = 1/N.
The Bootstrap filter is a special case of SIR filter where the dynamic model is used as impor-
tance distribution as in Eq 2.50 and the resampling is done at each step. A brief algorithm is
presented here for a more clear illustration. However, the problem formulation section gives
the detailed implementation of Bootstrap filter to System identification problem.
• Draw points X^i_k from the dynamic model
X^i_k ∼ p(Xk|X^i_{k−1}), i = 1, . . . , N (2.52)
• Calculate the new weights and normalize them to unity:
w^i_k ∝ p(Yk|X^i_k), i = 1, . . . , N (2.53)
• Perform resampling after each iteration.
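A compact MATLAB sketch of the Bootstrap filter follows. As before, f_model and loglik are user-supplied placeholders for the process equation and the log-likelihood, and multinomial resampling is used here only because it is the simplest to write inline; any of the schemes described next may be substituted.

% Bootstrap filter: propagate with the dynamic model, weight by the likelihood, resample every step
N = 500;
X = draw_prior(N);                         % placeholder: N particles drawn from the prior
for k = 1:numel(Y)
    X = f_model(X, k);                     % Eq 2.52: draw from p(Xk|Xk-1)
    logw = loglik(Y(k), X);                % Eq 2.53: weights proportional to p(Yk|Xk)
    w = exp(logw - max(logw));  w = w/sum(w);
    edges = [0; cumsum(w)];  edges(end) = 1;
    [~, idx] = histc(rand(N, 1), edges);   % multinomial resampling step
    X = X(idx);                            % equally weighted, resampled particle set
end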
One of the important steps in the above algorithm is resampling from the discrete probability
mass function containing the normalized weights. Resampling ensures that particles with
larger weights are more likely to be preserved than particles with smaller weights. Although
resampling solves the degeneracy problem, it introduces sample impoverishment, which is
explained through an example problem solved at the end of the chapter. There is a wide
variety of resampling algorithms available in the literature (Li, 2013). This report discusses
the traditional resampling strategies as well as a comparative study of these algorithms in
light of the system identification problem. The traditional resampling algorithms discussed
are Multinomial or simple resampling, Systematic & Stratified resampling and Wheel
resampling. A brief description of the algorithms is presented below.
Multinomial Resampling
Multinomial resampling, also known as binary search resampling or simple resampling, is
one of the simplest resampling algorithms. It generates N random numbers u^n_t
and uses them to sample particles from the array containing the normalized weights w^i of the
particles. The cumulative sum of the weights is used to select the interval in which each
random number lies. The selection of the mth particle must satisfy the following equation
Σ_{i=1}^{m−1} w^i < u^n_t < Σ_{i=1}^{m} w^i (2.54)
Since the sampling of each particle is purely random, a given particle can be sampled a
minimum of zero times and a maximum of N times.
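A minimal MATLAB sketch of Eq 2.54, with the interval search done through the cumulative sum of the weights, is given below; w is assumed to be the column vector of normalised weights.

function idx = multinomial_resample(w)
% Multinomial (simple) resampling: each index is drawn independently with probability w(i)
N = numel(w);
c = cumsum(w);  c(end) = 1;          % cumulative sum of the normalised weights
u = rand(N, 1);                      % N independent uniform random numbers
[~, idx] = histc(u, [0; c]);         % interval of Eq 2.54 in which each u falls
end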
Stratified Resampling
Stratified resampling divides the total population, i.e. the interval (0, 1], into N equal
sub-intervals of width 1/N. Hence the disjoint sub-intervals are (0, 1/N] ∪ (1/N, 2/N] ∪ . . . ∪ (1 − 1/N, 1].
One random number is drawn from each of the sub-intervals as
u^n ∼ U((n − 1)/N, n/N), n = 1, 2, . . . , N (2.55)
After the random numbers are generated, each sub-interval is tested using the cumulative sum of
normalized weights as shown in Eq 2.54.
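A sketch of the stratified scheme is given below; the only change with respect to multinomial resampling is the way the N random numbers of Eq 2.55 are generated.

function idx = stratified_resample(w)
% Stratified resampling: one uniform random number per sub-interval ((n-1)/N, n/N]
N = numel(w);
u = ((0:N-1).' + rand(N, 1))/N;      % u(n) drawn from U((n-1)/N, n/N), Eq 2.55
c = cumsum(w);  c(end) = 1;
[~, idx] = histc(u, [0; c]);         % interval search as in Eq 2.54
end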
Systematic Resampling
Systematic resampling is similar to stratified resampling, but the first random number
is generated from the uniform distribution on (0, 1/N]. After this the remaining random numbers
are generated deterministically using the equation
u^n_t = u^1_t + (n − 1)/N, n = 2, 3, . . . , N (2.56)
The literature suggests that systematic resampling is computationally more efficient
due to the smaller number of random numbers that have to be generated (Li, 2013).
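The systematic scheme differs from the stratified one only in the use of a single random offset, as in Eq 2.56; a minimal sketch follows.

function idx = systematic_resample(w)
% Systematic resampling: one random offset in (0, 1/N], then deterministic spacing of 1/N
N = numel(w);
u = (rand/N) + (0:N-1).'/N;          % u(1) ~ U(0, 1/N), u(n) = u(1) + (n-1)/N, Eq 2.56
c = cumsum(w);  c(end) = 1;
[~, idx] = histc(u, [0; c]);
end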
Wheel Resampling
In this resampling method the particles are represented on a big wheel, with each particle
occupying a portion of the circumference proportional to its normalized weight. Particles with bigger
weights occupy more space and the ones with smaller weights occupy less. An iterative loop
is run N times, and particles are chosen in proportion to their share of the circumference of the
wheel.
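The wheel analogy can be coded as below; the random step of at most 2·max(w) per draw is the choice commonly used with this scheme and is an assumption of this sketch.

function idx = wheel_resample(w)
% "Resampling wheel": walk around a circle whose arcs are proportional to the weights
N = numel(w);
idx = zeros(N, 1);
i = randi(N);                        % random starting particle on the wheel
beta = 0;
for n = 1:N
    beta = beta + rand*2*max(w);     % advance by a random arc length
    while beta > w(i)                % skip whole particles until the step is used up
        beta = beta - w(i);
        i = mod(i, N) + 1;
    end
    idx(n) = i;
end
end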
Although the resampling step reduces the degeneracy, it introduces various problems. To
begin with, it limits the opportunity to parallelize since all the particles must be combined.
Moreover, resampling introduces the problem of sample impoverishment as particles having
higher weights are selected multiple times. This also leads to a lack of diversity
among the particles. For the case of very small process noise, all the particles will collapse
to a single point. Different researchers have tried various schemes to deal with sample
impoverishment (Nasrellah and Manohar, 2011). The following section demonstrates the
implementation of the algorithms to a single degree of freedom (SDOF) oscillator.
2.3 Examples
In this section numerical examples are presented to demonstrate the implementation of
particle filters. We consider a single-degree-of-freedom (SDOF) oscillator excited by Elcentro
earthquake. We start with a simple problem which aims at identifying the stiffness of the
SDOF oscillator, given the response of the oscillator to earthquake excitation. Both SIS and
Bootstrap filters have been used to solve this example. The measurement data has been
synthetically generated by solving the forward problem by assuming known values of the
system parameters. Once the synthetic measurements are known, the inverse problem is
solved using various time domain methods described above. A schematic diagram of the
oscillator is shown in Fig 2.1. The governing equation of motion of SDOF oscillator is given
by the second order differential equation as:
M ¨u(t) + C ˙u(t) + Ku(t) = −M ¨ug(t) (2.57)
where M is the mass, C is the damping, K is the stiffness and ¨ug(t) is the acceleration due
to the ground motion. Here we have considered the ground motion due to the 1940 Elcentro
earthquake. Elcentro was the first earthquake to be recorded by a strong motion seismograph
and had a magnitude of 6.9. For solving the forward problem, M is assumed to be 40 kg,
C is assumed as 15 N-s/m and the stiffness value is assumed as 6 × 10^4 N/m. The natural
frequency of the oscillator is 38.72 rad/s. The problem in hand is to identify the stiffness
of the SDOF oscillator. The forward problem has been solved by using a time marching
algorithm to obtain the response numerically. We use the Newmark-β algorithm, which is an
implicit, unconditionally stable time marching algorithm (Newmark, 1959). The MATLAB
code for the Newmark-β algorithm has been provided in the appendix.
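Since the appendix is not reproduced in this chapter, a minimal sketch of the constant average acceleration Newmark scheme (β = 1/4, γ = 1/2) used to solve Eq 2.57 is given below; the ground acceleration record ag (in m/s²) and the time step dt are assumed to be available in the workspace.

% Newmark-beta (average acceleration) solution of M*a + C*v + K*u = -M*ag (sketch)
M = 40;  C = 15;  K = 6e4;  dt = 0.01;           % values used in this example
beta = 1/4;  gam = 1/2;
n = numel(ag);                                   % ag: ground acceleration record (assumed input)
u = zeros(n,1);  v = zeros(n,1);  a = zeros(n,1);
a(1) = (-M*ag(1) - C*v(1) - K*u(1))/M;           % initial acceleration
Keff = K + gam/(beta*dt)*C + 1/(beta*dt^2)*M;    % effective stiffness
for k = 1:n-1
    peff = -M*ag(k+1) ...
         + M*(u(k)/(beta*dt^2) + v(k)/(beta*dt) + (1/(2*beta) - 1)*a(k)) ...
         + C*(gam/(beta*dt)*u(k) + (gam/beta - 1)*v(k) + dt*(gam/(2*beta) - 1)*a(k));
    u(k+1) = peff/Keff;
    v(k+1) = gam/(beta*dt)*(u(k+1) - u(k)) + (1 - gam/beta)*v(k) + dt*(1 - gam/(2*beta))*a(k);
    a(k+1) = (u(k+1) - u(k))/(beta*dt^2) - v(k)/(beta*dt) - (1/(2*beta) - 1)*a(k);
end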
The ground excitation due to the Elcentro earthquake has been plotted in Fig 2.2. The
overall duration of the excitation is 40s. The time step considered in the analysis for solving
the forward problem is 0.01 sec. Hence the total number of data points are 4000. The
response of the oscillator is shown in Fig 2.3
The SIS filter is now applied to identify the stiffness value of the oscillator. The total number
of particles considered is 50. The initial values of the stiffness are generated in the domain
[10000, 90000] from a uniform distribution. The algorithm is dependent on the parameter
values generated at time t = 0. The identified value over the entire time history is shown in
Fig 2.4.
Hence, the algorithm acts as a filter and returns the best value among all the values generated
at t = 0. The effect of domain dependency can be bypassed and the algorithm can be made
more general by mutating the particles so obtained, i.e. by adding a small Gaussian noise with
a controlled value of σ, which can be tuned through several test runs of the algorithm. The
method has been demonstrated by solving a similar problem using the Bootstrap filter.
The example is also useful to study the evolution of weights of the particles as the time
increases. The degeneracy phenomenon explained above can be seen in Fig 2.5. For
simplicity and clarity, only the time history of 4 particles is given. The evolution of the posterior
density with time is given in Fig 2.6. The red dots in Fig 2.6 show the evolution of the
weight of the best particle across the time history. The estimated states and the states of
the original system are plotted in Fig 2.7. The degeneracy can be seen in Fig 2.6, where the
particle weights of all the particles become zero except the one traced with a red dot.
Now let us consider a similar example where the same SDOF oscillator is taken and the
system parameters are identified by implementing the Bootstrap Filter.
Here the focus is on the degeneracy phenomenon and the sample impoverishment introduced
by resampling. The problem of degeneracy can be solved by introducing the resampling step. Fig
2.8 shows the evolution of the posterior distribution at iteration number i = 1 (initial), i = 100
(intermediate) and i = 4001 (final). Though the degeneracy phenomenon is successfully bypassed,
sample impoverishment dominates here. At the last step of the iteration, 50 copies of
the best particle are available. However, the ratio of the identified value of the stiffness parameter
to its original value, as well as the convergence of the stiffness parameter, is shown in Fig 2.9.
The standard deviation of the stiffness samples becomes zero, indicating the convergence of
the algorithm.
To solve the problem of sample impoverishment, a small noise is added to the updated
posterior. The noise added in the present study is 2%. The value is obtained after several
test runs of the algorithm using different noise levels. The addition of a small noise maintains
the diversity of samples but the statistical fluctuations increase. The expected value of the
identified parameter is shown in Fig 2.10. The convergence is shown in Fig 2.11. It can be
seen that the identified value matches closely with the original value and the convergence is also
achieved.
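The roughening step described above amounts to a single line of MATLAB; the 2% level is the value used in this study, and K below denotes the vector of resampled stiffness particles.

% Roughening: perturb every resampled stiffness particle with a small Gaussian noise
sigma_rel = 0.02;                                   % 2% of the expected value, as used here
K = K + sigma_rel*mean(K)*randn(size(K));           % restores diversity among identical copies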
With the background provided through the mathematical framework and the examples
solved, we will formulate the problem of system identification in the next chapter for both
small and large scale structures. The following section focuses on the implementation of SIS,
SIR and the Bootstrap Filter for solving the parameter estimation problem for a synthetic
model as well as fixed base BRNS building. The synthetic model is solved by generating
the synthetic data with comparative results using several resampling algorithms. On the
other hand, the BRNS building is solved for multi-component ground excitations with sensor
measurements in both the directions.
Figure 2.1: Schematic diagram of SDOF system
Figure 2.2: Ground excitation due to Elcentro earthquake
Figure 2.3: Response of oscillator to ground excitation
Figure 2.4: Estimation of ratio of identified stiffness to original stiffness as function of time
Figure 2.5: Evolution of weights of particles over time
0
0.5
1
1.5
2
0
2
4
6
8
10
x 10
4
0
0.2
0.4
0.6
0.8
1
t (s)K (N/m)
weights
Figure 2.6: Evolution of posterior density with time
27
Examples
0 5 10 15 20 25 30 35 40
−2
−1.5
−1
−0.5
0
0.5
1
1.5
2
t (s)
¨ut(g)
True States
Estimated States
5 5.5 6
−1
0
1
Figure 2.7: States estimation from the original and the identified system
0 5 10
x 10
4
0
2
4
6
8
10
−1 0 1 2
x 10
5
0
0.5
1
1.5
x 10
−5
0 5 10
x 10
4
0
5
10
15
20
25
0 5 10
x 10
4
0
1
2
3
4
5
6
x 10
−5
5.957 5.9575 5.958 5.9585
x 10
4
0
10
20
30
40
50
5.957 5.9575 5.958 5.9585
x 10
4
0
0.1
0.2
0.3
0.4
a b c
Figure 2.8: Posterior evolution of distribution at iteration number a) initial, b) intermediate
(100) and c) final
28
Examples
0 5 10 15 20 25 30 35 40
0.7
0.8
0.9
1
1.1
1.2
1.3
µk
iden
/µk
or
0 5 10 15 20 25 30 35 40
0
0.5
1
1.5
2
2.5
x 10
4
t (s)
σ(N/m)
Figure 2.9: Mean and Standard Deviation of the identified stiffness parameter
0 0.5 1 1.5 2 2.5 3 3.5 4 4.5 5
0.8
0.85
0.9
0.95
1
1.05
1.1
1.15
t (s)
µ
k
iden
/µ
k
or
Figure 2.10: Expected value of stiffness by addition of noise (2% expected value)
29
Examples
0 0.5 1 1.5 2 2.5 3 3.5 4 4.5 5
0
0.5
1
1.5
2
2.5
x 10
4
t (s)
σ(N/m)
Figure 2.11: Convergence of the stiffness due to addition of noise (2% expected value)
30
Chapter 3
Parameter Estimation of LTI Systems
3.1 System Identification of Linear Time Invariant (LTI) Systems
We consider the problem of identifying the system parameters using the Bootstrap particle filter algorithm. The natural frequencies are calculated by solving the eigenvalue problem involving the mass and stiffness matrices. The general equation of motion for a linear system can be written as
M ü(t) + C u̇(t) + K u(t) = −M üg(t) (3.1)
where M is the mass matrix, C is the damping matrix, K is the stiffness matrix, üg(t) is the ground excitation and u(t), u̇(t) & ü(t) are respectively the displacement, velocity and acceleration of the nodes where the sensors are placed for recording.
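For reference, a minimal MATLAB sketch of this eigenvalue step (the same call used in the appendix codes) is given below; M_mat and K_mat are assumed to be the already assembled mass and stiffness matrices.

% Natural frequencies and mode shapes from the generalized eigenvalue problem
% K*phi = omega^2 * M * phi
[Phi, D] = eig(K_mat, M_mat);      % Phi: mode shapes, D: diagonal of omega^2 in (rad/s)^2
wn = sqrt(diag(D))/(2*pi);         % natural frequencies in Hz
[wn, order] = sort(wn);            % sort the modes in ascending order of frequency
Phi = Phi(:, order);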
The system parameters to be identified are represented by ϕ. Here ϕ could represent parameters such as stiffness, damping, mass density, etc. The above equation can be represented in continuous state space form as
Ż(t) = τ(Z(t), ϕ(t), t) (3.2)
where Z(t) is the state vector of the vibrating system and τ(·) relates the state of the system to its first order derivative with respect to time. Now the problem at hand is to identify the system parameters ϕ. The particle filter algorithm simulates particles based on
the updated posterior distribution of the state. More samples are generated from the region where the likelihood is greater. To solve the problem of identification of the parameters ϕ, the state vector can be augmented as Xk = [Zk ϕk] and, assuming the model noise to be a sequence of i.i.d. random variables wk, the above Eq 3.2 can be discretized in the form of Eq 2.2. The dimension of the problem is then equal to the sum of the dimensions of the vectors Z and ϕ. Hence one is able to identify the state vector as well as the parameters. In the system identification problem, however, we are generally interested in identifying the system parameters rather than in tracking the state of the system. Nasrellah and Manohar (2011) suggested that a large part of the computational effort can be avoided by formulating the problem in terms of the system parameters alone. Hence, for systems which remain invariant with time, the system equation can be expressed as:
dϕ/dt = 0,    ϕj(0) = ϕ0,    j = 1, 2, . . . , n (3.3)
where ϕ0 is the value of the system parameters at time t = 0. The discrete version of the equation can be presented as
ϕk+1 = ϕk + wk (3.4)
where ϕk is the vector of system parameters at time step k and wk is the model noise. The corresponding measurement equation can be written as
Yk = hk(ϕk) + vk (3.5)
The advantage of the above modeling is that it reduces the dimension of the state vector to that of the ϕ vector, and hence the associated computational effort. The MATLAB code for the bootstrap particle filter for system identification of LTI systems is given in the appendix. The implementation and the key steps involved in the algorithm are discussed below.
1. The algorithm starts with simulating N samples of all the parameters to be identified from the assumed pdf of ϕ0 at the time instant t = 0. The random particles are generated in a suitable domain defined by upper and lower bounds. These are also known as the prior estimates.
2. The next step involves solving N linear forward problems, Eq 3.1, one for each of the prior estimates ϕk−1. The forward problems are solved using the Newmark-β algorithm, where the values of α and δ are taken as 1/6 and 1/2 respectively (Newmark, 1959).
3. The predicted values obtained from step 2 are compared with the measurement values. The measurement values are either available through sensor recordings or are generated synthetically using Eq 3.5. We have considered both synthetic measurements and field data.
4. The comparison between the predicted values and the measurement data is made through the likelihood function p(Yk|Xk) at time t = k. We model the likelihood function as a normal distribution centered about the measurement with a small value of the standard deviation. Thus, each particle propagated to t = k is weighted.
5. The calculated weights are normalized using the equation
δj = p(Yk|ϕk,j) / Σ_{i=1}^{N} p(Yk|ϕk,i) (3.6)
The normalized weights are passed to the resampling algorithm to generate the sample for the next iteration; they constitute the discrete probability mass function for the next iteration. This is known as the posterior estimate of ϕk for the time step k.
6. The mean of the estimates is calculated by averaging over the ensemble, expressed as
µ = (1/N) Σ_{j=1}^{N} ϕk,j (3.7)
7. The standard deviation of the samples across the ensemble is calculated as
σ = [ (1/N) Σ_{j=1}^{N} (ϕk,j − µ)^T (ϕk,j − µ) ]^{1/2} (3.8)
8. The above algorithm is repeated iteratively by incrementing the time steps as t = k+1.
In this way the filtering is carried out for the entire process.
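To make steps 2–7 concrete, a condensed MATLAB sketch of a single filtering step is given below. It only illustrates the logic; the helper names forward_response and resample are assumed placeholders for the Newmark-β solver and the resampling routine that are listed in full in the appendix.

% One time step of the bootstrap filter for parameter estimation (sketch)
% phi : N x n matrix of particles (each row = one candidate parameter set)
% y_k : measured acceleration at time step k;  x_R : error covariance (0.001 in this study)
N = size(phi,1);
w = zeros(N,1);
for j = 1:N
    y_pred = forward_response(phi(j,:), k);                   % step 2: forward solve (assumed helper)
    w(j)   = exp(-(y_k - y_pred)^2/(2*x_R))/sqrt(2*pi*x_R);   % step 4: Gaussian likelihood
end
delta = w/sum(w);                    % step 5: normalized weights, Eq 3.6
phi   = resample(phi, delta);        % resampling step (assumed helper)
mu    = mean(phi,1);                 % step 6: ensemble mean, Eq 3.7
sigma = std(phi,1,1);                % step 7: ensemble spread (per parameter)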
Now we will describe the models and the results obtained upon the implementation of SIS,
SIR and Bootstrap filter to solve the system identification problem.
3.1.1 Synthetic Experiment
Fig 3.1 shows the plan and elevation of the three storied shear building model kept in the Structural Engineering laboratory of the Department of Civil Engineering, IIT Guwahati. We have generated synthetic measurements from the known values of the system parameters and then validated the results using all the three filter algorithms described in the previous chapter. The lumped mass of the slab (600×300×10 mm) has been calculated for solving the forward problem. The density of the steel is taken as 8400 kg/m³. The classical damping matrix is obtained by considering Rayleigh damping (Chopra, 2007):
C = αM + βK (3.9)
Here M, K and C are the mass, stiffness and damping matrices respectively, and α & β are the Rayleigh coefficients. For the given model, the mass and the stiffness matrices are given as
M = [ m1   0    0
      0    m2   0
      0    0    m3 ]   (3.10)

K = [ k1 + k2   −k2        0
      −k2       k2 + k3   −k3
      0         −k3        k3 ]   (3.11)
where m1, m2 and m3 are the lumped masses at the floor levels & k1, k2 and k3 are the stiffnesses of each story. The coefficients α and β can be obtained from specified damping ratios ζi and ζj for the ith and jth modes respectively. If both modes are assumed to have the same damping ratio ζ, then
α = ζ (2 wi wj)/(wi + wj),    β = ζ 2/(wi + wj) (3.12)
where wi and wj are the natural frequencies of the system in the ith and jth modes respectively.
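A short MATLAB sketch of Eq 3.9 and Eq 3.12 is given below. It assumes that the first two modes are used and that both share the same damping ratio; the value ζ = 0.02 is only an assumed illustration and is not a value taken from this study.

% Rayleigh damping matrix C = alpha*M + beta*K built from two target modes (Eq 3.9, 3.12)
zeta = 0.02;                        % assumed damping ratio for both modes (illustrative)
[~, D] = eig(K_mat, M_mat);
w  = sort(sqrt(diag(D)));           % natural frequencies in rad/s
wi = w(1);  wj = w(2);              % first two modes
alpha = zeta*2*wi*wj/(wi + wj);     % mass-proportional coefficient
beta  = zeta*2/(wi + wj);           % stiffness-proportional coefficient
C_mat = alpha*M_mat + beta*K_mat;   % classical (Rayleigh) damping matrix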
The response of the model has been calculated using the values of the parameters given in Table 3.1 below. This is known as solving the forward problem.
We have considered the response of the structure due to the ground motion excitations of the 1940 El-Centro earthquake, the 1989 Lomaprieta earthquake, the 1995 Kobe earthquake, the 1999 Chichi earthquake and the 2004 Parkfield earthquake. The plots of the ground motion excitations are shown in Fig 3.3. The response of the model for any of the given excitations can be plotted; for illustration, the responses of all the three stories of the model due to the Lomaprieta and El-Centro earthquakes are shown in Fig 3.4 & Fig 3.5 respectively.
The inverse problem starts with simulating random values from a uniform distribution at time t = 0 for the parameters which are to be identified. Here we simulate random values for the stiffness and damping at all the floor levels. The domain over which the stiffness values are simulated is 10000 to 90000 N/m and the damping values are between 0 and 50 N-s/m. The number of particles generated at t = 0 is 100, and this number remains constant for every iteration of the algorithm. The choice of the number of particles depends upon the computational time and the number of unknown parameters. For this specific study, the identification problem is solved using all the three filters, SIS, SIR & Bootstrap. Therefore, a single set of 100 random particles was generated at t = 0 and used with all the filters for a particular ground motion. This is done in order to present a fair comparison of the three algorithms. A normal distribution has been used to calculate the likelihood (i.e. the weights) of the particles, with the error covariance chosen as 0.001. The estimation of the parameters becomes more accurate as the number of particles is increased, but the computational time also increases.
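A minimal sketch of this initialization step is shown below; the bounds match the ones quoted above, while the variable names are assumptions used only for illustration.

% Initial pool of particles drawn from uniform priors at t = 0 (sketch)
N = 100;                              % number of particles
k_samp = 10000 + 80000*rand(N,3);     % story stiffnesses, 10000 to 90000 N/m
c_samp = 50*rand(N,3);                % story damping values, 0 to 50 N-s/m
phi0   = [k_samp c_samp];             % N x 6 matrix of prior particles
w0     = ones(N,1)/N;                 % equal initial weights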
To maintain the same order of presentation as in the example problem of Chapter 2, the synthetic three storied model is first solved using the SIS and SIR filters. All the five ground motions mentioned above are considered for the present study. Fig 3.6 and Fig 3.7 show the identified values of the parameters using the SIS filter and the SIR filter respectively. If the pool of simulated samples is kept the same, the identified values converge to the same values, and hence plots are given only for the El-Centro ground motion. Here, the SIR algorithm implements the resampling step only when the value of Neff calculated using Eq 2.48 falls below a threshold, as sketched below.
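A minimal sketch of this adaptive check, assuming normalized weights wk, a particle pool phi and the Resampling routine of the appendix with the 20% threshold used there, is:

% Adaptive (SIR) resampling based on the effective sample size, Eq 2.48 (sketch)
Neff = 1/sum(wk.^2);                        % effective number of particles
Nt   = 0.2*N;                               % threshold: 20% of the particle count
if Neff < Nt
    [phi, index] = Resampling(phi, wk', 1); % resample the particle pool
    wk = ones(N,1)/N;                       % reset to equal weights after resampling
end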
Finally, the Bootstrap filter is implemented with the resampling step incorporated after every iteration. Table 3.2 shows the identified values of the natural frequencies in the first three modes. It can be seen that, for a given ground motion, all the three filters give the same values of the natural frequencies in the first three modes. Moreover, the algorithms have also been compared on the basis of the number of iterative steps required for convergence. Table 3.3 gives the number of steps; it can be seen that the Bootstrap filter is slightly faster than the other two algorithms. In some cases all three converge after the same number of iterations, whereas in other cases the Bootstrap filter converges slightly faster.
Several traditional resampling algorithms have been compared based on the % error in the identified values as well as the number of time steps required to attain convergence. The plots are given for the El-Centro and Lomaprieta earthquakes, since the methodology remains the same for the other excitations; for those, the results are tabulated and compared. The mean ratio of the identified system parameters to their original values as well as the standard deviation of the parameters are plotted in Fig 3.8 & 3.9 and Fig 3.10 & 3.11 for the El-Centro & Lomaprieta earthquakes respectively. The inset views show the finer details and the fluctuations which take place over a very short period of time. The statistical fluctuations die out once the parameters are identified and the standard deviation becomes zero. Various traditional resampling schemes have been used to resample the distribution after each iteration of the algorithm. The robustness of the algorithm is clearly depicted in Fig 3.13 and Fig 3.12, which show the mode shapes of the identified and original system and the estimated states of the identified and original system respectively, for the El-Centro earthquake using the Bootstrap filter.
The comparative study shown in Tables 3.4 and 3.5 suggests that the systematic and stratified resampling algorithms give better estimates of the stiffness values. However, the other two resampling algorithms, wheel and multinomial, converge at a much faster rate than the stratified and systematic algorithms.
A sensitivity analysis for different SNR (signal to noise ratio) values has been carried out for both the El-Centro and Lomaprieta earthquakes. The results show that the Bootstrap filter is very robust even for low SNR values. The response of the model after the addition of noise can be seen in Fig 3.14. The parameters identified in the presence of the additional noise are unaffected and remain the same, as shown in Table 3.7. Table 3.6 shows the effect on the ratio of identified values to original values of increasing the number of particles from 100 to 1000. Here, it is noteworthy that the computational time also increases. It cannot be said with full confidence that this increase in particles would lead to a better estimation of the parameters; however, with the generation of more and more samples the probability of obtaining near accurate values increases.
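As an illustration of how the noisy measurements for this sensitivity study can be generated, a small sketch is given below. The appendix codes call a helper noisy_output for this purpose, whose exact convention is not listed; the stand-in here simply assumes that the quoted SNR parameter acts as a relative noise amplitude on the clean response acc_clean.

% Corrupt a clean response history with zero-mean Gaussian noise (sketch)
% acc_clean : column vector of simulated floor accelerations
% snr       : value quoted in the sensitivity study, assumed here to scale the noise level
snr   = 0.005;
noise = snr*std(acc_clean)*randn(size(acc_clean));   % assumed: noise std = snr * signal std
acc_noisy = acc_clean + noise;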
3.1.2 BRNS Building
This section aims at implementing the SIS, SIR and Bootstrap filter to identify the pa-
rameters of BRNS building subjected to multi-component earthquake ground excitations.
Response of the building has been measured in both the directions (x and y) by the sensors
placed at the first and the top storey of the building.The Fig 3.2 shows the plan and elevation
of the BRNS building. Fig 3.15 and 3.16 shows the x and y component of ground motion
36
System Identification of Linear Time Invariant (LTI) Systems
excitations recorded on 3rd
and 21th
September,2009. The multi-component response of the
BRNS building is shown in Fig 3.17 and Fig 3.18 for both the dates of measurement.
The parameters identified are the stiffness values in both the directions at each of the story
level as well as the cofficients α and β of the modal damping matrix given in Eq: 3.9.
Therefore, a total of ten values are identified.The mass and stiffness matrix of the BRNS
building is given in Eq 3.13 and 3.14.
M = diag( m1, m2, m3, m4, m5, m6, m7, m8 )   (3.13)

K = [ k1+k3    0      −k3      0       0       0       0       0
      0       k2+k4    0      −k4      0       0       0       0
      −k3      0      k3+k5    0      −k5      0       0       0
      0       −k4      0      k4+k6    0      −k6      0       0
      0        0      −k5      0      k5+k7    0      −k7      0
      0        0       0      −k6      0      k6+k8    0      −k8
      0        0       0       0      −k7      0       k7      0
      0        0       0       0       0      −k8      0       k8 ]   (3.14)
where m1, m2, m3, m4, m5, m6, m7 and m8 are the lumped masses in the x and y directions at each storey level, k1, k3, k5 and k7 are the storey stiffnesses in the x direction, and k2, k4, k6 and k8 are the storey stiffnesses in the y direction. The damping matrix is given by
C = αM + βK (3.15)
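A compact MATLAB sketch of assembling these system matrices as per Eq 3.13–3.15 is given below. The appendix codes use a helper BRNS_FB_Matrices(m,k) for this purpose; the version here is only an assumed, hand-rolled equivalent for illustration, with m, k, alpha and beta taken as already defined.

% Assemble the 8-DOF mass, stiffness and damping matrices of the BRNS model (sketch)
% m : 1x8 vector of lumped masses, k : 1x8 vector of storey stiffnesses
% (odd entries of k act along x, even entries along y, following Eq 3.14)
M = diag(m);                          % Eq 3.13: lumped (diagonal) mass matrix
K = zeros(8);
for i = 1:8
    K(i,i) = K(i,i) + k(i);           % storey spring associated with the DOF itself
    if i <= 6
        K(i,i)   = K(i,i) + k(i+2);   % spring of the storey above (same direction)
        K(i,i+2) = -k(i+2);           % coupling with the DOF two places ahead
        K(i+2,i) = -k(i+2);
    end
end
C = alpha*M + beta*K;                 % Eq 3.15: Rayleigh damping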
The original values of the parameters of the building are given in Table 3.8. The algorithm for solving the inverse problem remains the same: a pool of random particles is generated at time t = 0 and the particle weights are evaluated by computing the likelihood function, which is modeled as a normal distribution. Table 3.9 shows the values of the identified natural frequencies in all the 8 modes using the SIS, SIR and Bootstrap filters. It can be seen that all the three algorithms perform equally well, with the identified frequencies more or less the same in all the eight modes. The resampling algorithms have also been compared based on the values of the natural frequencies identified. Table 3.10 shows the identified values of the frequencies in all the modes using the traditional resampling algorithms, and the % error is given in Table 3.11. It can be seen that the error levels for the stratified and systematic resampling schemes are lower than those for the multinomial and wheel resampling. This is consistent with the results of the synthetic experiment on the laboratory model in the previous section, where the systematic and stratified schemes also showed superior performance.
Moreover, the results for the field measurements recorded on 21/09/2009 have been plotted. Fig 3.19 and Fig 3.20 show the identified values of the parameters and the convergence of the stiffness values using the Bootstrap filter. The first four mode shapes of the BRNS building are shown in Fig 3.21. It can be seen that the identified mode shapes are very close to the original mode shapes of the building. The original and estimated states of the first and the top storey are shown in Fig 3.22. The states match closely for the fundamental mode of vibration of the BRNS building.
Figure 3.1: Plan and elevation of the synthetic model
Figure 3.2: Plan and elevation of the BRNS building
Figure 3.3: Ground motion excitations (üg in g vs t in s): a) El-Centro, b) Lomaprieta, c) Chichi, d) Kobe and e) Parkfield
Figure 3.4: Response of the model (first, second and third floors): Lomaprieta earthquake
Figure 3.5: Response of the model (first, second and third floors): El-Centro earthquake
Figure 3.6: Ratio of identified to original parameters, SIS filter: El-Centro earthquake
Figure 3.7: Ratio of identified to original parameters, SIR filter: El-Centro earthquake
Figure 3.8: Ratio of identified stiffness to original stiffness (wheel, systematic, stratified and multinomial resampling): El-Centro earthquake
Figure 3.9: Standard deviation of stiffness (wheel, systematic, stratified and multinomial resampling): El-Centro earthquake
Figure 3.10: Ratio of identified stiffness to original stiffness (wheel, systematic, stratified and multinomial resampling): Lomaprieta earthquake
Figure 3.11: Standard deviation of stiffness (wheel, systematic, stratified and multinomial resampling): Lomaprieta earthquake
Figure 3.12: Original and estimated states of the model: El-Centro earthquake
Figure 3.13: Mode shapes of the original and identified structure
Figure 3.14: Response of the synthetic model with added noise (no noise vs SNR 0.005) for the El-Centro and Lomaprieta earthquakes
Figure 3.15: Ground excitation (x and y components) recorded on 03/09/2009
Figure 3.16: Ground excitation (x and y components) recorded on 21/09/2009
Figure 3.17: Response of the BRNS building to the multicomponent earthquake recorded on 03/09/2009
Figure 3.18: Response of the BRNS building to the multicomponent earthquake recorded on 21/09/2009
Figure 3.19: Ratio of identified stiffness to original stiffness at all the floor levels and its convergence
Figure 3.20: Coefficients α and β and the convergence of the damping coefficients
Figure 3.21: First four true modes and estimated modes of the BRNS building
Table 3.1: Parameter values for solving the forward problem
Mass (kg) Stiffness (N/m) Damping (N-s/m) Natural Frequency (Hz)
15.2 41987 19.032 4.1565
15.2 76842 34.173 12.8093
15.2 74812 33.630 19.8511
Figure 3.22: Original and estimated states of the BRNS building: a) first storey x direction, b) first storey y direction, c) top storey x direction and d) top storey y direction
Table 3.2: Comparison of the SIS, SIR and Bootstrap filters on the basis of the identified frequencies in the first three modes
Earthquake   Identified Natural Frequency (Hz)
             SIS: f1 f2 f3        SIR: f1 f2 f3        Bootstrap: f1 f2 f3
Chichi 4.162 11.54 19.32 4.162 11.54 19.32 4.162 11.54 19.32
El-Centro 4.159 12.721 19.371 4.159 12.721 19.371 4.159 12.721 19.371
Kobe 4.162 12.381 18.29 4.152 12.215 18.04 4.178 12.656 20.11
Lomaprieta 4.112 12.083 19.717 4.112 12.083 19.717 4.112 12.083 19.717
Parkfield 4.157 11.282 18.672 4.157 11.282 18.672 4.157 11.282 18.672
Table 3.3: Comparison of the SIS, SIR and Bootstrap filters on the basis of the number of convergence steps
Earthquake Convergence Steps
SIS SIR BF
Chichi 4696 4694 4581
El-Centro 178 177 177
Kobe 695 679 269
Lomaprieta 298 295 280
Parkfield 296 296 262
Table 3.4: Comparison of different resampling algorithms on the basis of identified value of
natural frequency
Earthquake   Resampling   Identified Natural Frequency (Hz): f1 f2 f3   % Error: f1 f2 f3
Multinomial 4.916 13.844 21.065 18.270 8.076 6.115
Wheel 3.530 11.702 18.536 -15.085 -8.645 -6.623
Chichi Stratified 4.199 10.992 18.053 1.020 -14.184 -9.056
Systematic 4.199 10.992 18.053 1.020 -14.184 -9.056
Multinomial 4.400 12.804 19.454 5.847 -0.039 -2.002
Wheel 4.280 13.157 19.993 2.978 2.718 0.716
ElCentro Stratified 4.187 12.669 19.748 0.729 -1.093 -0.520
Systematic 4.187 12.669 19.748 0.729 -1.093 -0.520
Multinomial 3.715 10.486 15.889 -10.634 -18.138 -19.958
Wheel 3.364 10.433 15.888 -19.057 -18.554 -19.963
Kobe Stratified 4.187 12.669 19.748 0.729 -1.093 -0.520
Systematic 4.187 12.669 19.748 0.729 -1.093 -0.520
Multinomial 4.529 13.746 17.980 8.953 7.309 -9.423
Wheel 4.081 12.984 20.307 -1.824 1.362 2.296
Lomaprieta Stratified 4.203 12.863 19.496 1.110 0.417 -1.788
Systematic 4.187 12.669 19.748 0.729 -1.093 -0.520
Multinomial 3.896 11.118 16.070 -6.277 -13.204 -19.046
Wheel 4.687 12.965 18.229 12.768 1.214 -8.173
Parkfield Stratified 4.187 12.669 19.748 0.729 -1.093 -0.520
Systematic 4.187 12.669 19.748 0.729 -1.093 -0.520
Table 3.5: Ratio of identified value of parameters to original value and comparison on the
basis of convergence steps
Earthquake   Resampling   Ratio of identified to original parameters: K1 K2 K3 C1 C2 C3   Steps
Multinomial 1.56 1.16 1.02 2.45 0.85 0.31 300
Wheel 0.67 0.87 0.89 1.14 0.26 0.97 500
Chichi Stratified 1.14 0.93 0.58 1.94 0.90 0.17 5000
Systematic 1.14 0.93 0.58 1.94 0.90 0.17 5000
Multinomial 1.21 0.96 0.92 0.33 1.30 0.19 150
Wheel 1.10 0.94 1.10 2.13 0.55 1.06 150
Elcentro Stratified 1.02 1.02 0.94 0.79 0.55 1.34 200
Systematic 1.02 1.02 0.94 0.79 0.55 1.34 200
Multinomial 0.89 0.65 0.59 2.48 1.15 0.63 300
Wheel 0.68 0.58 0.70 1.57 1.12 0.70 300
Kobe Stratified 1.02 1.02 0.94 0.79 0.55 1.34 350
Systematic 1.02 1.02 0.94 0.79 0.55 1.34 350
Multinomial 1.96 0.57 1.00 0.52 1.03 0.85 150
Wheel 0.93 1.04 1.06 1.72 0.61 0.93 150
Lomaprieta Stratified 1.07 0.89 1.04 0.52 1.08 1.30 250
Systematic 1.02 1.02 0.94 0.79 0.55 1.34 250
Multinomial 1.07 0.61 0.67 1.43 0.09 1.02 200
Wheel 1.76 0.77 0.81 0.73 0.12 0.51 200
Parkfield Stratified 1.02 1.02 0.94 0.79 0.55 1.34 250
Systematic 1.02 1.02 0.94 0.79 0.55 1.34 250
Table 3.6: Effect of the number of particles on the ratio of identified to original parameter values and on the identified frequencies
Earthquake   No. of particles   Ratio of identified to original value: K1 K2 K3 C1 C2 C3   Identified Frequency (Hz): f1 f2 f3
100 1.04 0.90 1.00 0.30 0.78 1.21 4.16 12.72 19.37
El-Centro 500 1.00 0.95 1.01 0.45 0.75 1.02 4.14 12.78 19.66
1000 1.03 0.96 1.01 1.93 1.32 0.70 4.18 12.95 19.86
Table 3.7: Sensitivity analysis due to addition of noise with different SNR
Earthquake   Signal to Noise ratio   Ratio of identified values to original values: K1 K2 K3 C1 C2 C3
no noise 1.036 0.904 1.004 0.300 0.771 1.205
0.005 1.036 0.904 1.004 0.300 0.771 1.205
El-Centro 0.05 1.036 0.904 1.004 0.300 0.771 1.205
0.1 1.036 0.904 1.004 0.300 0.771 1.205
no noise 1.036 0.904 1.004 0.300 0.771 1.205
0.01 1.036 0.904 1.004 0.300 0.771 1.205
Lomaprieta 0.05 1.036 0.904 1.004 0.300 0.771 1.205
0.1 1.036 0.904 1.004 0.300 0.771 1.205
Table 3.8: Original parameters of the BRNS building
Mass (kg) Stiffness (N/m) Natural Frequency (Hz)
27636.97 130215257.8 4.7474
27636.97 198788000.6 5.8439
25618.62 230377923.8 13.1977
25618.62 344149172 16.2495
25618.62 230377923.7 19.9821
25618.62 344149172 24.5701
17805.65 130215257.9 27.0526
17805.65 198788001 33.1045
Table 3.9: Comparison of SIS, SIR and Bootstrap filter on the basis of identified values of
natural frequency in eight modes
Recording Date   Mode   Identified Natural Frequency (Hz): SIS SIR BF   % Error: SIS SIR BF
f1 4.744 4.744 4.7536 -0.072 -0.072 0.131
f2 5.7817 5.7817 5.8042 -1.064 -1.064 -0.679
f3 13.1682 13.1682 13.1659 -0.224 -0.224 -0.241
3/9/2009 f4 16.0314 16.0314 16.0607 -1.342 -1.342 -1.162
f5 19.9516 19.9516 19.9316 -0.153 -0.153 -0.253
f6 24.3253 24.3253 24.4062 -0.996 -0.996 -0.667
f7 27.0414 27.0414 27.0667 -0.041 -0.041 0.052
f8 33.2141 33.2141 33.3639 0.331 0.331 0.784
f1 4.744 4.744 4.7392 -0.076 -0.076 -0.173
f2 5.748 5.748 5.8485 -1.648 -1.648 0.079
f3 13.163 13.163 13.0686 -0.267 -0.267 -0.978
21/09/2009 f4 16.027 16.027 16.3301 -1.371 -1.371 0.496
f5 19.955 19.955 19.7618 -0.138 -0.138 -1.102
f6 24.260 24.260 24.9543 -1.260 -1.260 1.564
f7 27.025 27.025 26.6724 -0.103 -0.103 -1.405
f8 33.068 33.068 34.3376 -0.112 -0.112 3.725
Table 3.10: Identified frequencies of the BRNS building in all the eight modes using all the resampling algorithms
Recording Identified Natural Resampling Algorithms
Date Frequency (Hz) Multinomial Wheel Stratified Systematic
f1 4.8043 4.6294 4.7494 4.7465
f2 5.7172 5.9018 5.8288 5.7995
f3 13.2598 13.2777 13.1368 13.1889
3/9/2009 f4 15.8172 15.8525 16.1078 16.0658
f5 20.0562 20.1657 19.9225 19.9894
f6 24.748 24.4544 24.5621 24.3583
f7 27.2657 27.0644 27.0203 27.0641
f8 33.1886 33.5388 33.8024 33.3197
f1 4.7871 4.7193 4.7781 4.7474
f2 5.6735 5.6726 5.8349 5.8425
f3 13.3091 13.1979 13.3176 13.1333
21/09/2009 f4 15.8104 16.2785 16.0635 16.2395
f5 20.1184 19.9776 20.1428 19.8837
f6 23.4591 24.1307 24.5139 24.7591
f7 26.9039 26.646 26.8554 26.7694
f8 32.2265 33.2485 33.7687 34.1282
Table 3.11: Comparison of resampling algorithms on the basis of %error in identified natural
frequency
Recording Identified Natural % Error
Date Frequency (Hz) Multinomial Wheel Stratified Systematic
f1 1.199 -2.486 0.042 -0.019
f2 -2.168 0.991 -0.258 -0.760
f3 0.471 0.606 -0.461 -0.067
3/9/2009 f4 -2.660 -2.443 -0.872 -1.130
f5 0.371 0.919 -0.298 0.037
f6 0.724 -0.471 -0.033 -0.862
f7 0.788 0.044 -0.119 0.043
f8 0.254 1.312 2.108 0.650
f1 0.836 -0.592 0.647 0.000
f2 -2.916 -2.931 -0.154 -0.024
f3 0.844 0.002 0.908 -0.488
21/09/2009 f4 -2.702 0.178 -1.145 -0.062
f5 0.682 -0.023 0.804 -0.492
f6 -4.522 -1.788 -0.229 0.769
f7 -0.550 -1.503 -0.729 -1.047
f8 -2.652 0.435 2.006 3.092
Chapter 4
Conclusion
4.1 Conclusion
From the example solved in Chapter 2 and the numerical results obtained in Chapter 3, the following conclusions can be drawn:
• The example problem presented in Chapter 2 clearly shows the problem of degeneracy in the SIS filter, which motivates the need for adaptive particle filters (SIR) or Bootstrap particle filters that involve the resampling step. However, resampling also leads to sample impoverishment, and hence a small noise was added to create diversity among the samples. This is one way of reducing sample impoverishment, but it becomes difficult when the dynamic system becomes complex. Thus, one shortcoming of the present study is that the algorithm is highly dependent on the sample values generated at t = 0. The problem becomes trivial if one is able to sample the particles each time from the updated posterior distribution.
• The algorithm gives fairly good results, with the natural frequencies and mode shapes of the identified systems close to those of the original system for both synthetic and field data. The advantage of the particle based approach is the robustness of the algorithm, which is able to pick the best value among the samples generated at t = 0. All the three algorithms, namely SIS, SIR and BF, give almost similar results when the same pool of simulated particles is used. However, the identified values differ slightly in the case of field data. The sensitivity analysis for the lab model shows that the BF is robust enough to converge to accurate values even with SNR values as low as 0.005. The results become more accurate as the number of particles is increased; theoretically, as N goes to infinity, the approximated posterior becomes exact. However, this again comes at the cost of computation.
• The comparative study of the traditional resampling algorithms clearly suggests that stratified and systematic resampling give better estimates than multinomial and wheel resampling for both studies, i.e. the synthetic model as well as the BRNS building. However, one can clearly observe that the number of time steps required is larger for the systematic & stratified schemes. This is acceptable for an offline case but may become an issue while implementing the algorithm in an online health monitoring system.
• The performance of all the algorithms suggests that they work well for both the synthetic and the field data. However, modeling uncertainties can never be ruled out and the technique is still not fool-proof for implementation on any general structure. The building geometry and the number of unknown parameters play an important role in the performance of the algorithm.
4.2 Future work
Based on the current work, we plan the following future work:
• We intend to improve the current algorithm by implementing the Metropolis-Hastings algorithm so that the problem of sample impoverishment is overcome in a more general and better way.
• We look forward to applying the present studies to a base isolated building with a Bouc-Wen hysteretic damping system at each of the floor levels. The present work has successfully identified the natural frequencies of a fixed base structure, which is an encouragement to apply similar studies in future work.
• We are also interested in testing the algorithm on large scale real life models such as bridges. One such bridge is the railway overhead bridge connecting to the main campus of IIT Guwahati. The data has been recorded and we are in the process of modeling the bridge in free vibration mode in a professional finite element software package.
Bibliography
M.S. Arulampalam, S. Maskell, N. Gordon, and T. Clapp. A tutorial on particle filters for
online nonlinear/non-gaussian bayesian tracking. Signal Processing, IEEE Transactions,
50(2):174–188, Feb 2002.
C. Bao, H. Hao, Z.X. Li, and X.F. Zhu. Time-varying system identification using a newly
improved hht algorithm. Computers and Structures, 87:1611 – 1623, 2009.
T.R Bayes. Essay towards solving a problem in the doctrine of chances. Phil. Trans. Roy.
Soc. Lond, 53:370–418, 1763.
J.V. Candy. Bootstrap particle filtering. Signal Processing Magazine, IEEE, 24(4):73–85,
July 2007.
Z. Chen. Bayesian filtering: From kalman filters to particle filters, and beyond. Manuscript,
2003.
J. Ching, J.L. Beck, and K.A. Porter. Bayesian state and parameter estimation of uncertain
dynamical systems. Probabilistic Engineering Mechanics, 21(1):81 – 96, 2006.
A.K. Chopra. Dynamics of Structures. Pearson, third edition, 2007.
G. Franco, R. Betti, and H. Lu. Identification of structural systems using an evolutionary
strategy. Journal of Engineering Mechanics, 130(10):1125–1139, 2004.
R. Ghanem and M. Shinozuka. Structural-system identification i and ii: Theory. Journal of
Engineering Mechanics, ASCE, 121(2):265–273, 1995.
S. Ghosh, C.S. Manohar, and D. Roy. Sequential importance sampling filters with a new
proposal distribution for parameter identification of structural systems. In Proceedings of
Royal Society of London, Series A, volume 464, pages 25–47, 2008.
N.J. Gordon, D.J. Salmond, and A.F.M. Smith. Novel approach to nonlinear/non-gaussian
bayesian state estimation. Radar and Signal Processing, IEE Proceedings, 140(2):107–113,
Apr 1993.
Y.C. Ho and R.C.K. Lee. A bayesian approach to problems in stochastic estimation and control. IEEE Trans. Automat. Control, 9(4):333–339, Oct. 1964.
M. Hoshiya and E. Saito. Structural identification by extended kalman filter. Journal of
Engineering Mechanics, 110(12):1757–1770, 1984.
G. Housner, L. Bergman, T. Caughey, A. Chassiakos, R. Claus, S. Masri, R. Skelton,
T. Soong, B. Spencer, and J. Yao. Structural control: Past, present, and future. Journal
of Engineering Mechanics, 123(9):897–971, 1997.
R.E. Kalman. A new approach to linear filtering and prediction problems. Journal of Fluids
Engineering, ASME, 82:35–45, 1960.
J.T. Kim and N. Stubbs. Improved damage identification method based on modal informa-
tion. Journal of Sound and Vibration, 252(2):223 – 238, 2002.
U. Lee and J. Shin. A frequency response function-based structural damage identification
method. Computers and Structures, 80(2):117 – 132, 2002.
T. Li. Resampling methods for particle filtering. Manuscript, 2013.
K. Liew and Q. Wang. Application of wavelet theory for crack identification in structures.
Journal of Engineering Mechanics, 124(2):152–157, 1998.
C.S. Manohar and D. Roy. Monte carlo filters for identification of nonlinear structural
dynamical systems. Sadhana, 31(4):399–427, 2006.
B. Moaveni, X. He, J. Conte, J. Restrepo, and M. Panagiotou. System identification study of
a 7-story full-scale building slice tested on the ucsd-nees shake table. Journal of Structural
Engineering, 137(6):705–717, 2011.
V. Namdeo and C.S. Manohar. Nonlinear structural dynamical system identification using
adaptive particle filters. Journal of Sound and Vibration, 306(35):524 – 563, 2007.
H. A. Nasrellah and C. S. Manohar. Particle filters for structural system identification
using multiple test and sensor data: A combined computational and experimental study.
Structural Control and Health Monitoring, 18(1):99–120, 2011.
N.M. Newmark. A method of computation for structural dynamics. Journal of the Engi-
neering Mechanics Division, 85(3):67–94, July 1959.
A. Raich and T. Liszkai. Improving the performance of structural damage detection methods
using advanced genetic algorithms. Journal of Structural Engineering, 133(3):449–461,
2007.
R. Rangaraj. Identification of fatigue cracks in vibrating beams using particle filtering algorithm. Master's thesis, Indian Institute of Technology Madras, September 2012.
R. Sajeeb, C.S. Manohar, and D. Roy. Control of nonlinear structural dynamical systems with noise using particle filters. Journal of Sound and Vibration, 306:111–135, 2007.
T. Soderstrom. System Identification. Prentice Hall International, 2001.
J. Spragins. A note on the iterative application of bayes rule. IEEE Trans. Inform. Theory, 11(4):544–549, 1965.
S. Thrun. Particle filters in robotics (invited talk). In Proceedings of the Eighteenth Confer-
ence Annual Conference on Uncertainty in Artificial Intelligence (UAI-02), pages 511–518,
San Francisco, CA, 2002. Morgan Kaufmann.
Appendix
MATLAB code for parameter estimation of SDOF oscillator using SIS filter
This program finds out the stiffness parameter of the SDOF oscillator using the SIS filter. The code is for the example problem solved in Chapter 2.
% =========================================================================
% PROGRAMMER : ANSHUL GOYAL
% DATE       : 02.01.2014 (Last modified: 25.01.2014)
% ABSTRACT   : Stiffness parameter identification of SDOF oscillator using
%              Sequential Importance Sampling
% =========================================================================
clear all
close all
clc
% *************************************************************************
% Input Section:
% ==============
m = 40;
c = 15;
k = 60000;
k_or = 60000;
N = 150;          % Number of particles
x_R = 0.001;      % Error covariance
% *************************************************************************
% Forward Problem for Synthetic Measurements:
% ===========================================
% load El_Centro_EW.dat
% exct = El_Centro_EW;
% time = El_Centro_EW(:,1);
% plot(exct(:,1),exct(:,2));
% xlabel('T (s)'); ylabel('$\ddot{X}_{g} (m/s^2)$','interpreter','latex')
% Inc = zeros(1,3);
% [U,Ud,Udd] = Newmark_Beta_MDOF(m,k,c,Inc,exct);
% response = [time Udd'];
% save sis_measurement.dat -ascii response
% *************************************************************************
% Inverse Problem for System Identification using SIS Filter:
% ===========================================================
load sis_measurement.dat
load El_Centro_EW.dat
exct = El_Centro_EW;
time = sis_measurement(:,1);
t = length(time);
acc = sis_measurement(:,2);
k_inv = 10000 + (80000).*rand(N,1);   % particles from uniform dist.
sorted = sort(k_inv);                 % sorted values of the particles
k = 1;
for i = 1:N
    w(i,k) = 1/N;
end
wk(:,k) = w(:,k)./sum(w(:,k));
Inc_prior(:,:,N) = zeros(1,3);
for k = 2:t
    k
    for i = 1:N
        [Inc_update] = Newmark_Beta_MDOF_instant(m,k_inv(i),c,...
            Inc_prior(:,:,i),exct,k,0.5,1/6);
        C(:,:,i) = Inc_update;
        % Estimate Likelihood of Simulation:
        % ==================================
        w(i,k) = wk(i,k-1)*(1/sqrt(2*pi*x_R))*exp(-((acc(k)...
            - Inc_update(1,3))^2)/(2*(x_R)));
    end
    % Updating the particle weights:
    % ==============================
    wk(:,k) = w(:,k)./sum(w(:,k));
    Inc_prior = C;
    k_estimate = 0;
    % Estimating the parameter value:
    % ===============================
    for i = 1:N
        k_estimate = k_estimate + wk(i,k)*k_inv(i);
    end
    k_iden(k-1) = k_estimate/k_or;
end
% *************************************************************************
% Plots
% =====
% plot(time(2:t),k_iden);
% xlabel('T (s)'); ylabel('K (kN/m)');
% plot(time(1:200),wk(30,1:200),'*r',time(1:200),wk(9,1:200),...
%     '*g',time(1:200),wk(2,1:200),'*b',time(1:200),wk(4,1:200),'*k')
% xlabel('T (s)'); ylabel('weights')
% legend('6.02E4','6.77E4','6.45E4','1.57E4')
% [ksort,index] = sort(k_inv);
% for i = 1:4:200
%     tp = ((i-1)*0.01)*ones(50,1);
%     plot3(tp,ksort,wk(index,i),'b')
%     xlabel('T (s)'); ylabel('K (N/m)'); zlabel('weights');
%     hold on
% end
% hold on
% for i = 1:4:200
%     plot3(time(i),k_inv(30),wk(30,i),'.r','Markersize',14)
%     hold on
% end
MATLAB code for parameter identification of synthetic model using Sequential
Importance Sampling (SIS) filter
This code solves the parameter identification problem for the three storied shear building synthetic model using the SIS filter.
% =========================================================================
% PROGRAMMER : ANSHUL GOYAL
% DATE       : 02.01.2014 (Last modified: 25.01.2014)
% ABSTRACT   : Sequential Importance Sampling (SIS) filter code for
%              parameter estimation of laboratory model
% =========================================================================
clear all
close all
clc
% *************************************************************************
% Input Section:
% ==============
m = [15.2 15.2 15.2];
c = [19.032 34.173 33.63];
k = [41987 76842 74812];
N = 100;
x_R = 0.001;
% *************************************************************************
% Forward Problem for Synthetic Measurements:
% ===========================================
% load Elcentro_X.dat
% exct = Elcentro_X;
% time = Elcentro_X(:,1);
% Inc = zeros(3,3);
% [M_mat,K_mat,C_mat] = LTI_System_Matrices(m,k,c);
% [U,Ud,Udd] = Newmark_Beta_MDOF(M_mat,K_mat,C_mat,Inc,exct);
% response = [time Udd'];
% save resp_elcentro.dat -ascii response
% *************************************************************************
% Eigen Analysis of laboratory model:
% ===================================
[M_mat,K_mat,C_mat] = LTI_System_Matrices(m,k,c);   % system matrices of the original model
[Phi,D] = eig(K_mat,M_mat);
wn_or = sqrt(diag(D))/(2*pi);   % Natural frequency in Hz
% *************************************************************************
% Inverse Problem for System Identification using SIS Filter:
% ===========================================================
load resp_elcentro.dat
load Elcentro_X.dat
load test_stiffness_3dof_elcentro.dat   % data file containing 100 samples
rand_sample = test_stiffness_3dof_elcentro;
exct = Elcentro_X;
time = resp_elcentro(:,1);
acc1 = resp_elcentro(:,2);
Inc = zeros(3,3);
% Simulating particles
% ====================
% k1 = 10000 + (80000).*rand(N,1);
% k2 = 10000 + (80000).*rand(N,1);
% k3 = 10000 + (80000).*rand(N,1);
% c1 = 50.*rand(N,1);
% c2 = 50.*rand(N,1);
% c3 = 50.*rand(N,1);
% Defining initial weights
% ========================
q = 1;
for i = 1:N
    w1(i,q) = 1/N;
    w2(i,q) = 1/N;
    w3(i,q) = 1/N;
end
wk(:,q) = w1(:,q)./sum(w1(:,q));
Inc_prior(:,:,N) = zeros(3,3);
for q = 2:length(time)
    q
    for ii = 1:N
        m = [15.2 15.2 15.2];
        k = [rand_sample(ii,1) rand_sample(ii,2) rand_sample(ii,3)];
        c = [rand_sample(ii,4) rand_sample(ii,5) rand_sample(ii,6)];
        [M_mat,K_mat,C_mat] = LTI_System_Matrices(m,k,c);
        [Inc_update] = Newmark_Beta_MDOF_instant(M_mat,K_mat,C_mat,...
            Inc_prior(:,:,ii),exct,q,0.5,1/6);
        C(:,:,ii) = Inc_update;
        % Estimate Likelihood of Simulation:
        % ==================================
        w1(ii,q) = wk(ii,q-1)*(1/sqrt(2*pi*x_R))*exp(-((acc1(q) -...
            Inc_update(1,3))^2)/(2*(x_R)));
    end
    % Updating the particle weights:
    % ==============================
    w = w1;
    Inc_prior = C;
    wk(:,q) = w(:,q)./sum(w(:,q));
    % Estimating and storing the parameter values:
    % ============================================
    k1_iden = 0;
    k2_iden = 0;
    k3_iden = 0;
    c1_iden = 0;
    c2_iden = 0;
    c3_iden = 0;
    for ii = 1:N
        k1_iden = k1_iden + wk(ii,q)*rand_sample(ii,1);
        k2_iden = k2_iden + wk(ii,q)*rand_sample(ii,2);
        k3_iden = k3_iden + wk(ii,q)*rand_sample(ii,3);
        c1_iden = c1_iden + wk(ii,q)*rand_sample(ii,4);
        c2_iden = c2_iden + wk(ii,q)*rand_sample(ii,5);
        c3_iden = c3_iden + wk(ii,q)*rand_sample(ii,6);
    end
    k11_iden(q-1) = k1_iden;
    k22_iden(q-1) = k2_iden;
    k33_iden(q-1) = k3_iden;
    c11_iden(q-1) = c1_iden;
    c22_iden(q-1) = c2_iden;
    c33_iden(q-1) = c3_iden;
end
% Plotting the results
% ====================
% subplot(2,3,1)
% plot(time(2:4001),k11_iden/k(1),'b')
% xlabel('t (s)'); ylabel('\mu_{k1_{iden}}/\mu_{k1_{org}}');
% subplot(2,3,2)
% plot(time(2:4001),k22_iden/k(2),'b')
% xlabel('t (s)'); ylabel('\mu_{k2_{iden}}/\mu_{k2_{org}}');
% subplot(2,3,3)
% plot(time(2:4001),k33_iden/k(3),'b')
% xlabel('t (s)'); ylabel('\mu_{k3_{iden}}/\mu_{k3_{org}}');
% subplot(2,3,4)
% plot(time(2:4001),c11_iden/c(1),'b')
% xlabel('t (s)'); ylabel('\mu_{c1_{iden}}/\mu_{c1_{org}}');
% subplot(2,3,5)
% plot(time(2:4001),c22_iden/c(2),'b')
% xlabel('t (s)'); ylabel('\mu_{c2_{iden}}/\mu_{c2_{org}}');
% subplot(2,3,6)
% plot(time(2:4001),c33_iden/c(3),'b')
% xlabel('t (s)'); ylabel('\mu_{c3_{iden}}/\mu_{c3_{org}}');
MATLAB code for parameter identification of synthetic model using Sequential
Importance Re-Sampling (SIR) filter
This code identifies the system parameters of the three storied shear building synthetic model using the SIR filter.
% =========================================================================
% PROGRAMMER : ANSHUL GOYAL
% DATE       : 02.01.2014 (Last modified: 25.01.2014)
% ABSTRACT   : Sequential Importance Re-Sampling (SIR) filter / Adaptive
%              Particle Filter code for parameter estimation of laboratory model
% =========================================================================
clear all
close all
clc
% *************************************************************************
% Input Section:
% ==============
m = [15.2 15.2 15.2];
c = [19.032 34.173 33.63];
k = [41987 76842 74812];
N = 100;
x_R = 0.001;
% *************************************************************************
% Forward Problem for Synthetic Measurements:
% ===========================================
% load ChiChi_X.dat
% exct = ChiChi_X;
% time = ChiChi_X(:,1);
% Inc = zeros(3,3);
% [M_mat,K_mat,C_mat] = LTI_System_Matrices(m,k,c);
% [U,Ud,Udd] = Newmark_Beta_MDOF(M_mat,K_mat,C_mat,Inc,exct);
% response = [time Udd'];
% save resp_ChiChi.dat -ascii response
% *************************************************************************
% Eigen Analysis of laboratory model:
% ===================================
[M_mat,K_mat,C_mat] = LTI_System_Matrices(m,k,c);   % system matrices of the original model
[Phi,D] = eig(K_mat,M_mat);
wn_or = sqrt(diag(D))/(2*pi);   % Natural frequency in Hz
% *************************************************************************
% Inverse Problem for System Identification using SIR Filter:
% ===========================================================
tm = cputime;
load resp_ChiChi.dat
load ChiChi_X.dat
load test_stiffness_3dof_ChiChi.dat   % data file containing 100 samples
rand_sample = test_stiffness_3dof_ChiChi;
exct = ChiChi_X;
time = resp_ChiChi(:,1);
acc1 = resp_ChiChi(:,2);
acc2 = resp_ChiChi(:,3);
acc3 = resp_ChiChi(:,4);
Inc = zeros(3,3);
% Simulating particles
% ====================
% k1 = 10000 + (80000).*rand(N,1);
% k2 = 10000 + (80000).*rand(N,1);
% k3 = 10000 + (80000).*rand(N,1);
% c1 = 50.*rand(N,1);
% c2 = 50.*rand(N,1);
% c3 = 50.*rand(N,1);
% Defining initial weights
% ========================
q = 1;
for i = 1:N
    w1(i,q) = 1/N;
    w2(i,q) = 1/N;
    w3(i,q) = 1/N;
end
wk(:,q) = w1(:,q)./sum(w1(:,q));
Inc_prior(:,:,N) = zeros(3,3);
for q = 2:500
    q
    for ii = 1:N
        m = [15.2 15.2 15.2];
        k = [rand_sample(ii,1) rand_sample(ii,2) rand_sample(ii,3)];
        c = [rand_sample(ii,4) rand_sample(ii,5) rand_sample(ii,6)];
        [M_mat,K_mat,C_mat] = LTI_System_Matrices(m,k,c);
        [Inc_update] = Newmark_Beta_MDOF_instant(M_mat,K_mat,C_mat,...
            Inc_prior(:,:,ii),exct,q,0.5,1/6);
        C(:,:,ii) = Inc_update;
        % Estimate Likelihood of Simulation:
        % ==================================
        w1(ii,q) = wk(ii,q-1)*(1/sqrt(2*pi*x_R))*exp(-((acc1(q) -...
            Inc_update(1,3))^2)/(2*(x_R)))*(1/sqrt(2*pi*x_R))*...
            exp(-((acc2(q) - Inc_update(2,3))^2)/(2*(x_R)))...
            *(1/sqrt(2*pi*x_R))*exp(-((acc3(q) - Inc_update(3,3))^2)...
            /(2*(x_R)));
    end
    % Updating the particle weights:
    % ==============================
    w = w1;
    wk(:,q) = w(:,q)./sum(w(:,q));
    Inc_prior = C;
    Neff = 1/sum(wk(:,q).^2);
    resample_percentaje = 0.2;
    Nt = resample_percentaje*N;
    % Calculating Neff and threshold criteria:
    % ========================================
    if Neff < Nt
        Ind = 1;
        % Resampling step : Adaptive control
        % ==================================
        disp('Resampling ...')
        [rand_sample,index] = Resampling(rand_sample,wk(:,q)',Ind);
        Inc_prior = C(:,:,index);
        for i = 1:N
            wk(i,q) = 1/N;
        end
    end
    % Estimating and storing the parameter values:
    % ============================================
    k1_iden = 0;
    k2_iden = 0;
    k3_iden = 0;
    c1_iden = 0;
    c2_iden = 0;
    c3_iden = 0;
    for ii = 1:N
        k1_iden = k1_iden + wk(ii,q)*rand_sample(ii,1);
        k2_iden = k2_iden + wk(ii,q)*rand_sample(ii,2);
        k3_iden = k3_iden + wk(ii,q)*rand_sample(ii,3);
        c1_iden = c1_iden + wk(ii,q)*rand_sample(ii,4);
        c2_iden = c2_iden + wk(ii,q)*rand_sample(ii,5);
        c3_iden = c3_iden + wk(ii,q)*rand_sample(ii,6);
    end
    k11_iden(q-1) = k1_iden;
    k22_iden(q-1) = k2_iden;
    k33_iden(q-1) = k3_iden;
    c11_iden(q-1) = c1_iden;
    c22_iden(q-1) = c2_iden;
    c33_iden(q-1) = c3_iden;
end
k_inv = [k11_iden(499) k22_iden(499) k33_iden(499)];
m = [15.2 15.2 15.2];
[M_mat,K_mat,C_mat] = LTI_System_Matrices(m,k_inv,c);
[Phi_in,D] = eig(K_mat,M_mat);
wn_inv = sqrt(diag(D))/(2*pi)   % Natural frequency in Hz
cpu_time = cputime - tm
% Plotting the results
% ====================
% subplot(2,3,1)
% plot(time(2:4001),k11_iden/k(1),'b')
% xlabel('t (s)'); ylabel('\mu_{k1_{iden}}/\mu_{k1_{org}}');
% subplot(2,3,2)
% plot(time(2:4001),k22_iden/k(2),'b')
% xlabel('t (s)'); ylabel('\mu_{k2_{iden}}/\mu_{k2_{org}}');
% subplot(2,3,3)
% plot(time(2:4001),k33_iden/k(3),'b')
% xlabel('t (s)'); ylabel('\mu_{k3_{iden}}/\mu_{k3_{org}}');
% subplot(2,3,4)
% plot(time(2:4001),c11_iden/c(1),'b')
% xlabel('t (s)'); ylabel('\mu_{c1_{iden}}/\mu_{c1_{org}}');
% subplot(2,3,5)
% plot(time(2:4001),c22_iden/c(2),'b')
% xlabel('t (s)'); ylabel('\mu_{c2_{iden}}/\mu_{c2_{org}}');
% subplot(2,3,6)
% plot(time(2:4001),c33_iden/c(3),'b')
% xlabel('t (s)'); ylabel('\mu_{c3_{iden}}/\mu_{c3_{org}}');
MATLAB code for parameter identification of synthetic model using Bootstrap
filter (BF)
This is the code for the Bootstrap filter used to identify the system parameters of the synthetic model.
% =========================================================================
% PROGRAMMER : ANSHUL GOYAL & ARUNASIS CHAKRABORTY
% DATE : 02.01.2014(Last modified: 25:01:2014)
% ABSTRACT : Bootstrap filter for system identification of laboratory
% model
% =========================================================================
% Input Section:
% ==============
clear all
close all
clc
%=======================================================================
% Original parameters
m = [15.2 15.2 15.2];
c = [19.032 34.173 33.63];
k = [41987 76842 74812];
N p = 100; % No. of Particles
x R = 0.001; % Error Covariance
% RP = [10000 80000;10000 80000;10000 80000;0 50;0 50;0 50];
Ind = 1; % Indicator for different resampling strategy
%
%*************************************************************************
% Forward Problem for Synthetic Measurements:
% ===========================================
%
load Elcentro X.dat
t = Elcentro X(:,1);
xg t = Elcentro X(:,2);
exct = [t xg t];
dof = length(m);
Inc = zeros(dof,2);
% %
% [M mat,K mat,C mat]=LTI System Matrices(m,k,c);
73
Appendix A
% Eigen Analysis for Modal Parameters:
% ====================================
%
% [Phi,D] = eig(K mat,M mat);
% wn = sqrt(diag(D))/(2*pi) % Natural frequency in /s
%
% Direct Time Integration for Response:
% =====================================
[U,Ud,Udd] = Newmark Beta MDOF(M mat,K mat,C mat,Inc,exct);
% Out put = [t Udd'];
% snr = 0.005;
% [noisy out put] = noisy output(snr,Out put,dof);
% save −ascii resp elcentro.dat Out put
% save −ascii resp loma.dat Out put
% save −ascii noisy resp loma0005.dat noisy out put
% Plot response Lomaprieta X accelerations:
% *************************************************************************
% Inverse Problem for System Identification using Bootstrap Filter:
% =================================================================
tm = cputime;
load Elcentro X.dat
load resp elcentro.dat
load test stiffness 3dof elcentro.dat
t1 = resp elcentro(:,1);
% subplot(2,1,1)
% plot(t1,resp elcentro(:,3),'.b',t1,noisy resp elcentro0005(:,3),'.r')
% xlabel('T (s)'); ylabel('$ddot{X} {t}(m/sˆ2)$','interpreter', 'latex');
% legend ('No noise','SNR 0.005')
% subplot(2,1,2)
% plot(t,resp loma(:,3),'.b',t,noisy resp loma0005(:,3),'.r')
% xlabel('T (s)'); ylabel('$ddot{X} {t}(m/sˆ2)$','interpreter', 'latex');
% legend ('No noise','SNR 0.005')
nt = length(t);
dof = length(m);
Nu = 2*dof; % No. of Unknown
% R samp = zeros(N p,Nu);
% for ii = 1:Nu
% R samp(:,ii) = random('unif',RP(ii,1),RP(ii,2),N p,1);
% end
74
Appendix A
R_samp = test_stiffness_3dof_elcentro;
Mean_Estimate = zeros(nt,Nu);
Std_Estimate = zeros(nt,Nu);
Weights = zeros(N_p,dof);
for ii = 1:Nu
Mean_Estimate(1,ii) = mean(R_samp(:,ii));
Std_Estimate(1,ii) = std(R_samp(:,ii));
end
% Bootstrap Algorithm:
% ====================
Inc(:,:,N_p) = zeros(dof,2);
Inc_prior(:,:,N_p) = zeros(3,3);
Ind = 1;
ab = 500;
for ii = 2:ab
ii
% exct = [t(ii-1:ii,1) xg_t(ii-1:ii,1)];
exct = [Elcentro_X(:,1),Elcentro_X(:,2)];
for jj = 1:N_p
k1 = R_samp(jj,1:dof);
c1 = R_samp(jj,dof+1:Nu);
[M_mat,K_mat,C_mat]=LTI_System_Matrices(m,k1,c1);
[Inc_update] = Newmark_Beta_MDOF_instant(M_mat,K_mat,...
C_mat,Inc_prior(:,:,jj),exct,ii,0.5,1/6);
C(:,:,jj) = Inc_update;
for kk = 1:dof
Weights(jj,kk)=(1/sqrt(2*pi*x_R))*exp...
(-((resp_elcentro(ii,kk+1)-Inc_update(kk,3))^2)/(2*(x_R)));
end
end
Wt = prod(Weights,2);
weight = (Wt./sum(Wt))';
% Resampling:
% ===========
[R_samp,index] = Resampling(R_samp,weight,Ind);
Inc_prior = C(:,:,index);
for kk = 1:Nu
Mean_Estimate(ii,kk) = mean(R_samp(:,kk));
Std_Estimate(ii,kk) = std(R_samp(:,kk));
end
end
% Analysis of Identified System:
%====================================
k_inv = Mean_Estimate(ab-1,1:dof);
c_inv = Mean_Estimate(ab-1,dof+1:end);
[M_mat,K_mat,C_mat]=LTI_System_Matrices(m,k_inv,c_inv);
[Phi,D] = eig(K_mat,M_mat);
wn = sqrt(diag(D))/(2*pi) % Natural frequency in Hz
cpu_time = cputime-tm
% *************************************************************************
% End Program.
% *************************************************************************
MATLAB code for identification of BRNS building using Sequential Importance
Sampling (SIS) filter
This code estimates all the 10 unknown parameters of the fixed base RC framed BRNS building
using the SIS filter.
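For reference, the weight update implemented in the loop below is the sequential importance sampling recursion with a Gaussian likelihood for the four measured floor accelerations. Writing R for the error covariance x_R and Ŷ for the acceleration predicted by each particle at the measured channels (entries 1, 2, 7 and 8 of Udd in the code), the weights are updated as

\[
\tilde{w}_k^{(i)} \;\propto\; \tilde{w}_{k-1}^{(i)} \prod_{j=1}^{4} \frac{1}{\sqrt{2\pi R}}\,\exp\!\left[-\frac{\left(Y_{j,k}-\hat{Y}_{j,k}^{(i)}\right)^{2}}{2R}\right]
\]

and then normalized over the particles at every time step. This notation is introduced here only for illustration; it is not used elsewhere in the listings.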
% =========================================================================
% PROGRAMMER : ANSHUL GOYAL
% DATE : 02.01.2014(Last modified: 25:01:2014)
% ABSTRACT : Sequential Importance Sampling (SIS) filter Code for
% LTI System Identification of BRNS building
% on field measurement (21/09/2009)
% =========================================================================
clear all
close all
clc
% *************************************************************************
% Input Section:
% ==============
m = [27636.9724 27636.9724 25618.62385 25618.62385 25618.62385 ...
25618.62385 17805.65745 17805.65745];
k = [130215257.8 198788000.6 230377923.8 344149172 230377923.7 ...
344149172 130215257.9 198788001];
[M_mat,K_mat]=BRNS_FB_Matrices(m,k);
[Phi,D] = eig(K_mat,M_mat);
wn = sqrt(diag(D))/(2*pi); % Natural frequency in Hz
alf = 0.001;
bta = 0.02;
N_p = 20; % No. of Particles
x_R = 0.01; % Error Covariance
RP = [1.2E8 1.4E8;1.8E8 2.0E8;2.2E8 2.4E8;3.0E8 4.0E8;2.2E8 2.4E8;3.0E8 ...
4.0E8;1.2E8 1.4E8;1.8E8 2.0E8;0.0005 0.0015;0.01 0.03];
%**********************************************************************
% Inverse Problem for System Identification using SIS Filter:
% =================================================================
load msrmt2.dat
Response = [msrmt2(:,1) msrmt2(:,4) msrmt2(:,5) msrmt2(:,6) msrmt2(:,7)];
t = msrmt2(:,1);
nt = length(t);
xg_t = msrmt2(:,2);
yg_t = msrmt2(:,3);
dof = length(m);
Nu = dof+2; % No. of Unknown
% Memory allocation:
% ==================================
R_samp = zeros(N_p,Nu);
weight = zeros(N_p,nt-1);
w_n = zeros(N_p,nt);
iden_para = zeros(nt-1,Nu);
% Generating initial particles:
% ==================================
for ii = 1:Nu
R_samp(:,ii) = random('unif',RP(ii,1),RP(ii,2),N_p,1);
end
% SIS Algorithm
% ==================================
Inc(:,:,N_p) = zeros(dof,2);
Ifl = [-1 0;0 -1;-1 0;0 -1;-1 0;0 -1;-1 0;0 -1];
for i = 1:N_p
w_n(i,1) = 1/N_p;
end
for ii = 2:nt
ii
for jj = 1:N_p
k1 = R_samp(jj,1:dof);
c1 = R_samp(jj,dof+1:Nu);
[M_mat,K_mat] = BRNS_FB_Matrices(m,k1);
C_mat = c1(1).*M_mat+c1(2).*K_mat;
tt = t(ii-1:ii,1);
Ft = M_mat*Ifl*[xg_t(ii-1:ii)';yg_t(ii-1:ii)'];
Incd = Inc(:,:,jj);
[U,Ud,Udd] = Newmark_Beta_MDOF(M_mat,K_mat,C_mat,Incd,tt,Ft);
Inc_cond = [U(:,2) Ud(:,2)];
C(:,:,jj) = Inc_cond;
% Estimate Likelihood of Simulation and updating weights:
% ======================================================
weight(jj,ii) = w_n(jj,ii-1)*(1/sqrt(2*pi*x_R))*exp(-...
((Response(ii,2)-Udd(1,2))^2)/(2*(x_R)))*...
(1/sqrt(2*pi*x_R))*exp(-...
((Response(ii,3)-Udd(2,2))^2)/(2*(x_R)))*...
(1/sqrt(2*pi*x_R))*exp(-...
((Response(ii,4)-Udd(7,2))^2)/(2*(x_R)))*...
(1/sqrt(2*pi*x_R))*exp(-...
((Response(ii,5)-Udd(8,2))^2)/(2*(x_R)));
end
Inc_prior = C;
w_n(:,ii) = weight(:,ii)./sum(weight(:,ii));
% Estimating the parameters:
% ==================================
for p = 1:Nu
iden_para(ii,p) = sum(w_n(:,ii).*R_samp(:,p));
end
end
MATLAB code for identification of BRNS building using Sequential Importance
Re-Sampling (SIR) filter
This code estimates all the 10 unknown parameters of the fixed base RC framed BRNS building
using the SIR filter.
% =========================================================================
% PROGRAMMER : ANSHUL GOYAL
% DATE : 02.01.2014(Last modified: 25:01:2014)
% ABSTRACT : Sequential Importance Resampling (SIR) filter Code for
% LTI System Identification of BRNS building
% on field measurement (03/09/2009)
% =========================================================================
clc
clear all
close all
% *************************************************************************
% Input Section:
% ==============
tm = cputime;
m = [27636.9724 27636.9724 25618.62385 25618.62385 25618.62385 ...
25618.62385 17805.65745 17805.65745];
k = [130215257.8 198788000.6 230377923.8 344149172 230377923.7 ...
344149172 130215257.9 198788001];
[M_mat,K_mat]=BRNS_FB_Matrices(m,k);
[Phi,D] = eig(K_mat,M_mat);
wn = sqrt(diag(D))/(2*pi); % Natural frequency in Hz
alf = 0.001;
bta = 0.02;
N_p = 100; % No. of Particles
x_R = 0.01; % Error Covariance
% RP = [1.2E8 1.4E8;1.8E8 2.0E8;2.2E8 2.4E8;3.0E8 4.0E8;2.2E8 2.4E8;3.0E8 ...
% 4.0E8;1.2E8 1.4E8;1.8E8 2.0E8;0.0005 0.0015;0.01 0.03];
%**********************************************************************
% Inverse Problem for System Identification using SIR Filter:
% =================================================================
load msrmt1.dat
Response = [msrmt1(:,1) msrmt1(:,4) msrmt1(:,5) msrmt1(:,6) msrmt1(:,7)];
t = msrmt1(:,1);
nt = length(t);
xg_t = msrmt1(:,2);
yg_t = msrmt1(:,3);
dof = length(m);
Nu = dof+2; % No. of Unknown
% Memory allocation:
% ==================================
% R_samp = zeros(N_p,Nu);
weight = zeros(N_p,nt-1);
w_n = zeros(N_p,nt);
iden_para = zeros(nt-1,Nu);
% Generating initial particles:
% ==================================
% for ii = 1:Nu
% R_samp(:,ii) = random('unif',RP(ii,1),RP(ii,2),N_p,1);
% end
load particles_1brns.dat
R_samp = particles_1brns;
% SIR Algorithm
% ==================================
Inc(:,:,N_p) = zeros(dof,2);
Ifl = [-1 0;0 -1;-1 0;0 -1;-1 0;0 -1;-1 0;0 -1];
for i = 1:N_p
w_n(i,1) = 1/N_p;
end
for ii = 2:nt
ii
for jj = 1:N_p
k1 = R_samp(jj,1:dof);
c1 = R_samp(jj,dof+1:Nu);
[M_mat,K_mat] = BRNS_FB_Matrices(m,k1);
C_mat = c1(1).*M_mat+c1(2).*K_mat;
tt = t(ii-1:ii,1);
Ft = M_mat*Ifl*[xg_t(ii-1:ii)';yg_t(ii-1:ii)'];
Incd = Inc(:,:,jj);
[U,Ud,Udd] = Newmark_Beta_MDOF(M_mat,K_mat,C_mat,Incd,tt,Ft);
Inc_cond = [U(:,2) Ud(:,2)];
C(:,:,jj) = Inc_cond;
% Estimate Likelihood of Simulation and updating weights:
% ======================================================
weight(jj,ii) = w_n(jj,ii-1)*(1/sqrt(2*pi*x_R))*exp(-...
((Response(ii,2)-Udd(1,2))^2)/(2*(x_R)))*...
(1/sqrt(2*pi*x_R))*exp(-...
((Response(ii,3)-Udd(2,2))^2)/(2*(x_R)))*...
(1/sqrt(2*pi*x_R))*exp(-...
((Response(ii,4)-Udd(7,2))^2)/(2*(x_R)))*...
(1/sqrt(2*pi*x_R))*exp(-...
((Response(ii,5)-Udd(8,2))^2)/(2*(x_R)));
end
Inc_prior = C;
w_n(:,ii) = weight(:,ii)./sum(weight(:,ii));
Neff = 1/sum(w_n(:,ii).^2);
resample_percentage = 0.8;
Nt = resample_percentage*N_p;
if Neff < Nt
Ind = 1;
% Resampling step : Adaptive control
% ==================================
disp('Resampling ...')
[R_samp,index] = Resampling(R_samp,w_n(:,ii)',Ind);
Inc_prior = C(:,:,index);
for a = 1:N_p
w_n(a,ii) = 1/N_p;
end
end
% Estimating the parameters:
% ==================================
for p = 1:Nu
iden_para(ii,p) = sum(w_n(:,ii).*R_samp(:,p));
end
end
k_inv = iden_para(nt,1:dof);
[M_mat,K_mat]=BRNS_FB_Matrices(m,k_inv);
[Phi_in,D] = eig(K_mat,M_mat);
wn_inv = sqrt(diag(D))/(2*pi) % Natural frequency in Hz
cpu_time = cputime-tm
MATLAB code for system identification of BRNS Building (Fixed Base) using
Bootstrap filter
This code estimates all the 10 unknown parameters of the fixed base RC framed BRNS building
using the Bootstrap filter (BF).
% =========================================================================
% PROGRAMMER : ANSHUL GOYAL & ARUNASIS CHAKRABORTY
% DATE : 02.01.2014(Last modified: 25:01:2014)
% ABSTRACT : Bootstrap Particle Filter Code for LTI System Identification.
%
%
% =========================================================================
clear all
close all
clc;
tm = cputime;
% *************************************************************************
% Input Section:
% ==============
m = [27636.9724 27636.9724 25618.62385 25618.62385 25618.62385 ...
25618.62385 17805.65745 17805.65745];
k = [130215257.8 198788000.6 230377923.8 344149172 230377923.7...
344149172 130215257.9 198788001];
alf = 0.001;
bta = 0.02;
SNR = 0.01;
N_p = 150; % No. of Particles
x_R = 0.01; % Error Covariance
RP = [1.2E8 1.4E8;1.8E8 2.0E8;2.2E8 2.4E8;3.0E8 4.0E8;2.2E8 2.4E8;3.0E8...
4.0E8;1.2E8 1.4E8;1.8E8 2.0E8;0.0005 0.0015;0.01 0.03];
Ind = 1; % Indicator for different resampling strategy
% % ***********************************************************************
% % Forward Problem for Synthetic Measurements:
% % ===========================================
%
% load El_Centro_EW.dat
%
% t = El_Centro_EW(:,1);
% nt = length(t);
% xg_t = El_Centro_EW(:,2);
% yg_t = 0.5.*xg_t;
%
% dof = length(m);
% Inc = zeros(dof,2);
%
% [M_mat,K_mat] = BRNS_FB_Matrices(m,k);
% C_mat = alf.*M_mat+bta.*K_mat;
%
% % Eigen Analysis for Modal Parameters:
% % ====================================
%
% [Phi,Lam] = eig(K_mat,M_mat);
% wn = diag(sqrt(Lam))./(2*pi);
%
% % Direct Time Integration for Response:
% % =====================================
%
% Ifl = [-1 0;0 -1;-1 0;0 -1;-1 0;0 -1;-1 0;0 -1];
% Ft = M_mat*Ifl*[xg_t';yg_t'];
%
% [U,Ud,Udd] = Newmark_Beta_MDOF(M_mat,K_mat,C_mat,Inc,t,Ft);
%
% % Add Noise to Simulate Synthetic Data:
% % =====================================
%
% Mean_Signal = mean(Udd,2);
% SD_Noise = Mean_Signal./SNR;
% Syn_Recd = zeros(nt,dof);
% for ii = 1:dof
% Syn_Recd(:,ii) = Udd(ii,:)'+SD_Noise(ii).*randn(nt,1);
% end
%
% Out_put = [t Syn_Recd];
%
% save -ascii Response.dat Out_put
%
% % Plot Responses:
% % ===============
%
% figure
% subplot(2,1,1)
% plot(t,xg_t)
% xlabel('t (s)');ylabel('Accn. (g)');
% title('Support Motion:')
% subplot(2,1,2)
% plot(t,Syn_Recd(:,8))
% xlabel('t (s)');ylabel('Accn. (g)');
% title('Measured Response:')
%
% pause
% *************************************************************************
load El_Centro_EW.dat
load Response.dat
t = El_Centro_EW(:,1);
nt = length(t);
xg_t = El_Centro_EW(:,2);
yg_t = 0.5.*xg_t;
dof = length(m);
Nu = dof+2; % No. of Unknown
R_samp = zeros(N_p,Nu);
for ii = 1:Nu
R_samp(:,ii) = random('unif',RP(ii,1),RP(ii,2),N_p,1);
end
Mean_Estimate = zeros(nt,Nu);
Std_Estimate = zeros(nt,Nu);
Weights = zeros(N_p,dof);
for ii = 1:Nu
Mean_Estimate(1,ii) = mean(R_samp(:,ii));
Std_Estimate(1,ii) = std(R_samp(:,ii));
end
% Bootstrap Algorithm:
% ====================
Inc(:,:,N_p) = zeros(dof,2);
Ifl = [-1 0;0 -1;-1 0;0 -1;-1 0;0 -1;-1 0;0 -1];
for ii = 2:nt
ii
for jj = 1:N_p
k1 = R_samp(jj,1:dof);
c1 = R_samp(jj,dof+1:Nu);
[M_mat,K_mat] = BRNS_FB_Matrices(m,k1);
C_mat = c1(1).*M_mat+c1(2).*K_mat;
tt = t(ii-1:ii,1);
Ft = M_mat*Ifl*[xg_t(ii-1:ii)';yg_t(ii-1:ii)'];
Incd = Inc(:,:,jj);
[U,Ud,Udd] = Newmark_Beta_MDOF(M_mat,K_mat,C_mat,Incd,tt,Ft);
Inc_cond = [U(:,2) Ud(:,2)];
C(:,:,jj) = Inc_cond;
% Estimate Likelihood of Simulation:
% ==================================
for kk = 1:dof
Weights(jj,kk)=(1/sqrt(2*pi*x_R))*exp(-...
((Response(ii,kk+1)-Udd(kk,2))^2)/(2*(x_R)));
end
end
Wt = prod(Weights,2);
weight = (Wt./sum(Wt))';
% Resampling:
% ===========
[R_samp,index] = Resampling(R_samp,weight,Ind);
Inc = C(:,:,index);
for kk = 1:Nu
Mean_Estimate(ii,kk) = mean(R_samp(:,kk));
Std_Estimate(ii,kk) = std(R_samp(:,kk));
end
end
% Eigen Analysis of Identified System:
% ====================================
k_inv = Mean_Estimate(end,1:dof);
c_inv = Mean_Estimate(end,dof+1:end);
[M_mat,K_mat]=BRNS_FB_Matrices(m,k_inv);
[Phi_in,D] = eig(K_mat,M_mat);
wn_inv = sqrt(diag(D))/(2*pi) % Natural frequency in Hz
% *************************************************************************
% Plots:
% ======
figure
subplot(2,1,1)
plot(t,Mean_Estimate(:,1)./k(1),'m')
hold on
plot(t,Mean_Estimate(:,2)./k(2),'-.b')
plot(t,Mean_Estimate(:,3)./k(3),'--g')
legend('DOF 1','DOF 2','DOF 3')
xlabel('t (s)');ylabel('K (kN/m)');
title('Identified Stiffness:')
subplot(2,1,2)
plot(t,Std_Estimate(:,1),t,Std_Estimate(:,2),'-.',t,Std_Estimate(:,3),'--')
legend('DOF 1','DOF 2','DOF 3')
xlabel('t (s)');ylabel('K (kN/m)');
title('Std. in Stiffness Simulation:')
figure
subplot(2,1,1)
plot(t,Mean_Estimate(:,9)./alf,'m')
hold on
plot(t,Mean_Estimate(:,10)./bta,'-.b')
legend('Alfa','Beta')
xlabel('t (s)');ylabel('alpha & beta');
title('Identified Damping Parameters:')
subplot(2,1,2)
plot(t,Std_Estimate(:,9),t,Std_Estimate(:,10),'--')
legend('Alfa','Beta')
xlabel('t (s)');ylabel('alpha & beta');
title('Std. in Damping Simulation:')
% *************************************************************************
cpu_time = cputime-tm
% *************************************************************************
% End Program.
% *************************************************************************
MATLAB Code for Resampling Algorithms
function [RS_Rand_No,index] = Resampling(Rand_No,Weights,Ind)
% =========================================================================
% PROGRAMMER : ANSHUL GOYAL & ARUNASIS CHAKRABORTY
% DATE : 02.01.2013(Last modified: 26:01:2014)
% ABSTRACT : Resampling routines used by the particle filters.
%
% Input/Output argument -
% [RS_Rand_No,index] = Resampling(Rand_No,Weights,Ind)
%
% input:
% ======
% Ind: Indicator of the resampling algorithm
% Weights: normalized weights obtained from the likelihood calculation
% Rand_No: random sample (particles) at a particular time step t
% output:
% =======
% index: indices of the resampled particles
% RS_Rand_No: resampled particles
%
% =========================================================================
if Ind == 1 % LHS
Ns = length(Weights); % Number of Particles
edges = min([0 cumsum(Weights)],1); % protect against round off
edges(end) = 1; % get the upper edge exact
UV = rand/Ns;
[~,index] = histc(UV:1/Ns:1,edges);
NRN = length(Rand_No(1,:));
RS_Rand_No = zeros(Ns,NRN);
for ii = 1:NRN
RN = Rand_No(:,ii);
RS_Rand_No(:,ii) = RN(index);
end
elseif Ind == 2 % Systematic
Ns = length(Weights);
u = ((0:Ns-1)+rand(1))/Ns;
wc = cumsum(Weights);
index = 1;
index_f = zeros(1,Ns);
for i = 1:Ns
while(wc(index)<u(i))
index = mod(index+1,Ns);
if index == 0
index = Ns;
end
end
index_f(i) = index;
end
index = index_f;
NRN = length(Rand_No(1,:));
RS_Rand_No = zeros(Ns,NRN);
for ii = 1:NRN
RN = Rand_No(:,ii);
RS_Rand_No(:,ii) = RN(index);
end
elseif Ind == 3 % Stratified
Ns = length(Weights);
u = ((0:Ns-1)+(rand(1,Ns)))/Ns;
wc = cumsum(Weights);
index = 1;
index_f = zeros(1,Ns);
for i = 1:Ns
while(wc(index)<u(i))
index = mod(index+1,Ns);
if index == 0
index = Ns;
end
end
index_f(i) = index;
end
index = index_f;
NRN = length(Rand_No(1,:));
RS_Rand_No = zeros(Ns,NRN);
for ii = 1:NRN
RN = Rand_No(:,ii);
RS_Rand_No(:,ii) = RN(index);
end
elseif Ind == 4 % Simple
Ns = length(Weights);
u = cumprod(rand(1,Ns).^(1./(Ns:-1:1)));
u = u(Ns:-1:1);
wc = cumsum(Weights);
index = 1;
index_f = zeros(1,Ns);
for i = 1:Ns
while (wc(index)<u(i))
index = mod(index+1,Ns);
if index == 0
index = Ns;
end
end
index_f(i) = index;
end
index = index_f;
NRN = length(Rand_No(1,:));
RS_Rand_No = zeros(Ns,NRN);
for ii = 1:NRN
RN = Rand_No(:,ii);
RS_Rand_No(:,ii) = RN(index);
end
elseif Ind == 5 % Wheel
Ns = length(Weights);
index = unidrnd(Ns);
beta = 0;
mw = max(Weights);
index_f = zeros(1,Ns);
for i = 1:Ns
beta = beta + 2*mw*rand(1);
while(beta>Weights(index))
beta = beta - Weights(index);
index = mod(index + 1,Ns);
if index == 0
index = Ns;
end
end
index_f(i) = index;
end
index = index_f;
NRN = length(Rand_No(1,:));
RS_Rand_No = zeros(Ns,NRN);
for ii = 1:NRN
RN = Rand_No(:,ii);
RS_Rand_No(:,ii) = RN(index);
end
else
disp('Use proper Ind number for resampling.')
end
return
% *************************************************************************
% End Program.
% *************************************************************************
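The following is a minimal usage sketch (illustrative only, not part of the thesis code) showing how the Resampling function above may be called with a small set of particles and normalized weights; the particle values and weights are arbitrary numbers chosen for the example.

% Example (illustrative only): resample five 2-column particles using
% systematic resampling (Ind = 2).
Rand_No = [1 10; 2 20; 3 30; 4 40; 5 50];   % arbitrary particle values (N_p x Nu)
Weights = [0.05 0.05 0.10 0.30 0.50];       % normalized weights (sum to 1)
Ind = 2;                                    % 2 = systematic resampling
[RS_Rand_No,index] = Resampling(Rand_No,Weights,Ind);
% Particles with larger weights (rows 4 and 5) tend to be duplicated in
% RS_Rand_No, while low-weight particles tend to be dropped.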
MATLAB code for solving the second order differential equation using the
Newmark-β algorithm
function [U,Ud,Udd] = Newmark_Beta_MDOF(M,K,C,Inc,t,F_t,delta,alpha)
% =========================================================================
% PROGRAMMER : ARUNASIS CHAKRABORTY
% DATE : 02.01.2013(Last modified: 25:01:2014)
% ABSTRACT : This function computes the response of a MDOF system using
% Newmark-Beta method. For details, see page 780 in Bathe's
% Book. This code is for any general MDOF model excited by
% general force or support motions. Change the nargins as
% required.
%
% Input/Output argument -
% [U,Ud,Udd] = Newmark_Beta_MDOF(M,K,C,Inc,t,F_t,delta,alpha)
%
% input:
% ======
%
% M = Mass Matrix
% K = Stiffness Matrix
% C = Damping Matrix
% Inc = Initial Conditions
% t = time in column vector
% F_t = Force in Different Degrees of Freedom. The format
% of the Data is dof*nt
% delta = constant in Newmark method (default is 1/2)
% alpha = constant in Newmark method (default is 1/4)
%
% output:
% =======
%
% U = displacement
% Ud = velocity
% Udd = Acceleration
%
% =========================================================================
if nargin == 6
delta = 1/2;
alpha = 1/4;
%disp('Using default values: delta = 1/2 & alpha = 1/4');
elseif nargin == 8
if delta ~= 1/2
disp('Warning: you are using delta not equal to 1/2');
end
elseif nargin < 6 || nargin > 8
error('Wrong number of input variables');
end
dof = length(diag(M));
nt = length(t);
dt = t(2)-t(1);
U_0 = Inc(:,1);
Ud_0 = Inc(:,2);
a0 = 1/(alpha*dt^2);
a1 = delta/(alpha*dt);
a2 = 1/(alpha*dt);
a3 = 1/(2*alpha)-1;
a4 = delta/alpha-1;
a5 = (dt/2)*(delta/alpha-2);
a6 = dt*(1-delta);
a7 = delta*dt;
K_hat = K+a0*M+a1*C;
U = zeros(dof,nt);
Ud = zeros(dof,nt);
Udd = zeros(dof,nt);
U(:,1) = U_0;
Ud(:,1) = Ud_0;
for ii = 2:nt
Ut = U(:,(ii-1));
Udt = Ud(:,(ii-1));
Uddt = Udd(:,(ii-1));
R = F_t(:,ii);
R_hat = R+M*(a0*Ut+a2*Udt+a3*Uddt)+C*(a1*Ut+a4*Udt+a5*Uddt);
U(:,ii) = K_hat\R_hat;
Udd(:,ii) = a0*(U(:,ii)-Ut)-a2*Udt-a3*Uddt;
Ud(:,ii) = Udt+a6*Uddt+a7*Udd(:,ii);
end
return
% *************************************************************************
% End Program.
% *************************************************************************
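A minimal usage sketch (illustrative only; the system and loading values are arbitrary and not taken from the thesis) showing a call to Newmark_Beta_MDOF for a 2-DOF system with a sinusoidal force on the first degree of freedom and the default Newmark constants:

% Example (illustrative only): response of a simple 2-DOF system.
M = diag([10 10]);                                % mass matrix (kg)
K = [2000 -1000; -1000 1000];                     % stiffness matrix (N/m)
C = 0.05.*K;                                      % stiffness-proportional damping
t = (0:0.01:10)';                                 % time vector (column)
F_t = [50.*sin(2*pi*1.*t)'; zeros(1,length(t))];  % force history, size dof x nt
Inc = zeros(2,2);                                 % zero initial displacement and velocity
[U,Ud,Udd] = Newmark_Beta_MDOF(M,K,C,Inc,t,F_t);  % uses delta = 1/2, alpha = 1/4
plot(t,U(1,:)); xlabel('t (s)'); ylabel('U_1 (m)');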
MATLAB code for obtaining the mass and stiffness matrices of a multi-degree of
freedom system
function [M_mat,K_mat] = BRNS_FB_Matrices(m,k)
% =========================================================================
% PROGRAMMER : ARUNASIS CHAKRABORTY
% DATE : 17.08.2013(Last modified: 21:09:2013)
% ABSTRACT : This function evaluates the mass and stiffness
% matrices of the BRNS Fixed Base building.
%
% Input/Output argument -
% [M_mat,K_mat] = BRNS_FB_Matrices(m,k)
%
% input:
% ======
%
% m = Mass in different dof
% k = Stiffness in different dof
%
% output:
% =======
%
% M_mat = Mass Matrix
% K_mat = Stiffness Matrix
% =========================================================================
dof = length(m);
node_dof = 2;
% Mass Matrix
% ===========
M_mat = diag(m);
% Stiffness Matrix
% ================
K_mat = zeros(dof);
K_mat(1,1) = k(1)+k(node_dof+1);
K_mat(2,2) = k(2)+k(node_dof+2);
K_mat(1,node_dof+1) = -k(node_dof+1);
for ii = 2:dof-2
K_mat(ii+node_dof-1,ii-node_dof+1) = -k(ii+node_dof-1);
K_mat(ii,ii) = k(ii)+k(ii+2);
K_mat(ii,ii+node_dof) = -k(ii+node_dof);
end
K_mat(dof-1,dof-1) = k(dof-1);
K_mat(dof,dof-node_dof) = -k(dof);
K_mat(dof,dof) = k(dof);
% % Damping Matrix:
% % ===============
% C_mat = zeros(dof);
% C_mat(1,1) = c(1)+c(2);
% C_mat(1,2) = -c(2);
% for ii = 2:dof-1
% C_mat(ii,ii-1) = -c(ii);
% C_mat(ii,ii) = c(ii)+c(ii+1);
% C_mat(ii,ii+1) = -c(ii+1);
% end
% C_mat(dof,dof-1) = -c(dof);
% C_mat(dof,dof) = c(dof);
return
% *************************************************************************
% End Program.
% *************************************************************************
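A minimal usage sketch (illustrative only): assembling the BRNS system matrices with the mass and stiffness values used in the scripts above and recovering the undamped natural frequencies.

% Example (illustrative only): assemble the 8-DOF BRNS matrices and
% compute the undamped natural frequencies in Hz.
m = [27636.9724 27636.9724 25618.62385 25618.62385 25618.62385 ...
    25618.62385 17805.65745 17805.65745];
k = [130215257.8 198788000.6 230377923.8 344149172 230377923.7 ...
    344149172 130215257.9 198788001];
[M_mat,K_mat] = BRNS_FB_Matrices(m,k);
[Phi,D] = eig(K_mat,M_mat);   % generalized eigenvalue problem
wn = sqrt(diag(D))/(2*pi)     % natural frequencies in Hz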
94

thesis

  • 1.
    Sequential MCMC Methodsfor Parameter Estimation of LTI Systems Subjected to Non-stationary Earthquake Excitations A report submitted in partial fulfillment of the requirements for the degree of Bachelor Of Technology in Civil Engineering Submitted by Anshul Goyal (10010410) Under the supervision of Dr. Arunasis Chakraborty DEPARTMENT OF CIVIL ENGINEERING INDIAN INSTITUTE OF TECHNOLOGY GUWAHATI April, 2014
  • 2.
    Certificate It is certifiedthat the work contained in the project report entitled “Sequential MCMC Methods for Parameter Estimation of LTI Systems Subjected to Non-stationary Earthquake Excitations”, by Anshul Goyal (10010410) has been carried out under my supervision and that this work has not been submitted elsewhere for the award of a degree or diploma. Date: Dr. Arunasis Chakraborty Associate Professor Department of Civil Engineering Indian Institute of Technology Guwahati i
  • 3.
    Acknowledgements I would liketo express my sincere thanks and gratitude to my project supervisor Dr. Aruna- sis Chakraborty for his guidance, motivation and support throughout the course of the project work. The thoughts and suggestions have been very useful to shape my current work in the best possible way. Moreover, the regular talks and discussions have helped me to establish my future goals. Throughout the project work he was always approachable and I enjoyed working under his able guidance. Next, I would like to thank Prof. Anjan Dutta and Prof S.K.Deb for providing the necessary field data for carrying out the simulation. I am also thankful to the HOD (Prof. Arup Kumar Sharma) of the Department of Civil Engineering IIT Guwahati for providing the opportunity and the facilities to complete my project work. I would also like to thank Mr. Swarup Mahato who is currently pursuing his PhD under Dr. Chakraborty for all his help and discussions during the project work. My friends and colleagues have always supported me during the entire project work. Finally, I thank my family for their kind support and co-operation. Date: Anshul Goyal (10010410) IIT Guwahati,India ii
  • 4.
    Abstract In this report,sequential Markov Chain Monte Carlo (MCMC) simulation based algorithms (a.k.a Particle filters) are used for parameter estimation of a linear second order dynamical system. In comparison to Kalman filters, they are more general and applicable to systems where model and measurement equations are highly nonlinear. The present study mainly fo- cuses on the implementation of Sequential importance sampling (SIS),Sequential importance Resampling (SIR) and Bootstrap filter (BF) for identifying the parameters of a three storied shear building model and a fixed base multi storied RC framed building in IIT Guwahati referred as BRNS buiding. All the three algorithms have been implemented for synthetic as well as the field measurement data. The synthetic study has been carried out using the three storied model whereas field data is used for BRNS building.Using these measurements, the parameters identified are the stiffness and damping at all the degrees of freedom. Initially random values (i.e. particles) of these parameters are generated from a pre-selected proba- bility distribution function (e.g. uniform distribution). Each particle is then passed through the model equation and the state is updated using the measurement at every time step. A weight is then assigned to each particle by evaluating their likelihood to the measurement. Once the likelihoods for all the particles are evaluated, the new sample for the next iteration is drawn from the simulated initial pool of particles. All the three filters, SIS,SIR and BF have been compared on the basis of identified natural frequency of the structures in all the modes as well as iterative steps required for convergence of parameter values.Furthermore, four different traditional re-sampling strategies (e.g. multinomial, wheel, systematic and stratified) are used to test their relative performance while using the resampling step in BF. The performances of the re-sampling algorithms are compared on the basis of number of convergence steps and the accuracy of the identified parameters as well as natural frequen- cies. It is observed that systematic and stratified re-samplings are superior in comparison to other re-sampling algorithms. Issues like degeneracy and sample impoverishment have been explained with the help of a SDOF oscillator example. iii
  • 5.
    Contents Certificate i Acknowledgements ii Abstractiii Contents iv List of Figures vi List of Tables viii List of Symbols and Abbreviations ix List of Symbols and Abbreviations ix 1 Introduction 1 1.1 Literature review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 1.2 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 1.3 Organization of report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 2 Dynamic State Estimation 9 2.1 Bayesian Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 2.1.1 Bayesian Model Updating . . . . . . . . . . . . . . . . . . . . . . . . 10 iv
  • 6.
    2.2 Monte CarloMethods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 2.2.1 Perfect Sampling & Sequential Importance Sampling (SIS) . . . . . . 14 2.2.2 Sequential Importance Resampling (SIR) & Bootstrap Filter . . . . . 19 2.3 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 3 Parameter Estimation of LTI Systems 31 3.1 System Identification of Linear Time Invariant (LTI) Systems . . . . . . . . 31 3.1.1 Synthetic Experiment . . . . . . . . . . . . . . . . . . . . . . . . . . . 34 3.1.2 BRNS Building . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36 4 Conclusion 57 4.1 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57 4.2 Future work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58 References 61 Appendix 62 v
  • 7.
    List of Figures 1.1Dynamic System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 1.2 Schematic flowchart of system identification (Source:Soderstrom (2001)) . . . 8 2.1 Schematic diagram of SDOF system . . . . . . . . . . . . . . . . . . . . . . . 25 2.2 Ground excitation due to Elcentro earthquake . . . . . . . . . . . . . . . . . 25 2.3 Response of oscillator to ground excitation . . . . . . . . . . . . . . . . . . . 26 2.4 Estimation of ratio of identified stiffness to original stiffness as function of time 26 2.5 Evolution of weights of particles over time . . . . . . . . . . . . . . . . . . . 27 2.6 Evolution of posterior density with time . . . . . . . . . . . . . . . . . . . . 27 2.7 States estimation from the original and the identified system . . . . . . . . . 28 2.8 Posterior evolution of distribution at iteration number a) initial, b) interme- diate (100) and c) final . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28 2.9 Mean and Standard Deviation of the identified stiffness parameter . . . . . . 29 2.10 Expected value of stiffness by addition of noise (2% expected value) . . . . . 29 2.11 Convergence of the stiffness due to addition of noise (2% expected value) . . 30 3.1 Plan and Elevation of Synthetic Model . . . . . . . . . . . . . . . . . . . . . 39 3.2 Plan and Elevation of BRNS Building . . . . . . . . . . . . . . . . . . . . . . 40 3.3 Ground motion excitations a) Elcentro, b)Lomaprieta, c) Chichi, d) Kobe and e) Parkfield . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41 3.4 Response of model: Lomaprieta earthquake . . . . . . . . . . . . . . . . . . . 41 vi
  • 8.
    3.5 Response ofmodel: Elcentro earthquake . . . . . . . . . . . . . . . . . . . . 42 3.6 Ratio of identified to original parameters:SIS Filter: Elcentro earthquake . . 42 3.7 Ratio of identified to original parameters:SIR Filter: Elcentro earthquake . . 43 3.8 Ratio of identified stiffness to original stiffness: Elcentro earthquake . . . . . 43 3.9 Standard deviation of stiffness: Elcentro earthquake . . . . . . . . . . . . . . 44 3.10 Ratio of identified stiffness to original stiffness: Lomaprieta earthquake . . . 44 3.11 Standard deviation of stiffness: Lomaprieta earthquake . . . . . . . . . . . . 45 3.12 Original and estimated states of model: El-Centro earthquake . . . . . . . . 45 3.13 Mode shape of the original and identified structure . . . . . . . . . . . . . . 46 3.14 Response of the synthetic model due to addition of noise for El-Centro and Lomaprieta earthquake . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46 3.15 Ground excitation due to recorded earthquakes on 03/09/2009 . . . . . . . . 47 3.16 Ground excitation due to recorded earthquakes on 21/09/2009 . . . . . . . . 47 3.17 Response of BRNS building to multicomponent earthquake recorded on 03/09/2009 48 3.18 Response of BRNS building to multicomponent earthquake recorded on 21/09/2009 48 3.19 Ratio of identified stiffness to original stiffness at all the floor levels . . . . . 49 3.20 Coffiecients α and β & the convergence of damping coefficients . . . . . . . . 50 3.21 First four true modes and estimated modes of BRNS building . . . . . . . . 50 3.22 Original and estimated states of BRNS building a) first storey x direction, b) First story y direction, c) top storey x direction and d) top storey y direction 51 vii
  • 9.
    List of Tables 3.1Parameter values for solving the forward problem . . . . . . . . . . . . . . . 50 3.2 Comparison of SIS,SIR and Bootstrap filter on the basis identified frequency in three modes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51 3.3 Comparison of SIS,SIR and Bootstrap filter on the basis number of conver- gence steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51 3.4 Comparison of different resampling algorithms on the basis of identified value of natural frequency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52 3.5 Ratio of identified value of parameters to original value and comparison on the basis of convergence steps . . . . . . . . . . . . . . . . . . . . . . . . . . 53 3.6 Ratio of identified value of parameters to original value and comparison on the basis of convergence steps . . . . . . . . . . . . . . . . . . . . . . . . . . 53 3.7 Sensitivity analysis due to addition of noise with different SNR . . . . . . . . 54 3.8 Original parameters of the BRNS building . . . . . . . . . . . . . . . . . . . 54 3.9 Comparison of SIS, SIR and Bootstrap filter on the basis of identified values of natural frequency in eight modes . . . . . . . . . . . . . . . . . . . . . . . 55 3.10 Identified frequency for BRNS building in all the eight modes using the all the resampling algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55 3.11 Comparison of resampling algorithms on the basis of %error in identified nat- ural frequency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56 viii
  • 10.
    List of Symbolsand Abbreviations Symbol Description p Probability of occurrence of event pdf Probability density function X(t) Vector denoting the state of the system q(.) Function that relate the input and output wk White noise process denoting the model noise vk White noise process denoting the measurement noise hk Non-linear function that relates the measurements to system states at time k Yk Measurement at time k Mk Vector comprising the set of measurement till time k p(Xk|Mk) Pdf of system state conditioned on measurements till time k p(Xk|Xk−1) Pdf of system’s state at k, conditioned on system’s state at k-1 δ(.) Delta function ϕ System parameter vector to be identified M Mass Matrix K Stiffness Matrix C Damping Matrix u Nodal displacement ˙u Nodal velocity ¨u Nodal acceleration τ(.) Function form which relates the state of the system to its first derivative with respect to µ Mean value σ Standard deviation value E Expectation operator δ Dirac delta function Eq Expectation operation with samples drawn from distribution q(.) ix
  • 11.
    ˜wk Normalized weights ¨ugGround motion excitations Z(t) State of the vibrating system at time t α, β Damping coefficients considering the Rayleigh damping ζi Modal damping ratio in ith mode wi Natural frequency in the ith mode LTI Linear time invariant PGA Peak ground acceleration RC Reinforced concrete MCMC Markov chain monte carlo BRNS Board of research in nuclear sciences SDOF Single degree of freedom RC Reinforced concrete EKF Extended Kalman Filter SIS Sequential importance sampling SIR Sequential importance resampling BF Bootstrap filter SNR Signal to noise ratio x
  • 12.
    Chapter 1 Introduction System identificationis the field of mathematical modeling of the inverse problem from the experimental data. It has acquired widespread applications in several areas like controls and systems engineering where system identification methods are used to get appropriate models for synthesis of a regulator, design of prediction algorithm and in signal processing appli- cations (such as in communications, geophysical engineering and mechanical engineering). Models obtained by system identification are used for spectral analysis, fault detection, pat- tern recognition, adaptive filtering, linear prediction and other purposes. These techniques are also successfully used in other fields such as biology, environmental sciences and econo- metrics to develop models for increasing scientific knowledge on the identified object, or for prediction and control. A dynamic system can be conceptually described in Fig 1.1. The system is driven by user controlled input variables u(t) while disturbances v(t) cannot be controlled. The output y(t) provide useful information about the system. Figure 1.1: Dynamic System There are several kinds of mathematical models used for solving the inverse problem which are mostly governed by the underlying differential equations. The mathematical models can be segregated into two paradigms • Modeling, which refers to derivation of models from the basic laws of physics. Often, 1
  • 13.
    one uses fundamentalbalance equations for range of variables like energy, force, mass etc. • Identification, which refers to the determination of the model parameters from the ex- perimental data. It includes the set up of identification experiment i.e data acquisition and determination of a suitable form of the model which is fitted to the recorded data by assigning suitable numerical values to its parameters. Though system identification methods are useful for large and complex structures where it is difficult to obtain the mathematical models directly, it has some limitations. They have a limited validity i.e they are valid for a certain working point, a certain type of input, a certain process,etc. Identification is not a foolproof methodology that can be used without interaction from the user. The reasons for this are • An appropriate model structure must be found. This can be a difficult problem, par- ticularly if the dynamics of structure is non-linear. • The real life recorded data is not perfect always as these are always disturbed by noises. • The process may vary with time, which can cause problems if an attempt is made to describe it with a time invariant model. How to apply System Identification In general terms an identification experiment is performed by exciting the system and ob- serving its output over an interval of time. These signals are normally recorded in a computer mass storage for subsequent information processing. We then try to fit a parametric model of the process to the recorded input and output sequences. The first step is to determine an appropriate form of the model (typically a differential equation of certain order). In the second step, several statistical approaches are used to estimate the unknown parameters of the model. This estimation is often done iteratively. The model obtained is then tested to see whether it is an appropriate representation of the system. If this is not the case, some more complex model structure is considered, its parameters estimated and validated again. Fig 1.2 shows the schematic of the steps used in system identification. Following the above discussion, there are two main purposes of model updating or system identification of the structural system. The common goal is to identify the physical param- eters eg (stiffness) of a structural element. These identified parameters can further be used 2
  • 14.
    Literature review as indicatorfor the status of the system. For example, stiffness parameter of a structural member can be monitored from time to time and an abnormal reduction indicates possible damage of the member.But this reduction may also be simply due to statistical uncertainty. Hence the quantification of uncertainty becomes important. Another purpose of model up- dating can be to obtain a mathematical model to represent the underlying system for future prediction. This is broadly known as Structural Health Monitoring.Another important area of application of system identification is structural vibration control which has received great attention in the last several decades (Housner et al., 1997). 1.1 Literature review System identification has remained an active area of research over last two decades. Many researchers have come up with various methods and have solved several problems ranging from experimental models to real life large scale structures. The general approach can be divided in following categories. • Conventional model-based approaches • Time domain identification methods • Biologically inspired approaches such as neural network and genetic algorithm • Time-frequency based approaches using Wavelet,Hilbert Transform • Chaos theory Conventional model-based approaches for system identification typically use a computer model of the structure, such as a Finite-Element Method (FEM) model, to identify structural parameters primarily from field or laboratory test data. Damage identification in beams is a common theme in system identification. (Kim and Stubbs, 2002) studied damage identifica- tion of a two-span continuous beam using modal information.(Lee and Shin, 2002) detected the changes in the stiffness of beams based on a frequency-based response function.Model- based system identification methods cannot be used effectively for large and complicated real-world structures with nonlinear behavior. For such cases, biologically-inspired or soft computing techniques such as Neural Networks , Genetic Algorithms (GA), or particle swarm optimization have been proposed as a more effective approach.(Franco et al., 2004) used an evolutionary algorithm to identify the structural parameters of a 10-DOF shear frame.(Raich 3
  • 15.
    Literature review and Liszkai,2007) used Genetic algorithm to identify the stiffness changes in a steel beam and a 3-story, 3 bay frame. In the past two decades because of their ability to retain both time and frequency information, wavelets have been used increasingly to solve complicated time series pattern recognition problems in different areas.(Liew and Wang, 1998) used wavelets to identify cracks in simply supported beams.(Bao et al., 2009) employed the Hilbert-Huang transform for system identification of concrete-steel composite beams. A few researchers have employed the chaos theory to model the complicated structural dynamics for system identification. However, in this report the main focus of the literature review is on the time domain dy- namic state estimation methods. The dynamic state estimation methods derive their origin from the Bayesian Methods. Bayesian theory was originally discovered by the British re- searcher Thomas Bayes in a publication (Bayes, 1763). The methods have been widely used in many areas due to the pioneering work done by Thomas Bayes. The modern form of the theory was rediscovered by French mathematician Simon de Laplace in in Theorieanalytiquedesprobailites. One of the earliest researches in iterative Bayesian es- timation can be found in(Ho and Lee, Oct. 1964). (Spragins, 1965) discussed the iterative application of Bayes rule to sequential parameter estimation and called it as ”Bayesian learn- ing”. The methods for dynamic state estimation can be categorized into two groups. The first includes the well-known Kalman filter (Kalman, 1960) & its variants and the other is the Monte Carlo simulation based algorithms named as particle filters (Gordon et al., 1993). The Kalman filter provides an exact solution to the problem of state estimation for linear Gaussian state space models. The most popular variant of Kalman filter is EKF, where linearization of the process equation is done to provide a Gaussian approximation of what really is a non-Gaussian quantity (Hoshiya and Saito, 1984). (Ghanem and Shinozuka, 1995) provided a review of methods of system identification by application to experimental data obtained on three and five-story steel building structures subjected to seismic loading, including the EKF, maximum-likelihood technique, recursive least-squares, and recursive instrumental-variable method. (Moaveni et al., 2011) examined six variations of model-based approach including, data- driven stochastic subspace identification, frequency domain decomposition, observer/Kalman filter identification, and general realization algorithm for system identification of a full- scale 7-story RC building structure subjected to shake table loading, and concluded that probabilistic system identification methods in connection with FE model updating provide 4
  • 16.
    Literature review the mostdesirable results. The second group of methods known as the Monte carlo methods begin by considering the exact form of the recursive integral equations that govern the evolution of filtering pdf and employ Monte Carlo simulation procedures to solve these equations in a recursive manner by approximating the complex integrals. Monte Carlo (MC) methods are stochastic computational algorithms and these are efficient for simulating the highly complex systems. The MC approach was conceived by Ulam(1945), developed by Ulam and von Neumann (1947), and coined by Metropolis(1949)(Candy, 2007). The technique evolved during the Manhattan project in 1940s, when the scientists were investigating the calculations related to atomic weapon designs. MC methods have wide variety of applications in engineering and finance. It offers alternative approach to solve numerical integration and optimization problems. The approach has been used in the next chapter where a detailed formulation of the methods is presented. There are several variants of the particle filters available in the literature as well (Chen, 2003). These methods have been widely used in robotics and for solving the tracking problems (Thrun, 2002). The application of these methods to problems in structural mechanics is not yet widely explored. The following passage describes the work done by the researchers on implementation of these methods to structural mechanics. (Ching et al., 2006) compared the performance of the Extended kalman filter and parti- cle filter by applying theses methods on planar four-story shear building with time-varying system parameters and non-linear hysteretic damping system with unknown system param- eters. The mass of the shear building is assumed to be time invariant with m1 = m2 = m3 = m4 = 2, 50, 000kg whereas the stiffness and damping at each of the floor level changes with time. Synthetic data is generated which is contaminated with noise. The non-linear model considered is a single degree of freedom (SDOF) Bouc-Wen hysteretic damping sys- tem. He concluded that Particle filter is the better one to use, since EKF can sometimes create misleading results. Also EKF is not suitable for highly non-linear models. (Manohar and Roy, 2006) identified the parameters of nonlinear structures using dynamic state estimation techniques. They considered two single-degree of freedom nonlinear oscil- lators, namely, the Duffing oscillator and the one with Coulomb friction damping.In par- ticular,identification of parameter alpha and mu was done on noisy observations using the density based Monte Carlo filter, bootstrap filter and sequential importance sampling filter. The basic objective of the study has been to construct the posterior pdf of the augmented state vector based on all available information. 5
  • 17.
    Motivation (Nasrellah and Manohar,2011) did the combined computational and experimental study using multiple test and sensor data for structural system identification. They considered the problem of identification of parameters of a beam with spatially varying density and flexural rigidity as well as the identification of parameters of a rigidly jointed truss. It was concluded that various factors affect the accuracy of identification like number of particles used in filtering, closeness of the initial guess on system parameters to the true values, number of global iterations, noise levels in measurements and model,imperfections, the number of parameters to be identified and sensitivity of measurements with respect to the parameters being identified. Similar studies were carried out by (Namdeo and Manohar, 2007), (Ghosh et al., 2008) and (Sajeeb and Roy, 2007).The particle filter algorithm has also been used for identification of fatigue cracks in vibrating beams (Rangaraj, 2012). 1.2 Motivation State estimation is the process of using dynamic data from a system to estimate quantities that give a complete description of the state according to some representative model of it. The methods have wide application in various areas of study. It can be used in structural health monitoring to detect the changes in the dynamical properties of structural systems during earthquakes. Apart from the this the methods are applicable to better understand the nonlinear behavior of structures during seismic loading. The ability to estimate the system state in real time is useful for efficient control of structures. Models of physical system always have uncertainties associated with them. These may be due to the approximations while modeling the system or due to the noisy corrupt measurements by the sensors. Hence, obtaining the parameters of the system optimally out of the limited noise corrupted data is a challenge. The present study is different from the earlier ones since it implements the most common variants of Particle filter i.e SIS, SIR and BF to both synthetic as well as field data. Moreover, the present work attempts to identify the parameters of a real life structure subjected to multi-component non-stationary ground motion excitation using all the three algorithms. Such a study has not been conducted in the past. The algorithm is very well able to identify the natural frequency even in higher modes when the signal processing techniques are generally not capable to do so. Implementation to the online health monitoring of a large scale structure is also one of the applications. 6
  • 18.
    Organization of report 1.3Organization of report The report mainly focuses on formulation and implementation of dynamic state estimation techniques for the identification problems in structural mechanics. • Chapter 1 deals with the introduction to system identification methods and its appli- cations to several other fields of study. A brief literature review is provided where the methods used in system identification particularly dynamic state estimation have been discussed. The chapter concludes with a motivation paragraph where the importance of the present work has been described in light of the uniqueness of the work. • Chapter2 deals with developing the mathematical framework of identification meth- ods in time domain. A general background of Bayesian model is given which is followed by Monte carlo methods used for approximating the complex integrals in Bayesian theory. Till here a general background and mathematical equations are derived. A (SDOF) oscillator is taken as an example to explain the implementation of SIS and BF algorithms. Issues like degeneracy and sample impoverishment have been discussed. • Chapter 3 deals with the implementing the methods developed in chapter 2 to solve system identification problem. Here the focus is on implementing Bootstrap filter to two different class of structures. The first one is a synthetic study done on a three storied shear building model whereas the other one is using the field data for a fixed base RC framed multi-storied building. The building is subjected to multi-component non-stationary earthquake ground motions with sensors placed at the top and first story. Overall a set of 10 parameters were identified. A comparison study among the three algorithms as well as the comparison study of the resampling algorithms have been done based on the number of iterative steps for convergence as well as the values of the natural frequency identified by using the algorithms. • Chapter 4 gives the conclusions and the future work. The strong and weak points in simulations have been discussed suggesting further ways to improve the present study. • Appendix contains the MATLAB codes used in simulation. 7
  • 19.
    Organization of report Figure1.2: Schematic flowchart of system identification (Source:Soderstrom (2001)) 8
  • 20.
    Chapter 2 Dynamic StateEstimation 2.1 Bayesian Methods In many of the engineering problems, modeling of uncertain parameters is necessary for various purposes. In this context, Bayes Theoram offers the framework of modeling and inferring the uncertain models from the measurements. These methods have been applied to many different disciplines of natural sciences, social sciences and engineering, especially in statistical physics, engineering hydrology, econometrics, archeology, information sciences, medical sciences, forensic sciences, marketing, mechanical engineering, computer science, engineering geology, aerospace engineering, finance, population migration, and many other areas. In Structural Engineering , these methods have been used in system reliability, predic- tion of concrete strength , structural dynamics and system identification. Bayesian inference is very important for Structural engineering applications because of the wide variety of uncer- tainty associated with the structures. The examples of such uncertainties can be earthquake ground motion or complete time varying description of the wind pressure, material proper- ties which are difficult to determine for heterogenous materials like concrete and the number and size of cracks present in concrete. Not only this, the modeling errors and uncertain- ties are also associated with the joints.Therefore, Bayesian statistics has wide application in Structural engineering as well. In many scenarios, the solutions gained through Bayesian inference are viewed as “optimal”. 9
  • 21.
    Bayesian Methods 2.1.1 BayesianModel Updating Many real world data analysis tasks involve estimating the unknown quantities from some given observations. In such type of problems, generally the prior knowledge about the phenomenon to be modeled is available. This knowledge can be used to formulate the Bayesian models where prior knowledge about the state is updated using the likelihood function to generate a posterior distribution. Often the measurements arrive sequentially and it is possible to both carry out the offline as well as online inferences. Thus, one of the most important steps of this process is to update the states recursively once the measurements are available.The focus of dynamic state estimation techniques is to estimate the state of the system using the measurement data. The governing equation can be written as: X(t) = q(P(t), t) (2.1) where X(t) is the response of the structure when an input force P(t) is given to the system and q(.) relates the input to the output. Since the measurements are available at discrete time steps, it becomes obvious to discretize the above model equation as Xk+1 = qk(Xk, wk) (2.2) where Xk represents the state of the system at time t = k ; Xk+1 represents predicted state at time t = k + 1 and wk represents the white noise. The discretized measurement equation can be written as Yk = hk(Xk, vk) (2.3) where Yk is the measurement at time t = k corresponding to the state Xk and vk is the measurement noise similar to the model noise. However the model as well the measurement noise has been assumed as uncorrelated. The measurements from the sensors are sampled at a particular rate and can be denoted as a vector 10
  • 22.
    Bayesian Methods Mk =[Y1, Y2, ., Yk] (2.4) The objective of this formulation is to estimate the current state Xk based on the measure- ment Yk. As the model and the measurements are corrupted with noise it is required the problem of state estimate reduces to estimating the probability density function p(Xk|Mk). Since estimating p(Xk|Mk) is itself not easy, so the more simplified problem is to determine the moments of Xk. Mathematically, this can be written as: µ = ∫ Xkp(Xk|Mk)dXk (2.5) σ = ∫ (Xk − µ)T (Xk − µ)p(Xk|Mk)dXk (2.6) where µ and σ are the first moment or mean and the second moment or variance of the pdf p(Xk|Mk) respectively. In the following, a detailed derivation of the recursive Bayesian Estimation is presented, which underlines the principles of sequential Bayesian filter. Two assumptions are used to derive the recursive Bayesian Filter. • The sates follow a first order Markov process p(Xk|X0:k−1) = p(Xk|Xk−1); (2.7) • The observations are independent of the given states. At any time t, the posterior is given by the Bayes theorem as p(X0:t|Y1:t) = p(Y1:t|X0:t)p(X0:t) ∫ p(Y1:t|X0:t)p(X0:t)dX0:t (2.8) The recursive equation can be obtained as p(X0:t+1|Y1:t+1) = p(X0:t|Y1:t) p(Yt+1|Xt+1)p(Xt+1|Xt) p(Yt+1|Y1:t) (2.9) The following recursive relations are used for prediction and updating 11
  • 23.
    Bayesian Methods The predictionequation is given by p(Xt|Y1:t−1) = ∫ p(Xt|Xt−1)p(Xt−1|Y1:t−1)dXt−1 (2.10) Based on this prediction the model updating equation is p(Xt|Y1:t) = p(Yt|Xt)p(Xt|Y1:t−1) ∫ p(Yt|Xt)p(Xt|Y1:t−1)dXt (2.11) It is however difficult to compute the normalizing constant p(Y1:t) and the marginal of the posterior p(X0:t|Y1:t) as it requires evaluation of complex high dimensional integrals. The above expressions are modified in the following way when the system and the model noise are also present. Adopting the notations as p(Xk|Yk−1) is the estimate of the state at time k based on the measurements Yk−1 and p(Xk|Yk) denotes the pdf of the state at time k based on the mea- surements Yk. Therefore, the first one is the priori pdf or the prediction, while the latter is the posteriori pdf or the correction to the state once the measurements are available at time k. It is also assumed that p(X1|M0) = p(X1) (2.12) is known. The prediction equation can be expressed as: p(Xk|Mk−1) = ∫ p(Xk|Xk−1)p(Xk−1|Mk−1)dXk−1 (2.13) Here p(Xk|Xk−1) can be derived from the Eq 2.2. The conditional density can be used to write the following expressions. p(Xk|Xk−1) = ∫ p(Xk|Xk−1, wk−1)p(wk−1|Xk−1)dwk−1 (2.14) Since wk is independent of the state, it can be written that p(wk−1|Xk−1) ≡ p(wk−1) (2.15) 12
  • 24.
    Bayesian Methods It canbe clearly seen from the process equation that if Xk−1 and wk−1 are known, then Xk can be obtained deterministically from the process equation 2.2.Therefore the pdf of p(Xk|Xk−1, wk−1) can be mathematically written as p(Xk|Xk−1, wk−1) ≡ δ(Xk − fk−1(Xk−1, wk−1)) (2.16) where δ(.) is the Dirac-Delta function. Substituting in this in the above Eq 2.14, we get p(Xk|Xk−1, w(k − 1)) = ∫ δ(Xk − fk−1(Xk−1, wk−1))p(wk−1|Xk−1)dwk−1 (2.17) The above expression can be substituted in Eq 2.10 As soon as the measurement Yk is available at the time step k the prediction can be updated using the Bayesian relation p(Xk|Mk) = p(Yk|Xk)p(Xk|Mk−1) p(Yk|Mk−1) (2.18) where the normalizing denominator is given by p(Yk|Mk−1) = ∫ p(Yk|Xk)p(Xk|Mk−1)dXk (2.19) The only unknown in the Eq 2.18 is p(Yk|Xk) which can be obtained as: p(Yk|Xk) = ∫ p(Yk|Xk, vk)p(vk)dvk (2.20) which again takes the form of the Dirac-Delta function if Xk and vk are known. The mea- surement Yk is obtained from the measurement Eq 2.3. Thus the above equations form the basis of the recursive Bayesian Model updating. If the functions f(.) and h(.) are linear and the noise wk and vk are Gaussian; then the closed form expressions of the above integrals are available and this leads to the well- known Kalman Filter,(Kalman, 1960). However if the f(.) and h(.) are non-linear, then several other methods have been prescribed in literature like EKF, (Hoshiya and Saito, 1984). However the most recent interest is to exploit the cheap and faster computational facilities to develop methods based on the Monte Carlo Simulations for approximating the integrals in the above equations. 13
  • 25.
    Monte Carlo Methods 2.2Monte Carlo Methods The underlying principle of the MC methods is that they utilize Markov chain theory. The resulting empirical distribution converges to the desired posterior distribution through random sampling. The method is widely used in signal processing where one is interested in determining the moment of the stochastic signal f(X) with respect to some underlying probabilistic distribution p(X). However the similar concept is used in system identification problem where one is interested to estimate the expected values of the system parameters. The methods have the great advantage since these are not subject to constraints of linear- ity and Gaussianity. The methods as well have appealing convergence properties. Several variants of MC methods are available in the literature. This includes Perfect Monte carlo sampling,Sequential importance sampling, Sequential importance resampling and the Boot- strap particle filter. The following section presents the mathematical formulation of each of the method. The concept has been illustrated by solving single degree of freedom oscillator at the end of the chapter. 2.2.1 Perfect Sampling & Sequential Importance Sampling (SIS) Monte Carlo methods use statistical sampling and estimation techniques to evaluate the solutions to mathematical problems. The underlying mathematical concept of Monte Carlo approximation is simple. Consider the statistical problem of estimating the expected value of E[f(x)] with respect to some probabilistic distribution p(X): E[f(X)] = ∫ f(X)p(X)dX (2.21) Here the motivation is to integrate the above expression using stochastic sampling techniques rather than using the numerical integration techniques. Such a practice is useful to estimate complex integral where it is difficult to obtain the closed form solution. In MC approach, the required distribution is represented by random samples rather than analytic function. The approximation becomes better and more exact when the number of number of such random samples increases. Thus, MC integration evaluates Eq 2.21 by drawing samples X(i) from p(X). Assuming perfect sampling, the empirical distribution is given by p(x) = 1 N N∑ i=1 δ(X − X(i)) (2.22) 14
Substituting Eq 2.22 into Eq 2.21 gives

E[f(X)] = \int f(X)\, p(X)\, dX \simeq \frac{1}{N} \sum_{i=1}^{N} f\big(X^{(i)}\big)    (2.23)

A generalization of this approach is importance sampling, where the integral is written as

I = \int p(X)\, dX = \int \frac{p(X)}{q(X)}\, q(X)\, dX    (2.24)

with

\int q(X)\, dX = 1    (2.25)

Here q(X) is known as the importance sampling distribution, since it samples p(X) non-uniformly, giving more importance to some values of p(X). Eq 2.24 can then be written as

I = E_q\!\left[\frac{p(X)}{q(X)}\right] \simeq \frac{1}{N} \sum_{i=1}^{N} \frac{p\big(X^{(i)}\big)}{q\big(X^{(i)}\big)}    (2.26)

where the X^{(i)} are drawn from the importance distribution q(\cdot). The central theme of importance sampling is to choose an importance distribution q(\cdot) that approximates the target distribution p(\cdot) as closely as possible.

Using this concept it is possible to approximate the posterior distribution. Since it is generally not easy to sample from the posterior directly, importance sampling is coupled with an easy-to-sample proposal distribution q(X_t | Y_t); this is one of the most important steps of the Bayesian importance sampling methodology. The mean of f(X_t) can then be estimated as

E[f(X_t)] = \int f(X_t)\, p(X_t | Y_t)\, dX_t    (2.27)

where p(X_t | Y_t) is the posterior distribution. Inserting the importance proposal density q(X_t | Y_t), the estimate becomes

F(t) = E[f(X_t)] = \int f(X_t)\, \frac{p(X_t | Y_t)}{q(X_t | Y_t)}\, q(X_t | Y_t)\, dX_t    (2.28)
Applying Eq 2.18 (Bayes' rule) to the posterior distribution and defining the weighting function as

\tilde{W}(t) = \frac{p(X_t | Y_t)}{q(X_t | Y_t)} = \frac{p(Y_t | X_t)\, p(X_t)}{p(Y_t)\, q(X_t | Y_t)}    (2.29)

the calculation of \tilde{W}(t) requires knowledge of the normalising constant p(Y_t), given by

p(Y_t) = \int p(Y_t | X_t)\, p(X_t)\, dX_t    (2.30)

This normalising constant is generally not available, and hence a new weight W(t) is defined by substituting Eq 2.29 into Eq 2.28:

F(t) = \frac{1}{p(Y_t)} \int f(X_t)\, \frac{p(Y_t | X_t)\, p(X_t)}{q(X_t | Y_t)}\, q(X_t | Y_t)\, dX_t
     = \frac{1}{p(Y_t)} \int W(t)\, f(X_t)\, q(X_t | Y_t)\, dX_t
     = \frac{1}{p(Y_t)}\, E_q\big[W(t)\, f(X_t)\big]    (2.31)

where the unnormalised weight satisfies

W(t)\, q(X_t | Y_t) = p(Y_t | X_t)\, p(X_t)    (2.32)

Thus the normalising constant in Eq 2.30 can be replaced using Eq 2.32:

F(t) = \frac{E_q[W(t) f(X_t)]}{p(Y_t)}
     = \frac{E_q[W(t) f(X_t)]}{\int W(t)\, q(X_t | Y_t)\, dX_t}
     = \frac{E_q[W(t) f(X_t)]}{E_q[W(t)]}    (2.33)

Now, if the samples are drawn from the proposal distribution q(X_t | Y_t), then by the perfect sampling argument
we have

\tilde{q}(X) = \frac{1}{N} \sum_{i=1}^{N} \delta\big(X - X^{(i)}\big)    (2.34)

and therefore the normalised weight \tilde{w}^{i} of the i-th sample can be written as

\tilde{w}^{i} = \frac{W^{i}(t)}{\sum_{i=1}^{N} W^{i}(t)}    (2.35)

where

W^{i}(t) = \frac{p(Y_t | X_t^{i})\, p(X_t^{i})}{p(Y_t)\, q(X_t^{i} | Y_t)}    (2.36)

The final estimate of Eq 2.28 therefore becomes

F(t) \approx \sum_{i=1}^{N} \tilde{w}^{i}\, f\big(X_t^{(i)}\big)    (2.37)

As the number of samples N \to \infty, the approximation of the posterior becomes

p(X_t | Y_t) \approx \sum_{i=1}^{N} \tilde{w}^{i}\, \delta\big(X_t - X_t^{(i)}\big)    (2.38)

With this mathematical framework in place, the expressions for sequentially assimilating the measurement available at time instant t = k can be derived. The posterior is approximated as

p(X_k | Y_{1:k}) \approx \sum_{i=1}^{N} \tilde{w}_k^{i}\, \delta\big(X_k - X_k^{i}\big)    (2.39)

where \delta(\cdot) is the Dirac delta function and \tilde{w}_k^{i} is the normalised weight of the i-th particle at time k. The joint posterior factorises as

p(X_{0:k} | Y_{1:k}) \propto p(Y_k | X_{0:k}, Y_{1:k-1})\, p(X_{0:k} | Y_{1:k-1})
                     = p(Y_k | X_k)\, p(X_k | X_{0:k-1}, Y_{1:k-1})\, p(X_{0:k-1} | Y_{1:k-1})
                     = p(Y_k | X_k)\, p(X_k | X_{k-1})\, p(X_{0:k-1} | Y_{1:k-1})    (2.40)
An importance distribution X_{0:k}^{i} \sim q(X_{0:k} | Y_{1:k}) can now be constructed and the corresponding (normalised) importance weights computed as

\tilde{w}_k^{i} \propto \frac{p(Y_k | X_k^{i})\, p(X_k^{i} | X_{k-1}^{i})\, p(X_{0:k-1}^{i} | Y_{1:k-1})}{q(X_{0:k}^{i} | Y_{1:k})}    (2.41)

The recursive form of the importance distribution can be written as

q(X_{0:k} | Y_{1:k}) = q(X_k | X_{0:k-1}, Y_{1:k})\, q(X_{0:k-1} | Y_{1:k-1})    (2.42)

Substituting Eq 2.42 into Eq 2.41 gives

\tilde{w}_k^{i} = \frac{p(Y_k | X_k^{i})\, p(X_k^{i} | X_{k-1}^{i})\, p(X_{0:k-1}^{i} | Y_{1:k-1})}{q(X_k^{i} | X_{0:k-1}^{i}, Y_{1:k})\, q(X_{0:k-1}^{i} | Y_{1:k-1})}    (2.43)

so that the recursive weight update becomes

\tilde{w}_k^{i} \propto \frac{p(Y_k | X_k^{i})\, p(X_k^{i} | X_{k-1}^{i})}{q(X_k^{i} | X_{0:k-1}^{i}, Y_{1:k})}\, \tilde{w}_{k-1}^{i}    (2.44)

The algorithm therefore works in the following way:

• Initialization: draw N samples X_0^{i} from the prior,
  X_0^{i} \sim p(X_0)    (2.45)

• Prediction: draw N new samples X_k^{i} from the importance distribution,
  X_k^{i} \sim q(X_k | X_{0:k-1}^{i}, Y_{1:k})    (2.46)

• Update: calculate the new weights according to Eq 2.44. Once the weights are updated, the posterior can be approximated using Eq 2.39.

One of the major problems associated with the SIS filter is degeneracy, where after a few iterations all but one particle carry negligible weight. The variance of the importance weights increases with time, and the degeneracy phenomenon cannot be controlled. A suitable measure of the degeneracy of the algorithm is the effective sample size
(Gordon et al., 1993), N_{eff}, which can be defined as

N_{eff} = \frac{N_s}{1 + \mathrm{Var}(w_k^{*i})}    (2.47)

where w_k^{*i} is obtained from Eq 2.29. An estimate of N_{eff} is given by

\hat{N}_{eff} = \frac{1}{\sum_{i=1}^{N} (\tilde{w}_k^{i})^2}    (2.48)

where \tilde{w}_k^{i} is the normalised weight obtained from Eq 2.44. When N_{eff} becomes smaller than N it indicates degeneracy, and a very small N_{eff} indicates severe degeneracy. To counter this, (Arulampalam et al., 2002) suggested two remedies:

• Good choice of importance density: choose the importance density such that \mathrm{Var}(w_k^{*i}) is reduced and hence N_{eff} increases.

• Resampling: this is the step that differentiates the SIR filter from the SIS filter; it is discussed in detail in the following section.

Both of these ideas form the basis of "Sequential Importance Resampling", also known as "Adaptive Particle Filters", discussed in the following section; a short sketch of the degeneracy check based on Eq 2.48 is given below.
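The following MATLAB fragment is a minimal sketch of this check: given the particle weights at time k, it normalises them, evaluates \hat{N}_{eff} from Eq 2.48 and flags whether resampling is needed. The threshold of N/2 is an assumed, commonly used choice rather than a value prescribed in this report, and the weights are placeholders.

% Minimal sketch of the degeneracy check, Eq 2.48 (threshold N/2 assumed).
w    = rand(100,1);              % placeholder unnormalised weights at time k
wn   = w/sum(w);                 % normalised weights, Eq 2.35
Neff = 1/sum(wn.^2);             % estimated effective sample size, Eq 2.48
N    = numel(wn);
resample_needed = (Neff < N/2);  % trigger the resampling step below the threshold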
2.2.2 Sequential Importance Resampling (SIR) & Bootstrap Filter

The SIR filter is an MC method that can be applied to recursive Bayesian filtering problems. To use the SIR algorithm, both the state dynamics Eq 2.1 and the measurement equation Eq 2.3 must be known, and it must be possible to sample from the process noise distribution as well as from the prior. A likelihood function p(Y_k | X_k) is needed to compute the particle weights. The SIR algorithm is very similar to the SIS filter except for the choice of the optimal importance density and the resampling step included in SIR. It can be derived from the SIS algorithm by an appropriate choice of the importance density; the optimal importance density used in SIR is

q(X_k^{i} | X_{0:k-1}^{i}, Y_{1:k}) = p(X_k^{i} | X_{k-1}^{i}, Y_k)    (2.49)

Substituting Eq 2.49 into Eq 2.44, the updated weight becomes

w_k^{i} \propto w_{k-1}^{i}\, p(Y_k | X_{k-1}^{i})    (2.50)

This optimal importance distribution can be used when the state space is finite, and the present report adopts the same assumption for the importance density. The report, however, deals with the problem of system identification, where the interest lies in identifying the system parameters rather than tracking the state vector. The algorithm can be implemented in the following manner:

• Draw particles X_k^{i} from the importance distribution,
  X_k^{i} \sim q(X_k | X_{0:k-1}^{i}, Y_{1:k}),  i = 1, \dots, N    (2.51)

• Calculate the new weights from Eq 2.44 for all the particles and normalise them to unity.

• If N_{eff} calculated from Eq 2.48 becomes too low, perform the resampling step.

• Interpret each weight w_k^{i} as the probability of selecting the sample index i in the set {X_k^{i}}, i = 1, \dots, N.

• Draw N samples from this discrete distribution and replace the old sample set with the new one.

• Set all weights to the constant value w_k^{i} = 1/N.

The Bootstrap filter is a special case of the SIR filter in which the dynamic model is used as the importance distribution, as in Eq 2.50, and the resampling is done at every step. A brief algorithm is presented here for clarity; the problem formulation section gives the detailed implementation of the Bootstrap filter for the system identification problem.

• Draw points X_k^{i} from the dynamic model,
  X_k^{i} \sim p(X_k | X_{k-1}^{i}),  i = 1, \dots, N    (2.52)
• Calculate the new weights and normalise them to unity,
  w_k^{i} \propto p(Y_k | X_k^{i}),  i = 1, \dots, N    (2.53)

• Perform resampling after each iteration.

One of the important steps in the above algorithm is resampling from the discrete probability mass function formed by the normalised weights. Resampling ensures that particles with larger weights are more likely to be preserved than particles with smaller weights. Although resampling solves the degeneracy problem, it introduces sample impoverishment, which is explained through an example problem solved at the end of the chapter. A wide variety of resampling algorithms is available in the literature (Li, 2013). This report discusses the traditional resampling strategies and compares them in the light of the system identification problem. The traditional algorithms considered are multinomial (simple) resampling, systematic and stratified resampling, and wheel resampling. A brief description of each algorithm is presented below.

Multinomial Resampling

Multinomial resampling, also known as binary-search or simple resampling, is one of the simplest resampling algorithms. It generates N random numbers u_t^{n} and uses them to sample particles from the array containing the normalised particle weights w^{i}. The cumulative sum of the weights is used to select the interval in which each random number lies; the selection of the m-th particle must satisfy

\sum_{i=1}^{m-1} w^{i} < u_t^{n} < \sum_{i=1}^{m} w^{i}    (2.54)

Since the sampling of each particle is purely random, a given particle can be selected anywhere between zero and N times. A short sketch of this scheme is given below.
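The following MATLAB fragment is a minimal sketch of multinomial resampling as described by Eq 2.54; the particle values and weights are placeholders, and the cumulative-sum search is written explicitly so that no toolbox is required.

% Minimal sketch of multinomial (simple) resampling, Eq 2.54.
N         = 100;
particles = randn(N,1);                 % placeholder particle values
wn        = rand(N,1);  wn = wn/sum(wn);
csum      = cumsum(wn);  csum(end) = 1; % guard against round-off at the top end
idx       = zeros(N,1);
for n = 1:N
    u      = rand;                      % u_t^n ~ U(0,1)
    idx(n) = find(csum >= u, 1);        % first interval whose upper bound exceeds u
end
resampled = particles(idx);             % new, equally weighted particle set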
Stratified Resampling

Stratified resampling divides the total population into sub-populations by splitting the interval (0, 1] into N equal sub-intervals of width 1/N, i.e. the disjoint sub-intervals (0, 1/N] ∪ (1/N, 2/N] ∪ ... ∪ (1 - 1/N, 1]. One random number is drawn from each sub-interval as

u^{n} \sim U\!\left(\frac{n-1}{N}, \frac{n}{N}\right),  n = 1, 2, \dots, N    (2.55)

After each random number is generated, the corresponding particle is selected using the cumulative sum of the normalised weights, as in Eq 2.54.

Systematic Resampling

Systematic resampling is similar to stratified resampling, except that only the first random number is drawn from the uniform distribution on (0, 1/N]; the remaining numbers are generated deterministically as

u_t^{n} = u_t^{1} + \frac{n-1}{N},  n = 2, 3, \dots, N    (2.56)

The literature suggests that systematic resampling is computationally more efficient because of the smaller number of random numbers that have to be generated (Li, 2013).

Wheel Resampling

In this resampling method the particles are arranged on a wheel, with each particle occupying an arc of the circumference proportional to its normalised weight: particles with larger weights occupy more space and those with smaller weights occupy less. A loop is then run N times, and at each pass a particle is chosen with probability proportional to its arc length.

Although the resampling step reduces degeneracy, it introduces problems of its own. To begin with, it limits the opportunity to parallelise the algorithm, since all the particles must be combined. Moreover, resampling leads to sample impoverishment, because particles with higher weights are selected multiple times, which reduces the diversity of the particle set. In the case of very small process noise, all the particles eventually collapse to a single point. Different researchers have tried various schemes to deal with sample impoverishment (Nasrellah and Manohar, 2011). A sketch of the systematic scheme is given below, and the following section demonstrates the implementation of the algorithm for a single degree of freedom (SDOF) oscillator.
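As an illustration of Eqs 2.55 and 2.56, the fragment below sketches systematic resampling for a set of normalised weights; switching the commented line gives the stratified variant. The weights are placeholders and the code is not taken from the appendix.

% Minimal sketch of systematic resampling, Eq 2.56 (stratified variant noted).
wn   = rand(100,1);  wn = wn/sum(wn);   % placeholder normalised weights
N    = numel(wn);
u    = ((0:N-1)' + rand)/N;             % systematic: one U(0,1/N] draw, then deterministic
% u  = ((0:N-1)' + rand(N,1))/N;        % stratified: one independent draw per sub-interval
csum = cumsum(wn);  csum(end) = 1;      % guard against round-off at the top end
idx  = zeros(N,1);
j    = 1;
for n = 1:N                             % single pass through the cumulative weights
    while csum(j) < u(n)
        j = j + 1;
    end
    idx(n) = j;                         % index of the selected particle
end
% idx now holds the resampled particle indices; all weights are reset to 1/N.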
2.3 Examples

In this section numerical examples are presented to demonstrate the implementation of particle filters. A single-degree-of-freedom (SDOF) oscillator excited by the El-Centro earthquake is considered. The first, simple problem aims at identifying the stiffness of the SDOF oscillator, given its response to the earthquake excitation; both the SIS and the Bootstrap filter are used to solve this example. The measurement data are generated synthetically by solving the forward problem with assumed known values of the system parameters. Once the synthetic measurements are available, the inverse problem is solved using the time domain methods described above. A schematic diagram of the oscillator is shown in Fig 2.1. The governing equation of motion of the SDOF oscillator is the second order differential equation

M\ddot{u}(t) + C\dot{u}(t) + Ku(t) = -M\ddot{u}_g(t)    (2.57)

where M is the mass, C the damping, K the stiffness and \ddot{u}_g(t) the ground acceleration. Here the ground motion of the 1940 El-Centro earthquake is considered; El-Centro was the first earthquake to be recorded by a strong motion seismograph and had a magnitude of 6.9. For the forward problem, M is taken as 40 kg, C as 15 N-s/m and K as 6 × 10^4 N/m, which gives a natural frequency of 38.72 rad/s. The problem at hand is to identify the stiffness of the SDOF oscillator.

The forward problem is solved numerically with a time marching algorithm, namely the Newmark-β method, an implicit, unconditionally stable scheme (Newmark, 1959); the MATLAB code for the Newmark-β algorithm is provided in the appendix. The ground excitation due to the El-Centro earthquake is plotted in Fig 2.2. The total duration of the excitation is 40 s and the time step used for the forward problem is 0.01 s, so the record contains 4000 data points. The response of the oscillator is shown in Fig 2.3; a generic sketch of a single Newmark-β integration step is given below.
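For reference, a minimal sketch of one step of the Newmark-β scheme for Eq 2.57 is given here, written for the linear-acceleration parameters β = 1/6 and γ = 1/2 used later in the report. It is a generic textbook implementation and not the Newmark_Beta_MDOF routine referred to in the appendix; the state values and ground acceleration are placeholders.

% Minimal sketch of one Newmark-beta step for the SDOF equation 2.57
% (beta = 1/6, gamma = 1/2, i.e. the linear acceleration variant).
M = 40;  C = 15;  K = 6e4;  dt = 0.01;         % values from the example above
beta = 1/6;  gamma = 1/2;
% current state (u, v, a) and ground acceleration at the next instant (placeholders)
u = 0;  v = 0;  a = 0;  ag_next = 0.1*9.81;
p_next = -M*ag_next;                            % effective external force
keff = K + gamma/(beta*dt)*C + M/(beta*dt^2);   % effective stiffness
peff = p_next ...
     + M*( u/(beta*dt^2) + v/(beta*dt) + (1/(2*beta)-1)*a ) ...
     + C*( gamma/(beta*dt)*u + (gamma/beta-1)*v + dt*(gamma/(2*beta)-1)*a );
u_next = peff/keff;                             % displacement at t + dt
a_next = (u_next-u)/(beta*dt^2) - v/(beta*dt) - (1/(2*beta)-1)*a;
v_next = v + dt*((1-gamma)*a + gamma*a_next);   % velocity at t + dt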
The SIS filter is now applied to identify the stiffness of the oscillator. The number of particles is 50, and the initial stiffness values are generated from a uniform distribution over the domain [10000, 90000]. The algorithm is dependent on the parameter values generated at time t = 0, and the identified value over the entire time history is shown in Fig 2.4. Hence, the algorithm acts as a filter and returns the best value among all the values generated at t = 0. The dependence on the initial domain can be bypassed, and the algorithm made more general, by mutating the particles through the addition of a small Gaussian noise with a controlled value of σ obtained from several test runs of the algorithm. The example is also useful for studying the evolution of the particle weights with time: the degeneracy phenomenon explained above can be seen in Fig 2.5, where for simplicity and clarity only the time histories of 4 particles are shown. The evolution of the posterior density with time is given in Fig 2.6; the red dots trace the weight of the best particle across the time history. The estimated states and the states of the original system are plotted in Fig 2.7. Degeneracy is evident in Fig 2.6, where the weights of all the particles become zero except for the one traced by the red dots.

The same SDOF example is now solved with the Bootstrap filter, with the focus on the degeneracy phenomenon and on the sample impoverishment introduced by resampling. The problem of degeneracy is solved by introducing the resampling step. Fig 2.8 shows the evolution of the posterior distribution at iteration number i = 1 (initial), i = 100 (intermediate) and i = 4001 (final). Although degeneracy is successfully bypassed, sample impoverishment dominates: at the last iteration, 50 copies of the best particle remain. The ratio of the identified stiffness to its original value and the convergence of the stiffness parameter are shown in Fig 2.9; the standard deviation of the stiffness samples becomes zero, indicating convergence of the algorithm. To solve the problem of sample impoverishment, a small noise is added to the updated posterior; in the present study the noise level is 2%, a value obtained after several test runs with different noise levels. The addition of this small noise maintains the diversity of the samples, at the cost of increased statistical fluctuations. The expected value of the identified parameter is shown in Fig 2.10 and the corresponding convergence in Fig 2.11; the identified value matches the original value closely and convergence is achieved.

With the mathematical framework and the solved examples as background, the problem of system identification is formulated in the next chapter for both small and large scale structures. The following chapter focuses on the implementation of the SIS, SIR and Bootstrap filters for solving the parameter estimation problem for a synthetic model as well as for the fixed base BRNS building. The synthetic model is solved using synthetically generated data, with comparative results from several resampling algorithms, while the BRNS building is solved for multi-component ground excitations with sensor measurements in both directions.
Figure 2.1: Schematic diagram of SDOF system

Figure 2.2: Ground excitation due to El-Centro earthquake
Figure 2.3: Response of oscillator to ground excitation

Figure 2.4: Estimation of ratio of identified stiffness to original stiffness as a function of time
Figure 2.5: Evolution of weights of particles over time (legend: K1 = 6.02E4, K2 = 6.77E4, K3 = 6.45E4, K4 = 1.57E4 N/m)

Figure 2.6: Evolution of posterior density with time
Figure 2.7: State estimation from the original and the identified system

Figure 2.8: Evolution of the posterior distribution at iteration number a) initial, b) intermediate (100) and c) final
Figure 2.9: Mean and standard deviation of the identified stiffness parameter

Figure 2.10: Expected value of stiffness with addition of noise (2% of expected value)
Figure 2.11: Convergence of the stiffness with addition of noise (2% of expected value)
Chapter 3

Parameter Estimation of LTI Systems

3.1 System Identification of Linear Time Invariant (LTI) Systems

We consider the problem of identifying the system parameters using the Bootstrap particle filter algorithm. The natural frequencies are calculated by solving the eigenvalue problem involving the mass and stiffness matrices. The general equation of motion of a linear system can be written as

M\ddot{u}(t) + C\dot{u}(t) + Ku(t) = -M\ddot{u}_g(t)    (3.1)

where M is the mass matrix, C the damping matrix, K the stiffness matrix, \ddot{u}_g(t) the ground excitation, and u(t), \dot{u}(t) and \ddot{u}(t) are respectively the displacement, velocity and acceleration at the nodes where the sensors are placed for recording. The system parameters to be identified are collected in the vector ϕ, which may contain parameters such as stiffness, damping and mass density. The above equation can be represented in continuous state space form as

\dot{Z}(t) = \tau\big(Z(t), ϕ(t), t\big)    (3.2)

where Z(t) is the state vector of the vibrating system and \tau(\cdot) relates the state of the system to its first order time derivative. The problem at hand is to identify the system parameters ϕ. The particle filter algorithm simulates particles based on the updated posterior distribution of the state, and more samples are generated from the region where the likelihood is greater.
To solve the identification problem for the parameters ϕ, the state vector is augmented as X_k = [Z_k, ϕ_k], and, modelling the process noise as a sequence of i.i.d. random variables w_k, Eq 3.2 can be discretized in the form of Eq 2.2. The dimension of the problem is then the sum of the dimensions of Z and ϕ, so that both the state vector and the parameters can be identified. In the system identification problem, however, one is generally interested in identifying the system parameters rather than in tracking the state of the system. (Nasrellah and Manohar, 2011) suggested that a large part of the computational effort can be saved by formulating the problem in terms of the system parameters alone. Hence, for systems that remain invariant with time, the system equation can be expressed as

\frac{dϕ}{dt} = 0,  ϕ_j(0) = ϕ_0,  j = 1, 2, \dots, n    (3.3)

where ϕ_0 is the value of the system parameters at time t = 0. The discrete version of this equation is

ϕ_{k+1} = ϕ_k + w_k    (3.4)

where ϕ_k is the vector of system parameters at time k and w_k is the model noise. The corresponding measurement equation can be written as

Y_k = h_k(ϕ_k) + v_k    (3.5)

The advantage of this modelling is that the dimension of the state vector reduces to the dimension of the ϕ vector, and the associated computational effort reduces accordingly. The MATLAB code for the Bootstrap particle filter for system identification of an LTI system is given in the appendix; the implementation and the key steps of the algorithm are discussed below (a sketch of the weighting and estimation steps 4 to 7 follows the list).

1. The algorithm starts by simulating N samples of all the parameters to be identified, ϕ_0, from the assumed pdf of ϕ_0 at time t = 0. The random particles are generated within a suitable domain defined by upper and lower bounds; these are also known as the prior estimates.

2. The next step involves solving N linear forward problems, Eq 3.1, one for each prior estimate ϕ_{k-1}. The forward problems are solved using the Newmark-β
algorithm, where the values of the parameters α and δ are taken as 1/6 and 1/2 respectively.

3. The predicted values obtained from step 2 are compared with the measured values. The measurements are either available from sensor recordings or are generated synthetically using Eq 3.5; in this work both synthetic and field measurements are considered.

4. The comparison between the predicted values and the measurement data is made through the likelihood function p(Y_k | X_k) at time t = k. The likelihood is modelled as a normal distribution centred on the measurement with a small standard deviation. Thus each particle propagated to t = k receives a weight.

5. The calculated weights are normalised using

δ_j = \frac{p(Y_k | ϕ_{kj})}{\sum_{j=1}^{N} p(Y_k | ϕ_{kj})}    (3.6)

The normalised weights are passed to the resampling algorithm to generate the sample for the next iterate; they constitute the discrete probability mass function for the next iterate and represent the posterior estimate of ϕ_k at time step k.

6. The mean of the estimates is calculated by averaging over the ensemble,

μ = \frac{\sum_{j=1}^{N} ϕ_{kj}}{N}    (3.7)

7. The variance of the samples across the ensemble is calculated as

σ^2 = \frac{1}{N} \sum_{j=1}^{N} (ϕ_{kj} - μ)^T (ϕ_{kj} - μ)    (3.8)

8. The above steps are repeated by incrementing the time step to t = k + 1, and in this way the filtering is carried out over the entire record.

The following sections describe the models and the results obtained upon implementing the SIS, SIR and Bootstrap filters to solve the system identification problem.
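The fragment below is a minimal sketch of steps 4 to 7 at a single time step k, assuming that pred(j) holds the response predicted by the j-th forward solve and y_k is the corresponding measurement; the error covariance value of 0.001 is the one quoted later for the synthetic study, and all variable names are placeholders.

% Minimal sketch of steps 4-7 at one time step k (placeholder variables).
N     = 100;
phi   = 1e4 + 8e4*rand(N,1);           % prior particles of a single stiffness parameter
pred  = randn(N,1);                    % placeholder: response predicted by the N forward solves
y_k   = 0.05;                          % placeholder: measured response at time k
R     = 0.001;                         % error covariance used in the likelihood
like  = exp(-(y_k - pred).^2/(2*R))/sqrt(2*pi*R);  % Gaussian likelihood, step 4
delta = like/sum(like);                % normalised weights, Eq 3.6
% resample according to the weights (multinomial scheme of chapter 2)
csum  = cumsum(delta);  csum(end) = 1;
idx   = arrayfun(@(u) find(csum >= u, 1), rand(N,1));
phi   = phi(idx);
mu    = mean(phi);                     % ensemble mean, Eq 3.7
sig2  = mean((phi - mu).^2);           % ensemble variance, Eq 3.8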
3.1.1 Synthetic Experiment

Fig 3.1 shows the plan and elevation of the three storey shear building model kept in the Structural Engineering laboratory of the Department of Civil Engineering, IIT Guwahati. Synthetic measurements are generated from known values of the system parameters, and the results are then validated using all three filter algorithms described in the previous chapter. The lumped mass of each slab (600 × 300 × 10 mm) has been calculated for solving the forward problem, with the density of steel taken as 8400 kg/m3. The classical damping matrix is obtained by assuming Rayleigh damping (Chopra, 2007),

C = αM + βK    (3.9)

where M, K and C are the mass, stiffness and damping matrices respectively and α and β are the coefficients. For the given model, the mass and stiffness matrices are

M = \begin{bmatrix} m_1 & 0 & 0 \\ 0 & m_2 & 0 \\ 0 & 0 & m_3 \end{bmatrix}    (3.10)

K = \begin{bmatrix} k_1 + k_2 & -k_2 & 0 \\ -k_2 & k_2 + k_3 & -k_3 \\ 0 & -k_3 & k_3 \end{bmatrix}    (3.11)

where m_1, m_2 and m_3 are the lumped masses at the floor levels and k_1, k_2 and k_3 are the storey stiffnesses. The coefficients α and β can be obtained from specified damping ratios ζ_i and ζ_j for the i-th and j-th modes respectively. If both modes are assumed to have the same damping ratio ζ, then

α = ζ \frac{2 ω_i ω_j}{ω_i + ω_j},   β = ζ \frac{2}{ω_i + ω_j}    (3.12)

where ω_i and ω_j are the natural frequencies of the system in the i-th and j-th modes respectively. The response of the model has been calculated using the parameter values given in Table 3.1; this constitutes the forward problem, and a short sketch of the corresponding model assembly is given below.
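A minimal sketch of this model assembly is given below, using the masses and stiffnesses reported in Table 3.1; the 2% damping ratio assumed here for the Rayleigh coefficients is only illustrative, since the report specifies the storey damping values directly.

% Minimal sketch: assemble M, K and a Rayleigh C for the three storey shear
% model of Table 3.1 and recover its natural frequencies.
m  = [15.2 15.2 15.2];                 % lumped floor masses (kg), Table 3.1
k  = [41987 76842 74812];              % storey stiffnesses (N/m), Table 3.1
M  = diag(m);
K  = [k(1)+k(2), -k(2),      0;
      -k(2),     k(2)+k(3), -k(3);
       0,       -k(3),       k(3)];    % Eq 3.11
[~,D] = eig(K,M);                      % generalised eigenvalue problem
wn    = sort(sqrt(diag(D)));           % natural frequencies (rad/s)
fn    = wn/(2*pi);                     % should be close to the values in Table 3.1 (Hz)
zeta  = 0.02;                          % assumed damping ratio, for illustration only
alpha = zeta*2*wn(1)*wn(2)/(wn(1)+wn(2));   % Eq 3.12
beta  = zeta*2/(wn(1)+wn(2));
C     = alpha*M + beta*K;              % Rayleigh damping matrix, Eq 3.9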
The response of the structure has been computed for the ground motion excitations of the 1940 El-Centro earthquake, the 1989 Lomaprieta earthquake, the 1995 Kobe earthquake, the 1999 Chichi earthquake and the 2004 Parkfield earthquake. The ground motion excitations are plotted in Fig 3.3, and the response of the model can be plotted for any of these excitations; for illustration, the responses of all three storeys of the model to the El-Centro and Lomaprieta earthquakes are shown in Fig 3.4 and Fig 3.5.

The inverse problem starts by simulating random values, from a uniform distribution, for the parameters to be identified at time t = 0; here random values are simulated for the stiffness and damping at every floor level. The stiffness values are simulated over the domain 10000 to 90000 N/m and the damping values over 0 to 50 N-s/m. The number of particles generated at t = 0 is 100, and this number remains constant through every iteration of the algorithm; the choice of the number of particles depends on the computational time and on the number of unknown parameters. For this study the identification problem is solved using all three filters, SIS, SIR and Bootstrap, and the same set of 100 random particles generated at t = 0 is used for all three filters for a given ground motion, so that the algorithms can be compared on an equal footing. A normal distribution is used to calculate the likelihood, i.e. the particle weights, with the error covariance chosen as 0.001. The estimation of the parameters becomes more accurate as the number of particles increases, but the computational time increases as well.

To keep the same order of presentation as in the example problem of chapter 2, the synthetic three storey model is first solved using the SIS and SIR filters, considering all five ground motions mentioned above. Figs 3.6 and 3.7 show the parameter values identified with the SIS and SIR filters; if the pool of simulated samples is kept the same, the identified values converge to the same result, and hence only the plots for the El-Centro ground motion are shown. In the SIR implementation, the resampling step is invoked whenever N_eff computed from Eq 2.48 falls below a threshold. Finally, the Bootstrap filter is implemented with resampling after every iteration. Table 3.2 lists the identified natural frequencies of the first three modes: for a given ground motion, all three filters return the same frequencies. The algorithms have also been compared on the basis of the number of iterative steps required for convergence; Table 3.3 shows that the Bootstrap filter is slightly faster than the other two, converging in the same number of iterations in some cases and slightly earlier in others.

Several traditional resampling algorithms have also been compared, on the basis of the percentage error in the
identified values and the number of time steps required to attain convergence. The plots are given for the El-Centro and Lomaprieta earthquakes, since the methodology remains the same for the other excitations, for which the results are tabulated and compared. The mean ratio of the identified system parameters to their original values and the standard deviation of the parameters are plotted in Figs 3.8 and 3.9 for the El-Centro earthquake and in Figs 3.10 and 3.11 for the Lomaprieta earthquake. The inset views show the finer details and the fluctuations that take place over a very short period of time; the statistical fluctuations die out once the parameters are identified and the standard deviation becomes zero. Various traditional resampling schemes are used to resample the distribution after each iteration of the algorithm. The robustness of the algorithm is clearly depicted in Fig 3.13 and Fig 3.12, which show the mode shapes of the identified and original systems and the estimated and original states for the El-Centro earthquake using the Bootstrap filter. The comparative study in Tables 3.4 and 3.5 suggests that the systematic and stratified resampling algorithms give better estimates of the stiffness values, whereas the other two schemes, wheel and simple resampling, converge at a much faster rate.

A sensitivity analysis for different signal-to-noise ratios (SNR) has been carried out for both the El-Centro and Lomaprieta earthquakes; the results show that the Bootstrap filter is very robust even for low SNR values. The response of the model with added noise is shown in Fig 3.14, and Table 3.7 shows that the identified parameters are unaffected by the additional noise. Table 3.6 shows the effect of increasing the number of particles from 100 to 1000 on the ratio of identified to original values; the computational time also increases, and it cannot be claimed with full confidence that the increase in particles leads to better parameter estimates, although with more samples the probability of obtaining near-accurate values increases.

3.1.2 BRNS Building

This section implements the SIS, SIR and Bootstrap filters to identify the parameters of the BRNS building subjected to multi-component earthquake ground excitations. The response of the building has been measured in both directions (x and y) by sensors placed at the first and top storeys; Fig 3.2 shows the plan and elevation of the BRNS building. Figs 3.15 and 3.16 show the x and y components of the ground motion
excitations recorded on 3rd and 21st September 2009, and the corresponding multi-component responses of the BRNS building are shown in Fig 3.17 and Fig 3.18 for the two measurement dates. The parameters identified are the stiffness values in both directions at each storey level as well as the coefficients α and β of the damping matrix given in Eq 3.9, a total of ten values. The mass and stiffness matrices of the BRNS building are given in Eqs 3.13 and 3.14:

M = \mathrm{diag}(m_1, m_2, m_3, m_4, m_5, m_6, m_7, m_8)    (3.13)

K = \begin{bmatrix}
k_1 + k_3 & 0 & -k_3 & 0 & 0 & 0 & 0 & 0 \\
0 & k_2 + k_4 & 0 & -k_4 & 0 & 0 & 0 & 0 \\
-k_3 & 0 & k_3 + k_5 & 0 & -k_5 & 0 & 0 & 0 \\
0 & -k_4 & 0 & k_4 + k_6 & 0 & -k_6 & 0 & 0 \\
0 & 0 & -k_5 & 0 & k_5 + k_7 & 0 & -k_7 & 0 \\
0 & 0 & 0 & -k_6 & 0 & k_6 + k_8 & 0 & -k_8 \\
0 & 0 & 0 & 0 & -k_7 & 0 & k_7 & 0 \\
0 & 0 & 0 & 0 & 0 & -k_8 & 0 & k_8
\end{bmatrix}    (3.14)

where m_1, m_2, \dots, m_8 are the lumped masses in the x and y directions at each storey level, k_1, k_3, k_5, k_7 are the storey stiffnesses in the x direction and k_2, k_4, k_6, k_8 the storey stiffnesses in the y direction. The damping matrix is again

C = αM + βK    (3.15)

The original values of the parameters of the building are given in Table 3.8. The algorithm for solving the inverse problem remains the same: a pool of random particles is generated at time t = 0, and the particle weights are evaluated through the likelihood function, modelled as a normal distribution. Table 3.9 lists the natural frequencies identified in all 8 modes using the SIS, SIR and Bootstrap filters. It can be seen that all
three algorithms perform equally well, with the identified frequencies more or less the same in all eight modes. The resampling algorithms have also been compared on the basis of the identified natural frequencies: Table 3.10 gives the frequencies identified in all the modes using the traditional resampling algorithms, and the corresponding percentage errors are given in Table 3.11. The error levels for the stratified and systematic resampling schemes are lower than those for multinomial and wheel resampling, which is consistent with the synthetic experiment of the previous section, where systematic and stratified resampling also showed superior performance. The results for the field measurements recorded on 21/09/2009 have been plotted: Figs 3.19 and 3.20 show the identified parameter values and the convergence of the stiffness values using the Bootstrap filter. The first four mode shapes of the BRNS building are shown in Fig 3.21; the identified mode shapes are very close to the original mode shapes of the building. The original and estimated states of the first and top storeys are shown in Fig 3.22, and the states match closely for the fundamental mode of vibration of the BRNS building.
Figure 3.1: Plan and Elevation of Synthetic Model
Figure 3.2: Plan and Elevation of BRNS Building
Figure 3.3: Ground motion excitations a) Elcentro, b) Lomaprieta, c) Chichi, d) Kobe and e) Parkfield

Figure 3.4: Response of model: Lomaprieta earthquake (first, second and third floor)
Figure 3.5: Response of model: Elcentro earthquake (first, second and third floor)

Figure 3.6: Ratio of identified to original parameters, SIS filter: Elcentro earthquake
Figure 3.7: Ratio of identified to original parameters, SIR filter: Elcentro earthquake

Figure 3.8: Ratio of identified stiffness to original stiffness: Elcentro earthquake (wheel, systematic, stratified and multinomial resampling)
Figure 3.9: Standard deviation of stiffness: Elcentro earthquake

Figure 3.10: Ratio of identified stiffness to original stiffness: Lomaprieta earthquake
Figure 3.11: Standard deviation of stiffness: Lomaprieta earthquake

Figure 3.12: Original and estimated states of model: El-Centro earthquake
Figure 3.13: Mode shapes of the original and identified structure

Figure 3.14: Response of the synthetic model due to addition of noise for the El-Centro and Lomaprieta earthquakes
Figure 3.15: Ground excitation due to the recorded earthquake of 03/09/2009 (x and y components)

Figure 3.16: Ground excitation due to the recorded earthquake of 21/09/2009 (x and y components)
Figure 3.17: Response of BRNS building to the multicomponent earthquake recorded on 03/09/2009

Figure 3.18: Response of BRNS building to the multicomponent earthquake recorded on 21/09/2009
Figure 3.19: Ratio of identified stiffness to original stiffness at all the floor levels
Figure 3.20: Coefficients α and β and the convergence of the damping coefficients

Figure 3.21: First four true modes and estimated modes of BRNS building

Table 3.1: Parameter values for solving the forward problem

Mass (kg)   Stiffness (N/m)   Damping (N-s/m)   Natural Frequency (Hz)
15.2        41987             19.032            4.1565
15.2        76842             34.173            12.8093
15.2        74812             33.630            19.8511
Figure 3.22: Original and estimated states of BRNS building: a) first storey x direction, b) first storey y direction, c) top storey x direction and d) top storey y direction

Table 3.2: Comparison of SIS, SIR and Bootstrap filters on the basis of the identified frequencies in the first three modes

Identified Natural Frequency (Hz)
Earthquake     SIS (f1, f2, f3)           SIR (f1, f2, f3)           Bootstrap (f1, f2, f3)
Chichi         4.162, 11.54, 19.32        4.162, 11.54, 19.32        4.162, 11.54, 19.32
El-Centro      4.159, 12.721, 19.371      4.159, 12.721, 19.371      4.159, 12.721, 19.371
Kobe           4.162, 12.381, 18.29       4.152, 12.215, 18.04       4.178, 12.656, 20.11
Lomaprieta     4.112, 12.083, 19.717      4.112, 12.083, 19.717      4.112, 12.083, 19.717
Parkfield      4.157, 11.282, 18.672      4.157, 11.282, 18.672      4.157, 11.282, 18.672

Table 3.3: Comparison of SIS, SIR and Bootstrap filters on the basis of the number of convergence steps

Earthquake     SIS     SIR     BF
Chichi         4696    4694    4581
El-Centro      178     177     177
Kobe           695     679     269
Lomaprieta     298     295     280
Parkfield      296     296     262
Table 3.4: Comparison of different resampling algorithms on the basis of the identified values of natural frequency

Earthquake    Resampling    Identified Natural Frequency (Hz)        % Error
                            f1       f2       f3                     f1        f2        f3
Chichi        Multinomial   4.916    13.844   21.065                 18.270    8.076     6.115
              Wheel         3.530    11.702   18.536                 -15.085   -8.645    -6.623
              Stratified    4.199    10.992   18.053                 1.020     -14.184   -9.056
              Systematic    4.199    10.992   18.053                 1.020     -14.184   -9.056
El-Centro     Multinomial   4.400    12.804   19.454                 5.847     -0.039    -2.002
              Wheel         4.280    13.157   19.993                 2.978     2.718     0.716
              Stratified    4.187    12.669   19.748                 0.729     -1.093    -0.520
              Systematic    4.187    12.669   19.748                 0.729     -1.093    -0.520
Kobe          Multinomial   3.715    10.486   15.889                 -10.634   -18.138   -19.958
              Wheel         3.364    10.433   15.888                 -19.057   -18.554   -19.963
              Stratified    4.187    12.669   19.748                 0.729     -1.093    -0.520
              Systematic    4.187    12.669   19.748                 0.729     -1.093    -0.520
Lomaprieta    Multinomial   4.529    13.746   17.980                 8.953     7.309     -9.423
              Wheel         4.081    12.984   20.307                 -1.824    1.362     2.296
              Stratified    4.203    12.863   19.496                 1.110     0.417     -1.788
              Systematic    4.187    12.669   19.748                 0.729     -1.093    -0.520
Parkfield     Multinomial   3.896    11.118   16.070                 -6.277    -13.204   -19.046
              Wheel         4.687    12.965   18.229                 12.768    1.214     -8.173
              Stratified    4.187    12.669   19.748                 0.729     -1.093    -0.520
              Systematic    4.187    12.669   19.748                 0.729     -1.093    -0.520
Table 3.5: Ratio of the identified parameter values to the original values and comparison on the basis of convergence steps

Earthquake    Resampling    K1     K2     K3     C1     C2     C3     Steps
Chichi        Multinomial   1.56   1.16   1.02   2.45   0.85   0.31   300
              Wheel         0.67   0.87   0.89   1.14   0.26   0.97   500
              Stratified    1.14   0.93   0.58   1.94   0.90   0.17   5000
              Systematic    1.14   0.93   0.58   1.94   0.90   0.17   5000
El-Centro     Multinomial   1.21   0.96   0.92   0.33   1.30   0.19   150
              Wheel         1.10   0.94   1.10   2.13   0.55   1.06   150
              Stratified    1.02   1.02   0.94   0.79   0.55   1.34   200
              Systematic    1.02   1.02   0.94   0.79   0.55   1.34   200
Kobe          Multinomial   0.89   0.65   0.59   2.48   1.15   0.63   300
              Wheel         0.68   0.58   0.70   1.57   1.12   0.70   300
              Stratified    1.02   1.02   0.94   0.79   0.55   1.34   350
              Systematic    1.02   1.02   0.94   0.79   0.55   1.34   350
Lomaprieta    Multinomial   1.96   0.57   1.00   0.52   1.03   0.85   150
              Wheel         0.93   1.04   1.06   1.72   0.61   0.93   150
              Stratified    1.07   0.89   1.04   0.52   1.08   1.30   250
              Systematic    1.02   1.02   0.94   0.79   0.55   1.34   250
Parkfield     Multinomial   1.07   0.61   0.67   1.43   0.09   1.02   200
              Wheel         1.76   0.77   0.81   0.73   0.12   0.51   200
              Stratified    1.02   1.02   0.94   0.79   0.55   1.34   250
              Systematic    1.02   1.02   0.94   0.79   0.55   1.34   250

Table 3.6: Effect of the number of particles on the ratio of identified to original parameter values and on the identified natural frequencies (El-Centro earthquake)

No. of particles   K1     K2     K3     C1     C2     C3     f1     f2      f3
100                1.04   0.90   1.00   0.30   0.78   1.21   4.16   12.72   19.37
500                1.00   0.95   1.01   0.45   0.75   1.02   4.14   12.78   19.66
1000               1.03   0.96   1.01   1.93   1.32   0.70   4.18   12.95   19.86
Table 3.7: Sensitivity analysis due to addition of noise with different SNR (entries are ratios of identified to original values)

Earthquake    SNR        K1      K2      K3      C1      C2      C3
El-Centro     no noise   1.036   0.904   1.004   0.300   0.771   1.205
              0.005      1.036   0.904   1.004   0.300   0.771   1.205
              0.05       1.036   0.904   1.004   0.300   0.771   1.205
              0.1        1.036   0.904   1.004   0.300   0.771   1.205
Lomaprieta    no noise   1.036   0.904   1.004   0.300   0.771   1.205
              0.01       1.036   0.904   1.004   0.300   0.771   1.205
              0.05       1.036   0.904   1.004   0.300   0.771   1.205
              0.1        1.036   0.904   1.004   0.300   0.771   1.205

Table 3.8: Original parameters of the BRNS building

Mass (kg)    Stiffness (N/m)   Natural Frequency (Hz)
27636.97     130215257.8       4.7474
27636.97     198788000.6       5.8439
25618.62     230377923.8       13.1977
25618.62     344149172         16.2495
25618.62     230377923.7       19.9821
25618.62     344149172         24.5701
17805.65     130215257.9       27.0526
17805.65     198788001         33.1045
Table 3.9: Comparison of SIS, SIR and Bootstrap filters on the basis of the identified natural frequencies in eight modes

Recording     Mode   Identified Natural Frequency (Hz)         % Error
Date                 SIS       SIR       BF                    SIS      SIR      BF
3/9/2009      f1     4.744     4.744     4.7536                -0.072   -0.072   0.131
              f2     5.7817    5.7817    5.8042                -1.064   -1.064   -0.679
              f3     13.1682   13.1682   13.1659               -0.224   -0.224   -0.241
              f4     16.0314   16.0314   16.0607               -1.342   -1.342   -1.162
              f5     19.9516   19.9516   19.9316               -0.153   -0.153   -0.253
              f6     24.3253   24.3253   24.4062               -0.996   -0.996   -0.667
              f7     27.0414   27.0414   27.0667               -0.041   -0.041   0.052
              f8     33.2141   33.2141   33.3639               0.331    0.331    0.784
21/09/2009    f1     4.744     4.744     4.7392                -0.076   -0.076   -0.173
              f2     5.748     5.748     5.8485                -1.648   -1.648   0.079
              f3     13.163    13.163    13.0686               -0.267   -0.267   -0.978
              f4     16.027    16.027    16.3301               -1.371   -1.371   0.496
              f5     19.955    19.955    19.7618               -0.138   -0.138   -1.102
              f6     24.260    24.260    24.9543               -1.260   -1.260   1.564
              f7     27.025    27.025    26.6724               -0.103   -0.103   -1.405
              f8     33.068    33.068    34.3376               -0.112   -0.112   3.725

Table 3.10: Identified frequencies of the BRNS building in all eight modes using all the resampling algorithms

Recording     Mode   Multinomial   Wheel     Stratified   Systematic
Date
3/9/2009      f1     4.8043        4.6294    4.7494       4.7465
              f2     5.7172        5.9018    5.8288       5.7995
              f3     13.2598       13.2777   13.1368      13.1889
              f4     15.8172       15.8525   16.1078      16.0658
              f5     20.0562       20.1657   19.9225      19.9894
              f6     24.748        24.4544   24.5621      24.3583
              f7     27.2657       27.0644   27.0203      27.0641
              f8     33.1886       33.5388   33.8024      33.3197
21/09/2009    f1     4.7871        4.7193    4.7781       4.7474
              f2     5.6735        5.6726    5.8349       5.8425
              f3     13.3091       13.1979   13.3176      13.1333
              f4     15.8104       16.2785   16.0635      16.2395
              f5     20.1184       19.9776   20.1428      19.8837
              f6     23.4591       24.1307   24.5139      24.7591
              f7     26.9039       26.646    26.8554      26.7694
              f8     32.2265       33.2485   33.7687      34.1282
Table 3.11: Comparison of resampling algorithms on the basis of the % error in the identified natural frequencies

Recording     Mode   Multinomial   Wheel    Stratified   Systematic
Date
3/9/2009      f1     1.199         -2.486   0.042        -0.019
              f2     -2.168        0.991    -0.258       -0.760
              f3     0.471         0.606    -0.461       -0.067
              f4     -2.660        -2.443   -0.872       -1.130
              f5     0.371         0.919    -0.298       0.037
              f6     0.724         -0.471   -0.033       -0.862
              f7     0.788         0.044    -0.119       0.043
              f8     0.254         1.312    2.108        0.650
21/09/2009    f1     0.836         -0.592   0.647        0.000
              f2     -2.916        -2.931   -0.154       -0.024
              f3     0.844         0.002    0.908        -0.488
              f4     -2.702        0.178    -1.145       -0.062
              f5     0.682         -0.023   0.804        -0.492
              f6     -4.522        -1.788   -0.229       0.769
              f7     -0.550        -1.503   -0.729       -1.047
              f8     -2.652        0.435    2.006        3.092
Chapter 4

Conclusion

4.1 Conclusion

From the example solved in Chapter 2 and the numerical results obtained in Chapter 3 the following conclusions can be drawn.

• The example problem presented in chapter 2 clearly shows the degeneracy problem of the SIS filter, which motivates the use of adaptive particle filters (SIR) or Bootstrap particle filters involving a resampling step. Resampling, however, leads to sample impoverishment, and to restore diversity among the samples a small noise was added to the particles. This is one way of reducing sample impoverishment, but it becomes difficult when the dynamic system becomes complex. A related shortcoming of the present study is that the algorithm is highly dependent on the sample values generated at t = 0; the problem would become trivial if one were able to sample particles at every step from the updated posterior distribution.

• The algorithm gives fairly good results, with the natural frequencies and mode shapes of the identified systems close to those of the original systems for both synthetic and field data. The advantage of the particle based approach is the robustness of the algorithm, which is able to pick the best value among the samples generated at t = 0. All three algorithms, SIS, SIR and BF, give almost identical results when the same pool of simulated particles is used, although the identified values differ slightly for the field data. The sensitivity analysis for the laboratory model shows that the BF is robust enough to converge to accurate values
even when SNR values as low as 0.005 are incorporated. The results become more accurate as the number of particles is increased; theoretically, as N goes to infinity the approximated posterior becomes exact, but this again comes at the cost of computation.

• The comparative study of the traditional resampling algorithms clearly suggests that stratified and systematic resampling give better estimates than multinomial and wheel resampling, for both the synthetic model and the BRNS building. However, the number of time steps required is larger for the systematic and stratified schemes. This is acceptable for an offline case but may become an issue when implementing the algorithm in an online health monitoring system.

• The performance of all the algorithms suggests that they work well for both synthetic and field data. Modelling uncertainties, however, can never be ruled out, and the technique is still not foolproof for implementation on an arbitrary structure; the building geometry and the number of unknown parameters play an important role in the performance of the algorithm.

4.2 Future work

Based on the current work, the following future work is planned:

• Improve the current algorithm by implementing the Metropolis-Hastings algorithm, so as to overcome the problem of sample impoverishment in a more general and better way.

• Apply the present methods to a base isolated building with a Bouc-Wen hysteretic damping system at each floor level. The present work has successfully identified the natural frequencies of a fixed base structure, which is an encouragement to extend similar studies in future work.

• Test the algorithm on large scale, real life structures such as bridges. One such bridge is the railway overhead bridge connecting to the main campus of IIT Guwahati; the data have been recorded, and the bridge is being modelled in free vibration mode in a professional finite element software package.
Bibliography

M.S. Arulampalam, S. Maskell, N. Gordon, and T. Clapp. A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Transactions on Signal Processing, 50(2):174-188, Feb 2002.

C. Bao, H. Hao, Z.X. Li, and X.F. Zhu. Time-varying system identification using a newly improved HHT algorithm. Computers and Structures, 87:1611-1623, 2009.

T.R. Bayes. An essay towards solving a problem in the doctrine of chances. Philosophical Transactions of the Royal Society of London, 53:370-418, 1763.

J.V. Candy. Bootstrap particle filtering. IEEE Signal Processing Magazine, 24(4):73-85, July 2007.

Z. Chen. Bayesian filtering: From Kalman filters to particle filters, and beyond. Manuscript, 2003.

J. Ching, J.L. Beck, and K.A. Porter. Bayesian state and parameter estimation of uncertain dynamical systems. Probabilistic Engineering Mechanics, 21(1):81-96, 2006.

A.K. Chopra. Dynamics of Structures. Pearson, third edition, 2007.

G. Franco, R. Betti, and H. Lu. Identification of structural systems using an evolutionary strategy. Journal of Engineering Mechanics, 130(10):1125-1139, 2004.

R. Ghanem and M. Shinozuka. Structural-system identification I and II: Theory. Journal of Engineering Mechanics, ASCE, 121(2):265-273, 1995.

S. Ghosh, C.S. Manohar, and D. Roy. Sequential importance sampling filters with a new proposal distribution for parameter identification of structural systems. In Proceedings of the Royal Society of London, Series A, volume 464, pages 25-47, 2008.
N.J. Gordon, D.J. Salmond, and A.F.M. Smith. Novel approach to nonlinear/non-Gaussian Bayesian state estimation. IEE Proceedings F: Radar and Signal Processing, 140(2):107-113, Apr 1993.

Y.C. Ho and R.C.K. Lee. A Bayesian approach to problems in stochastic estimation and control. IEEE Transactions on Automatic Control, 9:333-339, Oct 1964.

M. Hoshiya and E. Saito. Structural identification by extended Kalman filter. Journal of Engineering Mechanics, 110(12):1757-1770, 1984.

G. Housner, L. Bergman, T. Caughey, A. Chassiakos, R. Claus, S. Masri, R. Skelton, T. Soong, B. Spencer, and J. Yao. Structural control: Past, present, and future. Journal of Engineering Mechanics, 123(9):897-971, 1997.

R.E. Kalman. A new approach to linear filtering and prediction problems. Journal of Basic Engineering, ASME, 82:35-45, 1960.

J.T. Kim and N. Stubbs. Improved damage identification method based on modal information. Journal of Sound and Vibration, 252(2):223-238, 2002.

U. Lee and J. Shin. A frequency response function-based structural damage identification method. Computers and Structures, 80(2):117-132, 2002.

T. Li. Resampling methods for particle filtering. Manuscript, 2013.

K. Liew and Q. Wang. Application of wavelet theory for crack identification in structures. Journal of Engineering Mechanics, 124(2):152-157, 1998.

C.S. Manohar and D. Roy. Monte Carlo filters for identification of nonlinear structural dynamical systems. Sadhana, 31(4):399-427, 2006.

B. Moaveni, X. He, J. Conte, J. Restrepo, and M. Panagiotou. System identification study of a 7-story full-scale building slice tested on the UCSD-NEES shake table. Journal of Structural Engineering, 137(6):705-717, 2011.

V. Namdeo and C.S. Manohar. Nonlinear structural dynamical system identification using adaptive particle filters. Journal of Sound and Vibration, 306:524-563, 2007.

H.A. Nasrellah and C.S. Manohar. Particle filters for structural system identification using multiple test and sensor data: A combined computational and experimental study. Structural Control and Health Monitoring, 18(1):99-120, 2011.
  • 72.
N.M. Newmark. A method of computation for structural dynamics. Journal of the Engineering Mechanics Division, 85(3):67-94, July 1959.
A. Raich and T. Liszkai. Improving the performance of structural damage detection methods using advanced genetic algorithms. Journal of Structural Engineering, 133(3):449-461, 2007.
R. Rangaraj. Identification of fatigue cracks in vibrating beams using particle filtering algorithm. Master's thesis, Indian Institute of Technology Madras, September 2012.
R. Sajeeb, C.S. Manohar, and D. Roy. Control of nonlinear structural dynamical systems with noise using particle filters. Journal of Sound and Vibration, 306:111-135, 2007.
T. Soderstrom. System Identification. Prentice Hall International, 2001.
J. Spragins. A note on the iterative application of Bayes' rule. IEEE Transactions on Information Theory, 11(4):544-549, 1965.
S. Thrun. Particle filters in robotics (invited talk). In Proceedings of the Eighteenth Annual Conference on Uncertainty in Artificial Intelligence (UAI-02), pages 511-518, San Francisco, CA, 2002. Morgan Kaufmann.
Appendix A

MATLAB code for parameter estimation of SDOF oscillator using SIS filter

This program finds out the stiffness parameter of the SDOF oscillator using the SIS filter. The code is for the example problem solved in Chapter 2.

% =========================================================================
% PROGRAMMER : ANSHUL GOYAL
% DATE       : 02.01.2014 (Last modified: 25.01.2014)
% ABSTRACT   : Stiffness parameter identification of SDOF oscillator using
%              Sequential Importance Sampling
% =========================================================================
clear all
close all
clc
% *************************************************************************
% Input Section:
% ==============
m = 40;
c = 15;
k = 60000;
k_or = 60000;
N = 150;          % Number of particles
x_R = 0.001;      % Error covariance
% *************************************************************************
% Forward Problem for Synthetic Measurements:
% ===========================================
% load El_Centro_EW.dat
% exct = El_Centro_EW;
% time = El_Centro_EW(:,1);
% plot(exct(:,1),exct(:,2));
% xlabel('T (s)'); ylabel('$\ddot{X}_{g} (m/s^2)$','interpreter','latex')
% Inc = zeros(1,3);
% [U,Ud,Udd] = Newmark_Beta_MDOF(m,k,c,Inc,exct);
% response = [time Udd'];
% save sis_measurement.dat -ascii response
% *************************************************************************
% Inverse Problem for System Identification using SIS Filter:
% ===========================================================
load sis_measurement.dat
load El_Centro_EW.dat
exct = El_Centro_EW;
time = sis_measurement(:,1);
t = length(time);
acc = sis_measurement(:,2);
k_inv = 10000 + (80000).*rand(N,1);   % particles from uniform dist.
sorted = sort(k_inv);                 % sorted value of particles
k = 1;
for i = 1:N
    w(i,k) = 1/N;
end
wk(:,k) = w(:,k)./sum(w(:,k));
Inc_prior(:,:,N) = zeros(1,3);
for k = 2:t
    k
    for i = 1:N
        [Inc_update] = Newmark_Beta_MDOF_instant(m,k_inv(i),c,...
            Inc_prior(:,:,i),exct,k,0.5,1/6);
        C(:,:,i) = Inc_update;
        % Estimate Likelihood of Simulation:
        % ==================================
        w(i,k) = wk(i,k-1)*(1/sqrt(2*pi*x_R)) * exp(-((acc(k)...
            - Inc_update(1,3))^2)/(2*(x_R)));
    end
    % Updating the particle weights:
    % ==============================
    wk(:,k) = w(:,k)./sum(w(:,k));
    Inc_prior = C;
    k_estimate = 0;
    % Estimating the parameter value:
    % ===============================
    for i = 1:N
        k_estimate = k_estimate + wk(i,k)*k_inv(i);
    end
    k_iden(k-1) = k_estimate/k_or;
end
% *************************************************************************
% Plots
% =====
% plot(time(2:t),k_iden);
% xlabel('T (s)'); ylabel('K (kN/m)');
% plot(time(1:200),wk(30,1:200),'*r',time(1:200),wk(9,1:200),...
%     '*g',time(1:200),wk(2,1:200),'*b',time(1:200),wk(4,1:200),'*k')
% xlabel('T (s)'); ylabel('weights')
% legend('6.02E4','6.77E4','6.45E4','1.57E4')
% [ksort,index] = sort(k_inv);
% for i = 1:4:200
%     tp = ((i-1)*0.01)*ones(50,1);
%     plot3(tp,ksort,wk(index,i),'b')
%     xlabel('T (s)'); ylabel('K (N/m)'); zlabel('weights');
%     hold on
% end
% hold on
% for i = 1:4:200
%     plot3(time(i),k_inv(30),wk(30,i),'.r','Markersize',14)
%     hold on
% end
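Because the SIS listing above never resamples, the normalized weights wk can collapse onto a few particles as time progresses. A minimal sketch of how this degeneracy could be monitored inside the time loop is given below; it reuses wk, k and N from the listing, and the 20% threshold is only an illustrative choice borrowed from the SIR code later in this appendix.

% Illustrative sketch (not part of the original listing): monitor the
% effective sample size at time step k using the normalized weights wk.
Neff = 1/sum(wk(:,k).^2);   % effective number of particles
Nt   = 0.2*N;               % illustrative threshold: 20% of the particles
if Neff < Nt
    disp('Weight degeneracy detected - consider resampling (see SIR code).')
end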
MATLAB code for parameter identification of synthetic model using Sequential Importance Sampling (SIS) filter

This code solves the parameter identification for the three storied shear building synthetic model using the SIS filter.

% =========================================================================
% PROGRAMMER : ANSHUL GOYAL
% DATE       : 02.01.2014 (Last modified: 25.01.2014)
% ABSTRACT   : Sequential Importance Sampling (SIS) filter code for
%              parameter estimation of laboratory model
% =========================================================================
clear all
close all
clc
% *************************************************************************
% Input Section:
% ==============
m = [15.2 15.2 15.2];
c = [19.032 34.173 33.63];
k = [41987 76842 74812];
N = 100;
x_R = 0.001;
% *************************************************************************
% Forward Problem for Synthetic Measurements:
% ===========================================
% load Elcentro_X.dat
% exct = Elcentro_X;
% time = Elcentro_X(:,1);
% Inc = zeros(3,3);
% [M_mat,K_mat,C_mat] = LTI_System_Matrices(m,k,c);
% [U,Ud,Udd] = Newmark_Beta_MDOF(M_mat,K_mat,C_mat,Inc,exct);
% response = [time Udd'];
% save resp_elcentro.dat -ascii response
% *************************************************************************
% Eigen Analysis of laboratory model:
% ===================================
[Phi,D] = eig(K_mat,M_mat);
wn_or = sqrt(diag(D))/(2*pi);   % Natural frequency in Hz
% *************************************************************************
% Inverse Problem for System Identification using SIS Filter:
% ===========================================================
load resp_elcentro.dat
load Elcentro_X.dat
load test_stiffness_3dof_elcentro.dat   % data file containing 100 samples
rand_sample = test_stiffness_3dof_elcentro;
exct = Elcentro_X;
time = resp_elcentro(:,1);
acc1 = resp_elcentro(:,2);
Inc = zeros(3,3);
% Simulating particles
% ====================
% k1 = 10000 + (80000).*rand(N,1);
% k2 = 10000 + (80000).*rand(N,1);
% k3 = 10000 + (80000).*rand(N,1);
% c1 = 50.*rand(N,1);
% c2 = 50.*rand(N,1);
% c3 = 50.*rand(N,1);
% Defining initial weights
% ========================
q = 1;
for i = 1:N
    w1(i,q) = 1/N;
    w2(i,q) = 1/N;
    w3(i,q) = 1/N;
end
wk(:,q) = w1(:,q)./sum(w1(:,q));
Inc_prior(:,:,N) = zeros(3,3);
for q = 2:length(time)
    q
    for ii = 1:N
        m = [15.2 15.2 15.2];
        k = [rand_sample(ii,1) rand_sample(ii,2) rand_sample(ii,3)];
        c = [rand_sample(ii,4) rand_sample(ii,5) rand_sample(ii,6)];
        [M_mat,K_mat,C_mat] = LTI_System_Matrices(m,k,c);
        [Inc_update] = Newmark_Beta_MDOF_instant(M_mat,K_mat,C_mat,...
            Inc_prior(:,:,ii),exct,q,0.5,1/6);
        C(:,:,ii) = Inc_update;
        % Estimate Likelihood of Simulation:
        % ==================================
        w1(ii,q) = wk(ii,q-1)*(1/sqrt(2*pi*x_R)) * exp(-((acc1(q) -...
            Inc_update(1,3))^2)/(2*(x_R)));
    end
    % Updating the particle weights:
    % ==============================
    w = w1;
    Inc_prior = C;
    wk(:,q) = w(:,q)./sum(w(:,q));
    % Estimating and storing the parameter values:
    % ============================================
    k1_iden = 0; k2_iden = 0; k3_iden = 0;
    c1_iden = 0; c2_iden = 0; c3_iden = 0;
    for ii = 1:N
        k1_iden = k1_iden + wk(ii,q)*rand_sample(ii,1);
        k2_iden = k2_iden + wk(ii,q)*rand_sample(ii,2);
        k3_iden = k3_iden + wk(ii,q)*rand_sample(ii,3);
        c1_iden = c1_iden + wk(ii,q)*rand_sample(ii,4);
        c2_iden = c2_iden + wk(ii,q)*rand_sample(ii,5);
        c3_iden = c3_iden + wk(ii,q)*rand_sample(ii,6);
    end
    k11_iden(q-1) = k1_iden;
    k22_iden(q-1) = k2_iden;
    k33_iden(q-1) = k3_iden;
    c11_iden(q-1) = c1_iden;
    c22_iden(q-1) = c2_iden;
    c33_iden(q-1) = c3_iden;
end
% Plotting the results
% ====================
% subplot(2,3,1)
% plot(time(2:4001),k11_iden/k(1),'b')
% xlabel('t (s)'); ylabel('mu_{k1}_{iden}/mu_{k1}_{org}');
% subplot(2,3,2)
% plot(time(2:4001),k22_iden/k(2),'b')
% xlabel('t (s)'); ylabel('mu_{k2}_{iden}/mu_{k2}_{org}');
% subplot(2,3,3)
% plot(time(2:4001),k33_iden/k(3),'b')
% xlabel('t (s)'); ylabel('mu_{k3}_{iden}/mu_{k3}_{org}');
% subplot(2,3,4)
% plot(time(2:4001),c11_iden/c(1),'b')
% xlabel('t (s)'); ylabel('mu_{c1}_{iden}/mu_{c1}_{org}');
% subplot(2,3,5)
% plot(time(2:4001),c22_iden/c(2),'b')
% xlabel('t (s)'); ylabel('mu_{c2}_{iden}/mu_{c2}_{org}');
% subplot(2,3,6)
% plot(time(2:4001),c33_iden/c(3),'b')
% xlabel('t (s)'); ylabel('mu_{c3}_{iden}/mu_{c3}_{org}');

MATLAB code for parameter identification of synthetic model using Sequential Importance Re-Sampling (SIR) filter

This code identifies the system parameters of the three storied shear building synthetic model using the SIR filter.

% =========================================================================
% PROGRAMMER : ANSHUL GOYAL
% DATE       : 02.01.2014 (Last modified: 25.01.2014)
% ABSTRACT   : Sequential Importance Re-Sampling (SIR) filter / Adaptive
%              Particle Filter code for parameter estimation of
%              laboratory model
% =========================================================================
clear all
close all
clc
% *************************************************************************
% Input Section:
% ==============
m = [15.2 15.2 15.2];
c = [19.032 34.173 33.63];
k = [41987 76842 74812];
N = 100;
x_R = 0.001;
% *************************************************************************
% Forward Problem for Synthetic Measurements:
% ===========================================
% load ChiChi_X.dat
% exct = ChiChi_X;
% time = ChiChi_X(:,1);
% Inc = zeros(3,3);
% [M_mat,K_mat,C_mat] = LTI_System_Matrices(m,k,c);
% [U,Ud,Udd] = Newmark_Beta_MDOF(M_mat,K_mat,C_mat,Inc,exct);
% response = [time Udd'];
% save resp_ChiChi.dat -ascii response
% *************************************************************************
% Eigen Analysis of laboratory model:
% ===================================
[Phi,D] = eig(K_mat,M_mat);
wn_or = sqrt(diag(D))/(2*pi);   % Natural frequency in Hz
% *************************************************************************
% Inverse Problem for System Identification using SIR Filter:
% ===========================================================
tm = cputime;
load resp_ChiChi.dat
load ChiChi_X.dat
load test_stiffness_3dof_ChiChi.dat   % data file containing 100 samples
rand_sample = test_stiffness_3dof_ChiChi;
exct = ChiChi_X;
time = resp_ChiChi(:,1);
acc1 = resp_ChiChi(:,2);
acc2 = resp_ChiChi(:,3);
acc3 = resp_ChiChi(:,4);
Inc = zeros(3,3);
% Simulating particles
% ====================
% k1 = 10000 + (80000).*rand(N,1);
% k2 = 10000 + (80000).*rand(N,1);
% k3 = 10000 + (80000).*rand(N,1);
% c1 = 50.*rand(N,1);
% c2 = 50.*rand(N,1);
% c3 = 50.*rand(N,1);
% Defining initial weights
% ========================
q = 1;
for i = 1:N
    w1(i,q) = 1/N;
    w2(i,q) = 1/N;
    w3(i,q) = 1/N;
end
wk(:,q) = w1(:,q)./sum(w1(:,q));
Inc_prior(:,:,N) = zeros(3,3);
for q = 2:500
    q
    for ii = 1:N
        m = [15.2 15.2 15.2];
        k = [rand_sample(ii,1) rand_sample(ii,2) rand_sample(ii,3)];
        c = [rand_sample(ii,4) rand_sample(ii,5) rand_sample(ii,6)];
        [M_mat,K_mat,C_mat] = LTI_System_Matrices(m,k,c);
        [Inc_update] = Newmark_Beta_MDOF_instant(M_mat,K_mat,C_mat,...
            Inc_prior(:,:,ii),exct,q,0.5,1/6);
        C(:,:,ii) = Inc_update;
        % Estimate Likelihood of Simulation:
        % ==================================
        w1(ii,q) = wk(ii,q-1)*(1/sqrt(2*pi*x_R)) * exp(-((acc1(q) -...
            Inc_update(1,3))^2)/(2*(x_R)))*(1/sqrt(2*pi*x_R)) *...
            exp(-((acc2(q) - Inc_update(2,3))^2)/(2*(x_R)))...
            *(1/sqrt(2*pi*x_R))* exp(-((acc3(q) - Inc_update(3,3))^2)...
            /(2*(x_R)));
    end
    % Updating the particle weights:
    % ==============================
    w = w1;
    wk(:,q) = w(:,q)./sum(w(:,q));
    Inc_prior = C;
    Neff = 1/sum(wk(:,q).^2);
    resample_percentage = 0.2;
    Nt = resample_percentage*N;
    % Calculating Neff and threshold criteria:
    % ========================================
    if Neff < Nt
        Ind = 1;
        % Resampling step : Adaptive control
        % ==================================
        disp('Resampling ...')
        [rand_sample,index] = Resampling(rand_sample,wk(:,q)',Ind);
        Inc_prior = C(:,:,index);
        for i = 1:N
            wk(i,q) = 1/N;
        end
    end
    % Estimating and storing the parameter values:
    % ============================================
    k1_iden = 0; k2_iden = 0; k3_iden = 0;
    c1_iden = 0; c2_iden = 0; c3_iden = 0;
    for ii = 1:N
        k1_iden = k1_iden + wk(ii,q)*rand_sample(ii,1);
        k2_iden = k2_iden + wk(ii,q)*rand_sample(ii,2);
        k3_iden = k3_iden + wk(ii,q)*rand_sample(ii,3);
        c1_iden = c1_iden + wk(ii,q)*rand_sample(ii,4);
        c2_iden = c2_iden + wk(ii,q)*rand_sample(ii,5);
        c3_iden = c3_iden + wk(ii,q)*rand_sample(ii,6);
    end
    k11_iden(q-1) = k1_iden;
    k22_iden(q-1) = k2_iden;
    k33_iden(q-1) = k3_iden;
    c11_iden(q-1) = c1_iden;
    c22_iden(q-1) = c2_iden;
    c33_iden(q-1) = c3_iden;
end
k_inv = [k11_iden(499) k22_iden(499) k33_iden(499)];
m = [15.2 15.2 15.2];
[M_mat,K_mat,C_mat] = LTI_System_Matrices(m,k_inv,c);
[Phi_in,D] = eig(K_mat,M_mat);
wn_inv = sqrt(diag(D))/(2*pi)   % Natural frequency in Hz
cpu_time = cputime-tm
% Plotting the results
% ====================
% subplot(2,3,1)
% plot(time(2:4001),k11_iden/k(1),'b')
% xlabel('t (s)'); ylabel('mu_{k1}_{iden}/mu_{k1}_{org}');
% subplot(2,3,2)
% plot(time(2:4001),k22_iden/k(2),'b')
% xlabel('t (s)'); ylabel('mu_{k2}_{iden}/mu_{k2}_{org}');
% subplot(2,3,3)
% plot(time(2:4001),k33_iden/k(3),'b')
% xlabel('t (s)'); ylabel('mu_{k3}_{iden}/mu_{k3}_{org}');
% subplot(2,3,4)
% plot(time(2:4001),c11_iden/c(1),'b')
% xlabel('t (s)'); ylabel('mu_{c1}_{iden}/mu_{c1}_{org}');
% subplot(2,3,5)
% plot(time(2:4001),c22_iden/c(2),'b')
% xlabel('t (s)'); ylabel('mu_{c2}_{iden}/mu_{c2}_{org}');
% subplot(2,3,6)
% plot(time(2:4001),c33_iden/c(3),'b')
% xlabel('t (s)'); ylabel('mu_{c3}_{iden}/mu_{c3}_{org}');
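As an illustrative check on the SIR run above, the identified natural frequencies wn_inv can be compared with those of the original model wn_or, assuming both vectors have been computed as in the listing. This is only a post-processing sketch and is not part of the original code.

% Illustrative post-processing sketch: percentage error in the identified
% natural frequencies, assuming wn_or and wn_inv (both in Hz) are available.
freq_error = 100*abs(wn_inv - wn_or)./wn_or;
disp('Percentage error in identified natural frequencies (per mode):')
disp(freq_error')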
MATLAB code for parameter identification of synthetic model using Bootstrap filter (BF)

This is the code for the Bootstrap filter to identify the system parameters of the synthetic model.

% =========================================================================
% PROGRAMMER : ANSHUL GOYAL & ARUNASIS CHAKRABORTY
% DATE       : 02.01.2014 (Last modified: 25.01.2014)
% ABSTRACT   : Bootstrap filter for system identification of laboratory
%              model
% =========================================================================
% Input Section:
% ==============
clear all
close all
clc
% =========================================================================
% Original parameters
m = [15.2 15.2 15.2];
c = [19.032 34.173 33.63];
k = [41987 76842 74812];
N_p = 100;        % No. of Particles
x_R = 0.001;      % Error Covariance
% RP = [10000 80000;10000 80000;10000 80000;0 50;0 50;0 50];
Ind = 1;          % Indicator for different resampling strategy
% *************************************************************************
% Forward Problem for Synthetic Measurements:
% ===========================================
load Elcentro_X.dat
t = Elcentro_X(:,1);
xg_t = Elcentro_X(:,2);
exct = [t xg_t];
dof = length(m);
Inc = zeros(dof,2);
%
% [M_mat,K_mat,C_mat] = LTI_System_Matrices(m,k,c);
%
% Eigen Analysis for Modal Parameters:
% ====================================
% [Phi,D] = eig(K_mat,M_mat);
% wn = sqrt(diag(D))/(2*pi)   % Natural frequency in /s
%
% Direct Time Integration for Response:
% =====================================
[U,Ud,Udd] = Newmark_Beta_MDOF(M_mat,K_mat,C_mat,Inc,exct);
% Out_put = [t Udd'];
% snr = 0.005;
% [noisy_out_put] = noisy_output(snr,Out_put,dof);
% save -ascii resp_elcentro.dat Out_put
% save -ascii resp_loma.dat Out_put
% save -ascii noisy_resp_loma0005.dat noisy_out_put
% Plot response Lomaprieta_X accelerations:
% *************************************************************************
% Inverse Problem for System Identification using Bootstrap Filter:
% =================================================================
tm = cputime;
load Elcentro_X.dat
load resp_elcentro.dat
load test_stiffness_3dof_elcentro.dat
t1 = resp_elcentro(:,1);
% subplot(2,1,1)
% plot(t1,resp_elcentro(:,3),'.b',t1,noisy_resp_elcentro0005(:,3),'.r')
% xlabel('T (s)'); ylabel('$\ddot{X}_{t} (m/s^2)$','interpreter','latex');
% legend('No noise','SNR 0.005')
% subplot(2,1,2)
% plot(t,resp_loma(:,3),'.b',t,noisy_resp_loma0005(:,3),'.r')
% xlabel('T (s)'); ylabel('$\ddot{X}_{t} (m/s^2)$','interpreter','latex');
% legend('No noise','SNR 0.005')
nt = length(t);
dof = length(m);
Nu = 2*dof;   % No. of Unknowns
% R_samp = zeros(N_p,Nu);
% for ii = 1:Nu
%     R_samp(:,ii) = random('unif',RP(ii,1),RP(ii,2),N_p,1);
% end
R_samp = test_stiffness_3dof_elcentro;
Mean_Estimate = zeros(nt,Nu);
Std_Estimate = zeros(nt,Nu);
Weights = zeros(N_p,dof);
for ii = 1:Nu
    Mean_Estimate(1,ii) = mean(R_samp(:,ii));
    Std_Estimate(1,ii) = std(R_samp(:,ii));
end
% Bootstrap Algorithm:
% ====================
Inc(:,:,N_p) = zeros(dof,2);
Inc_prior(:,:,N_p) = zeros(3,3);
Ind = 1;
ab = 500
for ii = 2:ab
    ii
    % exct = [t(ii-1:ii,1) xg_t(ii-1:ii,1)];
    exct = [Elcentro_X(:,1),Elcentro_X(:,2)];
    for jj = 1:N_p
        k1 = R_samp(jj,1:dof);
        c1 = R_samp(jj,dof+1:Nu);
        [M_mat,K_mat,C_mat] = LTI_System_Matrices(m,k1,c1);
        [Inc_update] = Newmark_Beta_MDOF_instant(M_mat,K_mat,...
            C_mat,Inc_prior(:,:,jj),exct,ii,0.5,1/6);
        C(:,:,jj) = Inc_update;
        for kk = 1:dof
            Weights(jj,kk) = (1/sqrt(2*pi*x_R))*exp(-...
                ((resp_elcentro(ii,kk+1)-Inc_update(kk,3))^2)/(2*(x_R)));
        end
    end
    Wt = prod(Weights,2);
    weight = (Wt./sum(Wt))';
    % Resampling:
    % ===========
    [R_samp,index] = Resampling(R_samp,weight,Ind);
    Inc_prior = C(:,:,index);
    for kk = 1:Nu
        Mean_Estimate(ii,kk) = mean(R_samp(:,kk));
        Std_Estimate(ii,kk) = std(R_samp(:,kk));
    end
end
% Analysis of Identified System:
% ==============================
k_inv = Mean_Estimate(ab-1,1:dof);
c_inv = Mean_Estimate(ab-1,dof+1:end);
[M_mat,K_mat,C_mat] = LTI_System_Matrices(m,k_inv,c_inv);
[Phi,D] = eig(K_mat,M_mat);
wn = sqrt(diag(D))/(2*pi)   % Natural frequency in Hz
cpu_time = cputime-tm
% *************************************************************************
% End Program.
% *************************************************************************

MATLAB code for identification of BRNS building using Sequential Importance Sampling (SIS) filter

This code estimates all the 10 unknown parameters of the fixed base RC framed BRNS building using the SIS filter.

% =========================================================================
% PROGRAMMER : ANSHUL GOYAL
% DATE       : 02.01.2014 (Last modified: 25.01.2014)
% ABSTRACT   : Sequential Importance Sampling (SIS) filter code for
%              LTI system identification of BRNS building
%              on field measurement (21/09/2009)
% =========================================================================
clear all
close all
clc
% *************************************************************************
% Input Section:
% ==============
m = [27636.9724 27636.9724 25618.62385 25618.62385 25618.62385 ...
    25618.62385 17805.65745 17805.65745];
k = [130215257.8 198788000.6 230377923.8 344149172 230377923.7 ...
    344149172 130215257.9 198788001];
[M_mat,K_mat] = BRNS_FB_Matrices(m,k);
[Phi,D] = eig(K_mat,M_mat);
wn = sqrt(diag(D))/(2*pi);   % Natural frequency in Hz
alf = 0.001;
bta = 0.02;
N_p = 20;       % No. of Particles
x_R = 0.01;     % Error Covariance
RP = [1.2E8 1.4E8;1.8E8 2.0E8;2.2E8 2.4E8;3.0E8 4.0E8;2.2E8 2.4E8;...
    3.0E8 4.0E8;1.2E8 1.4E8;1.8E8 2.0E8;0.0005 0.0015;0.01 0.03];
% *************************************************************************
% Inverse Problem for System Identification using SIS Filter:
% ===========================================================
load msrmt2.dat
Response = [msrmt2(:,1) msrmt2(:,4) msrmt2(:,5) msrmt2(:,6) msrmt2(:,7)];
t = msrmt2(:,1);
nt = length(t);
xg_t = msrmt2(:,2);
yg_t = msrmt2(:,3);
dof = length(m);
Nu = dof+2;     % No. of Unknowns
% Memory allocation:
% ==================
R_samp = zeros(N_p,Nu);
weight = zeros(N_p,nt-1);
w_n = zeros(N_p,nt);
iden_para = zeros(nt-1,Nu);
% Generating initial particles:
% =============================
for ii = 1:Nu
    R_samp(:,ii) = random('unif',RP(ii,1),RP(ii,2),N_p,1);
end
% SIS Algorithm
% =============
Inc(:,:,N_p) = zeros(dof,2);
Ifl = [-1 0;0 -1;-1 0;0 -1;-1 0;0 -1;-1 0;0 -1];
for i = 1:N_p
    w_n(i,1) = 1/N_p;
end
for ii = 2:nt
    ii
    for jj = 1:N_p
        k1 = R_samp(jj,1:dof);
        c1 = R_samp(jj,dof+1:Nu);
        [M_mat,K_mat] = BRNS_FB_Matrices(m,k1);
        C_mat = c1(1).*M_mat + c1(2).*K_mat;
        tt = t(ii-1:ii,1);
        Ft = M_mat*Ifl*[xg_t(ii-1:ii)';yg_t(ii-1:ii)'];
        Incd = Inc(:,:,jj);
        [U,Ud,Udd] = Newmark_Beta_MDOF(M_mat,K_mat,C_mat,Incd,tt,Ft);
        Inc_cond = [U(:,2) Ud(:,2)];
        C(:,:,jj) = Inc_cond;
        % Estimate Likelihood of Simulation and updating weights:
        % =======================================================
        weight(jj,ii) = w_n(jj,ii-1)*(1/sqrt(2*pi*x_R))*exp(-...
            ((Response(ii,2)-Udd(1,2))^2)/(2*(x_R)))*...
            (1/sqrt(2*pi*x_R))*exp(-...
            ((Response(ii,3)-Udd(2,2))^2)/(2*(x_R)))*...
            (1/sqrt(2*pi*x_R))*exp(-...
            ((Response(ii,4)-Udd(7,2))^2)/(2*(x_R)))*...
            (1/sqrt(2*pi*x_R))*exp(-...
            ((Response(ii,5)-Udd(8,2))^2)/(2*(x_R)));
    end
    Inc_prior = C;
    w_n(:,ii) = weight(:,ii)./sum(weight(:,ii));
    % Estimating the parameters:
    % ==========================
    for p = 1:Nu
        iden_para(ii,p) = sum(w_n(:,ii).*R_samp(:,p));
    end
end
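Unlike the SIR and Bootstrap versions that follow, the SIS listing above stops at the weighted parameter estimates. A post-processing step of the same form used in those listings could be appended as sketched below; it reuses iden_para, m, dof and BRNS_FB_Matrices from the code above and is shown only for illustration.

% Illustrative sketch: eigen analysis of the model assembled from the
% parameters identified by the SIS run above (mirrors the SIR/BF listings).
k_inv = iden_para(nt,1:dof);                 % identified storey stiffnesses
[M_mat,K_mat] = BRNS_FB_Matrices(m,k_inv);   % re-assemble system matrices
[Phi_in,D] = eig(K_mat,M_mat);
wn_inv = sqrt(diag(D))/(2*pi)                % identified natural frequencies (Hz)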
MATLAB code for identification of BRNS building using Sequential Importance Re-Sampling (SIR) filter

This code estimates all the 10 unknown parameters of the fixed base RC framed BRNS building using the SIR filter.

% =========================================================================
% PROGRAMMER : ANSHUL GOYAL
% DATE       : 02.01.2014 (Last modified: 25.01.2014)
% ABSTRACT   : Sequential Importance Re-Sampling (SIR) filter code for
%              LTI system identification of BRNS building
%              on field measurement (03/09/2009)
% =========================================================================
clc
clear all
close all
% *************************************************************************
% Input Section:
% ==============
tm = cputime;
m = [27636.9724 27636.9724 25618.62385 25618.62385 25618.62385 ...
    25618.62385 17805.65745 17805.65745];
k = [130215257.8 198788000.6 230377923.8 344149172 230377923.7 ...
    344149172 130215257.9 198788001];
[M_mat,K_mat] = BRNS_FB_Matrices(m,k);
[Phi,D] = eig(K_mat,M_mat);
wn = sqrt(diag(D))/(2*pi);   % Natural frequency in Hz
alf = 0.001;
bta = 0.02;
N_p = 100;      % No. of Particles
x_R = 0.01;     % Error Covariance
% RP = [1.2E8 1.4E8;1.8E8 2.0E8;2.2E8 2.4E8;3.0E8 4.0E8;2.2E8 2.4E8;...
%     3.0E8 4.0E8;1.2E8 1.4E8;1.8E8 2.0E8;0.0005 0.0015;0.01 0.03];
% *************************************************************************
% Inverse Problem for System Identification using SIR Filter:
% ===========================================================
load msrmt1.dat
Response = [msrmt1(:,1) msrmt1(:,4) msrmt1(:,5) msrmt1(:,6) msrmt1(:,7)];
t = msrmt1(:,1);
nt = length(t);
xg_t = msrmt1(:,2);
yg_t = msrmt1(:,3);
dof = length(m);
Nu = dof+2;     % No. of Unknowns
% Memory allocation:
% ==================
% R_samp = zeros(N_p,Nu);
weight = zeros(N_p,nt-1);
w_n = zeros(N_p,nt);
iden_para = zeros(nt-1,Nu);
% Generating initial particles:
% =============================
% for ii = 1:Nu
%     R_samp(:,ii) = random('unif',RP(ii,1),RP(ii,2),N_p,1);
% end
load particles_1brns.dat
R_samp = particles_1brns;
% SIR Algorithm
% =============
Inc(:,:,N_p) = zeros(dof,2);
Ifl = [-1 0;0 -1;-1 0;0 -1;-1 0;0 -1;-1 0;0 -1];
for i = 1:N_p
    w_n(i,1) = 1/N_p;
end
for ii = 2:nt
    ii
    for jj = 1:N_p
        k1 = R_samp(jj,1:dof);
        c1 = R_samp(jj,dof+1:Nu);
        [M_mat,K_mat] = BRNS_FB_Matrices(m,k1);
        C_mat = c1(1).*M_mat + c1(2).*K_mat;
        tt = t(ii-1:ii,1);
        Ft = M_mat*Ifl*[xg_t(ii-1:ii)';yg_t(ii-1:ii)'];
        Incd = Inc(:,:,jj);
        [U,Ud,Udd] = Newmark_Beta_MDOF(M_mat,K_mat,C_mat,Incd,tt,Ft);
        Inc_cond = [U(:,2) Ud(:,2)];
        C(:,:,jj) = Inc_cond;
        % Estimate Likelihood of Simulation and updating weights:
        % =======================================================
        weight(jj,ii) = w_n(jj,ii-1)*(1/sqrt(2*pi*x_R))*exp(-...
            ((Response(ii,2)-Udd(1,2))^2)/(2*(x_R)))*...
            (1/sqrt(2*pi*x_R))*exp(-...
            ((Response(ii,3)-Udd(2,2))^2)/(2*(x_R)))*...
            (1/sqrt(2*pi*x_R))*exp(-...
            ((Response(ii,4)-Udd(7,2))^2)/(2*(x_R)))*...
            (1/sqrt(2*pi*x_R))*exp(-...
            ((Response(ii,5)-Udd(8,2))^2)/(2*(x_R)));
    end
    Inc_prior = C;
    w_n(:,ii) = weight(:,ii)./sum(weight(:,ii));
    Neff = 1/sum(w_n(:,ii).^2);
    resample_percentage = 0.8;
    Nt = resample_percentage*N_p;
    if Neff < Nt
        Ind = 1;
        % Resampling step : Adaptive control
        % ==================================
        disp('Resampling ...')
        [R_samp,index] = Resampling(R_samp,w_n(:,ii)',Ind);
        Inc_prior = C(:,:,index);
        for a = 1:N_p
            w_n(a,ii) = 1/N_p;
        end
    end
    % Estimating the parameters:
    % ==========================
    for p = 1:Nu
        iden_para(ii,p) = sum(w_n(:,ii).*R_samp(:,p));
    end
end
k_inv = iden_para(nt,1:dof);
[M_mat,K_mat] = BRNS_FB_Matrices(m,k_inv);
[Phi_in,D] = eig(K_mat,M_mat);
wn_inv = sqrt(diag(D))/(2*pi)   % Natural frequency in Hz
cpu_time = cputime-tm

MATLAB code for system identification of BRNS Building (Fixed Base) using Bootstrap filter

This code estimates all the 10 unknown parameters of the fixed base RC framed BRNS building using the BF.

% =========================================================================
% PROGRAMMER : ANSHUL GOYAL & ARUNASIS CHAKRABORTY
% DATE       : 02.01.2014 (Last modified: 25.01.2014)
% ABSTRACT   : Bootstrap Particle Filter code for LTI system
%              identification.
% =========================================================================
clear all
close all
clc;
tm = cputime;
% *************************************************************************
% Input Section:
% ==============
m = [27636.9724 27636.9724 25618.62385 25618.62385 25618.62385 ...
    25618.62385 17805.65745 17805.65745];
k = [130215257.8 198788000.6 230377923.8 344149172 230377923.7...
    344149172 130215257.9 198788001];
alf = 0.001;
bta = 0.02;
SNR = 0.01;
N_p = 150;      % No. of Particles
x_R = 0.01;     % Error Covariance
RP = [1.2E8 1.4E8;1.8E8 2.0E8;2.2E8 2.4E8;3.0E8 4.0E8;2.2E8 2.4E8;3.0E8...
    4.0E8;1.2E8 1.4E8;1.8E8 2.0E8;0.0005 0.0015;0.01 0.03];
Ind = 1;        % Indicator for different resampling strategy
% *************************************************************************
% % Forward Problem for Synthetic Measurements:
% % ===========================================
%
% load El_Centro_EW.dat
%
% t = El_Centro_EW(:,1);
% nt = length(t);
% xg_t = El_Centro_EW(:,2);
% yg_t = 0.5.*xg_t;
%
% dof = length(m);
% Inc = zeros(dof,2);
%
% [M_mat,K_mat] = BRNS_FB_Matrices(m,k);
% C_mat = alf.*M_mat + bta.*K_mat;
%
% % Eigen Analysis for Modal Parameters:
% % ====================================
%
% [Phi,Lam] = eig(K_mat,M_mat);
% wn = diag(sqrt(Lam))./(2*pi);
%
% % Direct Time Integration for Response:
% % =====================================
%
% Ifl = [-1 0;0 -1;-1 0;0 -1;-1 0;0 -1;-1 0;0 -1];
% Ft = M_mat*Ifl*[xg_t';yg_t'];
%
% [U,Ud,Udd] = Newmark_Beta_MDOF(M_mat,K_mat,C_mat,Inc,t,Ft);
%
% % Add Noise to Simulate Synthetic Data:
% % =====================================
%
% Mean_Signal = mean(Udd,2);
% SD_Noise = Mean_Signal./SNR;
% Syn_Recd = zeros(nt,dof);
% for ii = 1:dof
%     Syn_Recd(:,ii) = Udd(ii,:)' + SD_Noise(ii).*randn(nt,1);
% end
%
% Out_put = [t Syn_Recd];
%
% save -ascii Response.dat Out_put
%
% % Plot Responses:
% % ===============
%
% figure
% subplot(2,1,1)
% plot(t,xg_t)
% xlabel('t (s)'); ylabel('Accn. (g)');
% title('Support Motion:')
% subplot(2,1,2)
% plot(t,Syn_Recd(:,8))
% xlabel('t (s)'); ylabel('Accn. (g)');
% title('Measured Response:')
%
% pause
% *************************************************************************
load El_Centro_EW.dat
load Response.dat
t = El_Centro_EW(:,1);
nt = length(t);
xg_t = El_Centro_EW(:,2);
yg_t = 0.5.*xg_t;
dof = length(m);
Nu = dof+2;     % No. of Unknowns
R_samp = zeros(N_p,Nu);
for ii = 1:Nu
    R_samp(:,ii) = random('unif',RP(ii,1),RP(ii,2),N_p,1);
end
Mean_Estimate = zeros(nt,Nu);
Std_Estimate = zeros(nt,Nu);
Weights = zeros(N_p,dof);
for ii = 1:Nu
    Mean_Estimate(1,ii) = mean(R_samp(:,ii));
    Std_Estimate(1,ii) = std(R_samp(:,ii));
end
% Bootstrap Algorithm:
% ====================
Inc(:,:,N_p) = zeros(dof,2);
Ifl = [-1 0;0 -1;-1 0;0 -1;-1 0;0 -1;-1 0;0 -1];
for ii = 2:nt
    ii
    for jj = 1:N_p
        k1 = R_samp(jj,1:dof);
        c1 = R_samp(jj,dof+1:Nu);
        [M_mat,K_mat] = BRNS_FB_Matrices(m,k1);
        C_mat = c1(1).*M_mat + c1(2).*K_mat;
        tt = t(ii-1:ii,1);
        Ft = M_mat*Ifl*[xg_t(ii-1:ii)';yg_t(ii-1:ii)'];
        Incd = Inc(:,:,jj);
        [U,Ud,Udd] = Newmark_Beta_MDOF(M_mat,K_mat,C_mat,Incd,tt,Ft);
        Inc_cond = [U(:,2) Ud(:,2)];
        C(:,:,jj) = Inc_cond;
        % Estimate Likelihood of Simulation:
        % ==================================
        for kk = 1:dof
            Weights(jj,kk) = (1/sqrt(2*pi*x_R))*exp(-...
                ((Response(ii,kk+1)-Udd(kk,2))^2)/(2*(x_R)));
        end
    end
    Wt = prod(Weights,2);
    weight = (Wt./sum(Wt))';
    % Resampling:
    % ===========
    [R_samp,index] = Resampling(R_samp,weight,Ind);
    Inc = C(:,:,index);
    for kk = 1:Nu
        Mean_Estimate(ii,kk) = mean(R_samp(:,kk));
        Std_Estimate(ii,kk) = std(R_samp(:,kk));
    end
end
% Eigen Analysis of Identified System:
% ====================================
k_inv = Mean_Estimate(end,1:dof);
c_inv = Mean_Estimate(end,dof+1:end);
[M_mat,K_mat] = BRNS_FB_Matrices(m,k_inv);
[Phi_in,D] = eig(K_mat,M_mat);
wn_inv = sqrt(diag(D))/(2*pi)   % Natural frequency in Hz
% *************************************************************************
% Plots:
% ======
figure
subplot(2,1,1)
plot(t,Mean_Estimate(:,1)./k(1),'m')
hold on
plot(t,Mean_Estimate(:,2)./k(2),'-.b')
plot(t,Mean_Estimate(:,3)./k(3),'--g')
legend('DOF 1','DOF 2','DOF 3')
xlabel('t (s)'); ylabel('K (kN/m)');
title('Identified Stiffness:')
subplot(2,1,2)
plot(t,Std_Estimate(:,1),t,Std_Estimate(:,2),'-.',t,Std_Estimate(:,3),'--')
legend('DOF 1','DOF 2','DOF 3')
xlabel('t (s)'); ylabel('K (kN/m)');
title('Std. in Stiffness Simulation:')
figure
subplot(2,1,1)
plot(t,Mean_Estimate(:,9)./alf,'m')
hold on
plot(t,Mean_Estimate(:,10)./bta,'-.b')
legend('Alfa','Beta')
xlabel('t (s)'); ylabel('alpha & beta');
title('Identified Damping Parameters:')
subplot(2,1,2)
plot(t,Std_Estimate(:,9),t,Std_Estimate(:,10),'--')
legend('Alfa','Beta')
xlabel('t (s)'); ylabel('alpha & beta');
title('Std. in Damping Simulation:')
% *************************************************************************
cpu_time = cputime-tm
% *************************************************************************
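% -------------------------------------------------------------------------
% Illustrative sketch (not part of the original listing): the identified
% Rayleigh coefficients (columns 9 and 10 of Mean_Estimate) imply modal
% damping ratios zeta_i = alf/(2*w_i) + bta*w_i/2, where w_i are the
% circular natural frequencies of the identified model computed above.
alf_iden = Mean_Estimate(end,9);
bta_iden = Mean_Estimate(end,10);
w_iden = 2*pi*wn_inv;                                    % rad/s
zeta_iden = alf_iden./(2*w_iden) + bta_iden.*w_iden./2   % modal damping ratios
% -------------------------------------------------------------------------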
% End Program.
% *************************************************************************

MATLAB code for Resampling Algorithms

function [RS_Rand_No,index] = Resampling(Rand_No,Weights,Ind)
% =========================================================================
% PROGRAMMER : ANSHUL GOYAL & ARUNASIS CHAKRABORTY
% DATE       : 02.01.2013 (Last modified: 26.01.2014)
% ABSTRACT   :
%
% Input/Output argument -
% [RS_Rand_No,index] = Resampling(Rand_No,Weights,Ind)
%
% input:
% ======
% Ind        : Indicator of the resampling algorithm
% Weights    : normalized weights upon likelihood calculation
% Rand_No    : Random sample (particles) at a particular time step t
% output:
% =======
% index      : Index of the resampled particles
% RS_Rand_No : Resampled particles
%
% =========================================================================
if Ind == 1         % LHS
    Ns = length(Weights);               % Number of Particles
    edges = min([0 cumsum(Weights)],1); % protect against round off
    edges(end) = 1;                     % get the upper edge exact
    UV = rand/Ns;
    [~,index] = histc(UV:1/Ns:1,edges);
    NRN = length(Rand_No(1,:));
    RS_Rand_No = zeros(Ns,NRN);
    for ii = 1:NRN
        RN = Rand_No(:,ii);
        RS_Rand_No(:,ii) = RN(index);
    end
elseif Ind == 2     % Systematic
    Ns = length(Weights);
    u = ((0:Ns-1)+rand(1))/Ns;
    wc = cumsum(Weights);
    index = 1;
    index_f = zeros(1,Ns);
    for i = 1:Ns
        while (wc(index) < u(i))
            index = mod(index+1,Ns);
            if index == 0
                index = Ns;
            end
        end
        index_f(i) = index;
    end
    index = index_f;
    NRN = length(Rand_No(1,:));
    RS_Rand_No = zeros(Ns,NRN);
    for ii = 1:NRN
        RN = Rand_No(:,ii);
        RS_Rand_No(:,ii) = RN(index);
    end
elseif Ind == 3     % Stratified
    Ns = length(Weights);
    u = ((0:Ns-1)+(rand(1,Ns)))/Ns;
    wc = cumsum(Weights);
    index = 1;
    index_f = zeros(1,Ns);
    for i = 1:Ns
        while (wc(index) < u(i))
            index = mod(index+1,Ns);
            if index == 0
                index = Ns;
            end
        end
        index_f(i) = index;
    end
    index = index_f;
    NRN = length(Rand_No(1,:));
    RS_Rand_No = zeros(Ns,NRN);
    for ii = 1:NRN
        RN = Rand_No(:,ii);
        RS_Rand_No(:,ii) = RN(index);
    end
elseif Ind == 4     % Simple
    Ns = length(Weights);
    u = cumprod(rand(1,Ns).^(1./(Ns:-1:1)));
    u = u(Ns:-1:1);
    wc = cumsum(Weights);
    index = 1;
    index_f = zeros(1,Ns);
    for i = 1:Ns
        while (wc(index) < u(i))
            index = mod(index+1,Ns);
            if index == 0
                index = Ns;
            end
        end
        index_f(i) = index;
    end
    index = index_f;
    NRN = length(Rand_No(1,:));
    RS_Rand_No = zeros(Ns,NRN);
    for ii = 1:NRN
        RN = Rand_No(:,ii);
        RS_Rand_No(:,ii) = RN(index);
    end
elseif Ind == 5     % Wheel
    Ns = length(Weights);
    index = unidrnd(Ns);
    beta = 0;
    mw = max(Weights);
    index_f = zeros(1,Ns);
    for i = 1:Ns
        beta = beta + 2*mw*rand(1);
        while (beta > Weights(index))
            beta = beta - Weights(index);
            index = mod(index + 1,Ns);
            if index == 0
                index = Ns;
            end
        end
        index_f(i) = index;
    end
    index = index_f;
    NRN = length(Rand_No(1,:));
    RS_Rand_No = zeros(Ns,NRN);
    for ii = 1:NRN
        RN = Rand_No(:,ii);
        RS_Rand_No(:,ii) = RN(index);
    end
else
    disp('Use proper Ind number for resampling.')
end
return
% *************************************************************************
% End Program.
% *************************************************************************

MATLAB code for solving the second order differential equation using the Newmark-β algorithm

function [U,Ud,Udd] = Newmark_Beta_MDOF(M,K,C,Inc,t,F_t,delta,alpha)
% =========================================================================
% PROGRAMMER : ARUNASIS CHAKRABORTY
% DATE       : 02.01.2013 (Last modified: 25.01.2014)
% ABSTRACT   : This function computes the response of a MDOF system using
%              the Newmark-Beta method. For details, see page 780 in
%              Bathe's book. This code is for any general MDOF model
%              excited by general force or support motions. Change the
%              nargins as required.
%
% Input/Output argument -
% [U,Ud,Udd] = Newmark_Beta_MDOF(M,K,C,Inc,t,F_t,delta,alpha)
%
% input:
% ======
% M     = Mass Matrix
% K     = Stiffness Matrix
% C     = Damping Matrix
% Inc   = Initial Conditions
% t     = time in column vector
% F_t   = Force in Different Degrees of Freedom. The format
%         of the Data is dof*nt
% delta = constant in Newmark method (default is 1/2)
% alpha = constant in Newmark method (default is 1/4)
%
% output:
% =======
% U     = displacement
% Ud    = velocity
% Udd   = acceleration
%
% =========================================================================
if nargin == 6
    delta = 1/2;
    alpha = 1/4;
    % disp('Using default values: delta = 1/2 & alpha = 1/4');
elseif nargin == 8
    if delta ~= 1/2
        disp('Warning: you are using delta not equal to 1/2');
    end
elseif nargin < 6 || nargin > 8
    error('Wrong number of input variables');
end
dof = length(diag(M));
nt = length(t);
dt = t(2)-t(1);
U_0 = Inc(:,1);
Ud_0 = Inc(:,2);
a0 = 1/(alpha*dt^2);
a1 = delta/(alpha*dt);
a2 = 1/(alpha*dt);
a3 = 1/(2*alpha)-1;
a4 = delta/alpha-1;
a5 = (dt/2)*(delta/alpha-2);
a6 = dt*(1-delta);
a7 = delta*dt;
K_hat = K + a0*M + a1*C;
U = zeros(dof,nt);
Ud = zeros(dof,nt);
Udd = zeros(dof,nt);
U(:,1) = U_0;
Ud(:,1) = Ud_0;
for ii = 2:nt
    Ut = U(:,(ii-1));
    Udt = Ud(:,(ii-1));
    Uddt = Udd(:,(ii-1));
    R = F_t(:,ii);
    R_hat = R + M*(a0*Ut+a2*Udt+a3*Uddt) + C*(a1*Ut+a4*Udt+a5*Uddt);
    U(:,ii) = K_hat\R_hat;
    Udd(:,ii) = a0*(U(:,ii)-Ut) - a2*Udt - a3*Uddt;
    Ud(:,ii) = Udt + a6*Uddt + a7*Udd(:,ii);
end
return
% *************************************************************************
% End Program.
% *************************************************************************

MATLAB code for obtaining the Mass and Stiffness matrices of the multi-degree of freedom system

function [M_mat,K_mat] = BRNS_FB_Matrices(m,k)
% =========================================================================
% PROGRAMMER : ARUNASIS CHAKRABORTY
% DATE       : 17.08.2013 (Last modified: 21.09.2013)
% ABSTRACT   : This function evaluates the mass and stiffness matrices of
%              the BRNS Fixed Base building (the damping matrix assembly
%              is retained below as comments).
%
% Input/Output argument -
% [M_mat,K_mat] = BRNS_FB_Matrices(m,k)
%
% input:
% ======
% m = Mass in different dof
% k = Stiffness in different dof
%
% output:
% =======
% M_mat = Mass Matrix
% K_mat = Stiffness Matrix
% =========================================================================
dof = length(m);
node_dof = 2;
% Mass Matrix
% ===========
M_mat = diag(m);
% Stiffness Matrix
% ================
K_mat = zeros(dof);
K_mat(1,1) = k(1) + k(node_dof+1);
K_mat(2,2) = k(2) + k(node_dof+2);
K_mat(1,node_dof+1) = -k(node_dof+1);
for ii = 2:dof-2
    K_mat(ii+node_dof-1,ii-node_dof+1) = -k(ii+node_dof-1);
    K_mat(ii,ii) = k(ii) + k(ii+2);
    K_mat(ii,ii+node_dof) = -k(ii+node_dof);
end
K_mat(dof-1,dof-1) = k(dof-1);
K_mat(dof,dof-node_dof) = -k(dof);
K_mat(dof,dof) = k(dof);
% % Damping Matrix:
% % ===============
% C_mat = zeros(dof);
% C_mat(1,1) = c(1)+c(2);
% C_mat(1,2) = -c(2);
% for ii = 2:dof-1
%     C_mat(ii,ii-1) = -c(ii);
%     C_mat(ii,ii) = c(ii)+c(ii+1);
%     C_mat(ii,ii+1) = -c(ii+1);
% end
% C_mat(dof,dof-1) = -c(dof);
% C_mat(dof,dof) = c(dof);
return
% *************************************************************************
% End Program.
% *************************************************************************
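As a usage illustration of the function above, the short sketch below assembles the BRNS fixed-base matrices with the mass and stiffness values used throughout this appendix and recovers the natural frequencies, exactly as done at the start of the BRNS identification codes.

% Usage sketch (mass and stiffness values taken from the BRNS listings above):
m = [27636.9724 27636.9724 25618.62385 25618.62385 25618.62385 ...
    25618.62385 17805.65745 17805.65745];
k = [130215257.8 198788000.6 230377923.8 344149172 230377923.7 ...
    344149172 130215257.9 198788001];
[M_mat,K_mat] = BRNS_FB_Matrices(m,k);   % assemble mass and stiffness matrices
[Phi,D] = eig(K_mat,M_mat);              % generalized eigenvalue problem
wn = sqrt(diag(D))/(2*pi)                % natural frequencies in Hz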