Centre for Modeling and Simulation
Savitribai Phule Pune University
Master of Technology (M.Tech.)
Programme in Modeling and Simulation
Project Report
Model Fitting & Optimization of
Rheological Data
Ajay Vishwas Jadhav
CMS1327
Academic Year 2014-15
Certificate
This is to certify that this report, titled
Model Fitting & Optimization of Rheological Data,
authored by
Ajay Vishwas Jadhav (CMS1327),
describes the project work carried out by the author under our supervision during the
period from January 2015 to June 2015. This work represents the project component
of the Master of Technology (M.Tech.) Programme in Modeling and Simulation at the
Centre for Modeling and Simulation, Savitribai Phule Pune University.
Dr. Harshawardhan Pol, Senior Scientist,
CSIR-National Chemical Laboratory
Polymer Sci.& Engg. Division
Pune - 411008, India
Dr. Sukratu Barve, Assistant Professor,
Centre for Modeling and Simulation
Savitribai Phule Pune University
Pune - 411007, India
Dr. Anjali Kshirsagar, Director,
Centre for Modeling and Simulation
Savitribai Phule Pune University
Pune - 411007, India
Author’s Declaration
This document, titled
Model Fitting & Optimization of Rheological Data,
authored by me, is an authentic report of the project work carried out by me as part
of the Master of Technology (M.Tech.) Programme in Modeling and Simulation at the
Centre for Modeling and Simulation, Savitribai Phule Pune University. In writing this
report, I have taken reasonable and adequate care to ensure that material borrowed from
sources such as books, research papers, internet, etc., is acknowledged as per accepted
academic norms and practices in this regard. I have read and understood the University’s
policy on plagiarism (http://unipune.ac.in/administration_files/pdf/Plagiarism_Policy_
University_14-5-12.pdf).
Ajay Vishwas Jadhav
CMS1327
Abstract
The relaxation spectrum is an important tool for analyzing the behaviour of viscoelastic materials such as polymers. It is well known that the relaxation spectrum characterizing the viscoelastic properties of a polymer melt or solution is not directly accessible by experiment; it must instead be calculated from measured data. The most popular procedure is to use data from a small-amplitude oscillatory shear experiment to determine the parameters of a multimode Maxwell model.
Because the discrete relaxation times appear non-linearly in the mathematical model for the relaxation modulus, the indirect calculation of the relaxation times is an ill-posed problem whose solution is difficult to obtain. This thesis describes a nonlinear regression technique in which the minimization is performed with respect to both the discrete relaxation times and the elastic moduli.
The nonlinear least-squares problem is solved using the Marquardt-Levenberg algorithm. In this technique the number of discrete modes is increased dynamically, and the procedure terminates when the calculated values of the model parameters are dominated by a measure of their expected errors. The procedure is robust and efficient. Numerical calculations on model and experimental data are presented and discussed.
Sometimes an exact number of relaxation modes is required, and with the Marquardt-Levenberg algorithm it is not possible to fix the number of modes of the relaxation spectrum in advance. To overcome this, we have also implemented an evolutionary method, the Genetic Algorithm, in which the number of modes can be fixed by choosing the size of the chromosome.
Detailed descriptions of the Marquardt-Levenberg algorithm and the Genetic Algorithm are given in this thesis.
Acknowledgements
First and foremost, praises and thanks to my Parents, for their showers of blessings and support
throughout my project work to complete the research successfully.
I would like to express my deep and sincere gratitude to my thesis guide, Dr. H.V. Pol,
senior scientist, NCL Pune and Dr. Sukratu Barve, Centre for Modeling and Simulation, Uni-
versity of Pune, for giving me the opportunity to do research and providing invaluable guidance
throughout this project.
I am deeply indebted to Dr. F.J. Monaco, ICMS, University of Sao Paulo, Brazil, for giving
me the opportunity to complete this project, and for his passion and enthusiasm for sharing his
knowledge and motivating students like me. Whenever I approached him to discuss ideas for the
project, or any general problem, I found an eager listener. It was a great privilege
and honour to work and study under his guidance. Working under Dr. F.J. Monaco gave me a
new identity. I am extremely grateful for what he has offered me.
I would also like to thank Dr. Renu Dhadwal, FLAME Pune, for sharing her knowledge
about the Marquardt-Levenberg algorithm and about Matlab coding. Her knowledge of
mathematical modeling and simulation, dynamics, and the rheology of polymer fluids helped me a
lot. Because of her help I was able to start my journey in the field of mathematical modeling
and simulation for polymers.
I would like to extend my thanks to Dr. Deepak Bankar and Dr. Vikas Kashid for helping
and encouraging me throughout my thesis. Last but not least, I am thankful to my friends
Pratiksha, Umesh, Avinash, Akshay, Ashish K, Sandeep, Alok, Srikant J, Amolkumar, Tushar,
Swapnil, Shriram, Ashish A, Piyush, Jaydeep, Shrikant Jayraman, Sandip Swarkar, Shrikant
Panchal, Suraj, and Satish Waman.
Any omission in this brief acknowledgement does not mean a lack of gratitude.
List of Figures
2.1 Extrusion Film Casting Machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.2 Necking defect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.1 Types of Fluids . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.2 Oscillatory Shear Experiment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
4.1 Searching process of the steepest descent method with different learning constants: left
trajectory is for small learning constant that leads to slow convergence; right side tra-
jectory is for large learning constant that causes oscillation i.e. divergence . . . . . . . . 28
4.2 GD Step 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
4.3 GD Step 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
4.4 GD Step 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
4.5 GD Step 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
4.6 GD Step 5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
4.7 GD Step 6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
4.8 GD Step 7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
4.9 GD Step 8 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
4.10 Interpretation of Newton Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
5.1 (a) Graph of ω vs G′, (b) Graph of ω vs G″ . . . . . . . . . . . . . . . . . . . . . . 36
6.1 Flowchart of Genetic Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
6.2 One Point Crossover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
6.3 Two Point Crossover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
6.4 Cut Splice Crossover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
6.5 Uniform Crossover and Half Uniform Crossover . . . . . . . . . . . . . . . . . . . . . 41
7.1 Roulette Wheel Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
7.2 Before & After Rank Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
7.3 Tournament Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
8.1 Graph of ω vs G′ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
8.2 Graph of ω vs G″ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
9.1 Main window of GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
9.2 Second window of GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
9.3 File selection dialogue box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
9.4 User input for number of modes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
9.5 Waitbar of code started . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
9.6 Waitbar of code about to finish . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
9.7 Computation is done . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
9.8 Relaxation spectra graph button . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
9.9 Download output file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
List of Tables
2.1 Experimental Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
4.1 Specifications of Different Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . 34
5.1 AAD value of ML algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
7.1 GA Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
8.1 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Contents
Abstract 7
Acknowledgments 9
1 Introduction 17
2 Relaxation Spectra 19
2.1 Experimental Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3 Rheological Data 23
3.1 Rheology to Viscoelasticity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.2 Viscosity & elasticity measurements . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.3 Oscillatory Shear Experiment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
4 Marquardt Levenberg Algorithm 27
4.1 Objective function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
4.2 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
4.3 Stepwise Derivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
4.3.1 Gradient Descent Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . 29
4.3.2 Newton Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
4.3.3 Gauss Newton Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
4.3.4 Damping Factor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
5 Result 35
6 Using Genetic Algorithm 37
6.1 Biological Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
6.2 Evolutionary computation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
6.3 Introduction & Flowchart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
6.4 Crossover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
6.4.1 One-point crossover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
6.4.2 Two-point crossover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
6.4.3 Cut and splice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
6.4.4 Uniform Crossover and Half Uniform Crossover . . . . . . . . . . . . . . . 40
6.4.5 Blend Crossover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
6.5 Mutation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
6.5.1 Flip Bit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
6.5.2 Boundary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
6.5.3 Uniform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
6.5.4 Non-Uniform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
7 Objective function & Method 43
7.1 Objective Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
7.1.1 Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
7.1.2 GA Parameters Used in the Experiments . . . . . . . . . . . . . . . . . . 44
7.2 Initialization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
7.3 Fitness Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
7.4 Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
7.4.1 Roulette Wheel Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
7.4.2 Rank Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
7.4.3 Tournament Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
7.4.4 Elitism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
8 Termination Criteria & Result 49
8.1 Optimal Chromosome . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
8.2 Absolute average deviation (AAD) . . . . . . . . . . . . . . . . . . . . . . . . . . 49
9 Graphical User Interface (GUI) 51
10 Conclusion 57
Bibliography 59
A Acronyms 61
Chapter 1
Introduction
In rheology, the discrete relaxation spectrum (DRS) contains the values of the relaxation strengths g
and the relaxation times λ. The DRS is of vital importance because, once it is known, it is straightforward
to compute all other material functions. Unfortunately, it cannot be measured directly; instead,
it is calculated indirectly, most commonly from small-amplitude oscillatory shear
experiments, which yield the frequency-dependent dynamic moduli G∗ (the experimental data).
The experimental data contain the values of the storage and loss moduli at different frequencies.
The relation between the experimental data and the relaxation spectrum is given by the equations below. [3]
G∗(ω) = G′(ω) + iG″(ω)    (1.1)
where
G′ and G″ are the storage and loss moduli, respectively;
ω is the frequency of deformation;
g is the relaxation strength (SI unit: pascal);
λ is the relaxation time (SI unit: seconds);
N is the number of modes in the spectrum.
The problem of deducing the DRS from G∗ has a long and rich history. The DRS consists
of pairs of relaxation strengths and relaxation times (gᵢ, λᵢ), with i = 1, 2, . . . , N, where N is
the number of modes in the spectrum. The relationship between the DRS and G∗ is given by
G′(ω) = Σᵢ₌₁ᴺ gᵢ ω²λᵢ² / (1 + λᵢ²ω²)    (1.2)

G″(ω) = Σᵢ₌₁ᴺ gᵢ ωλᵢ / (1 + λᵢ²ω²)    (1.3)
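The toolkit developed in this thesis is written in Matlab/Octave; as a language-neutral illustration (function and variable names here are our own, not from the thesis code), the following Python sketch evaluates Eqs. (1.2) and (1.3) for a given discrete spectrum of pairs (gᵢ, λᵢ):

```python
import numpy as np

def maxwell_moduli(omega, g, lam):
    """Evaluate Eqs. (1.2)-(1.3): storage modulus G'(w) and loss
    modulus G''(w) of an N-mode Maxwell model.

    omega : frequencies in rad/s (length M)
    g     : relaxation strengths g_i in Pa (length N)
    lam   : relaxation times lambda_i in s (length N)
    """
    w = np.asarray(omega, dtype=float)[:, None]    # shape (M, 1)
    g = np.asarray(g, dtype=float)[None, :]        # shape (1, N)
    lam = np.asarray(lam, dtype=float)[None, :]    # shape (1, N)
    denom = 1.0 + (lam * w) ** 2
    G_storage = np.sum(g * (w * lam) ** 2 / denom, axis=1)   # G'(w)
    G_loss = np.sum(g * (w * lam) / denom, axis=1)           # G''(w)
    return G_storage, G_loss
```

For a single mode at ωλ = 1, both formulas reduce to g/2, the familiar crossover point of a Maxwell element, which is a quick sanity check on any implementation.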
The determination of a DRS is complicated. It is an ill-posed problem because the number of modes
N is not known in advance. Furthermore, even after a suitable well-posed approximation to
the problem is constructed, the numerical solution is beset by poor conditioning, i.e.
sensitivity to noise in the data. Despite these difficulties, extracting the relaxation spectrum
remains an important task. The principal goal of this thesis is to develop an open-source software
toolkit that takes experimentally determined G∗ values as input and produces
the discrete relaxation spectrum as output. Since the computer code is available at
http://cms.unipune.ac.in/~ajayj/RheomlFit.html, it is important to justify why this additional
undertaking is important or useful, and how it fulfils an unmet need. The most
important motivations include the following. [1]
Many programs based on algorithms published in the literature are not readily
available to the public. At the other end of the spectrum is IRIS, arguably the most popular program used in
industry for this problem. It is a commercial product whose exact underlying algorithm and
implementation are, for understandable reasons, not in the public domain. Furthermore, the
program does not currently run on all operating systems, and it provides only the DRS,
determined by nonlinear regression and an appeal to parsimony. While the convenience afforded
by such a black-box program may be completely sufficient for an experimentalist, it is
less attractive to programmers who want to extend the code and to researchers who may want to
analyze particular features of the algorithm. Between these two extremes are powerful, freely
available, well-documented, platform-independent general implementations such as DISCRETE,
CONTIN, FTIKREG, NLREG, and GENEREG, which overcome many of the constraints mentioned
above. These efficient programs, generally written in older versions of Fortran, extract
the relaxation spectrum using some form of regularization. All of these programs are general
purpose and can be used to extract a DRS from experimental data.[7]
The decision to build an open-source software toolkit was based on the following considerations:
• Platform independence
• Transparency of the algorithm and implementation
• Free availability
• Extraction of Discrete relaxation spectra
• Efficiency readability and extensibility of the code
• Integrated graphical user interface
The computer programs mentioned above satisfy only a subset of these design considerations.
Based on these criteria, we implemented our algorithm in Matlab as the program Rheomfit. The
same code works without any modification on the freely available Matlab clone GNU Octave
(http://www.gnu.org/software/octave/), which, like Matlab, can be installed on any operating
system. This choice allows us to use several built-in numerical routines, which makes the code
succinct and easy to extend. [7]
Chapter 2
Relaxation Spectra
The relaxation spectrum gives a qualitative picture of the molecular weight distribution of a polymer.
Since a polymer melt is an ensemble of macromolecular chains of varied repeat units (in other
words, of varied molecular weights), it is important that it be represented by an appropriate
description such as a relaxation spectrum.
The relaxation spectrum, when incorporated in a constitutive equation, gives a realistic picture
of how a particular polymer will behave under various deformations and how it will relax
stress. The relaxation spectrum of a polymer is determined by performing an inverse Fourier
transform on dynamic oscillatory time-temperature-superposed data consisting of the elastic and
loss moduli.[5]
Polymer manufacturing industries produce several thousand tons of polymer films and coatings
using extrusion film casting, a commercially important process. It consists
of extruding a molten polymer through a die under pressure and stretching the resulting film in
air by winding it on a chill roll. Even under stable and steady-state extrusion film casting operation,
two major undesirable defects occur over the take-up length.[3]
Figure 2.1: Extrusion Film Casting Machine
Extrusion film casting machines are classified by the type of screw extruder; the machine shown
in figure 2.1 uses a single screw extruder.
On the right side of figure 2.1 is the hopper, the input of the extruder, which accepts the
polymer pellets and passes them onwards. The screw extruder melts the pellets into a polymer
melt and conveys it towards the die entrance. The pressurized molten polymer emerges from the
die with a profile that depends on the size and shape of the die. [6]
On the left side of figure 2.1 are the chill rolls and the collecting roll. The molten polymer
coming out of the die settles onto the chill roll, where it is cooled and solidifies from melt into
polymer sheet. After passing over all the rolls, the polymer sheet is collected in the form of rolls.
In steady and stable operation of polymer processing, polymer sheets exhibit two major
defects: edge beading and neck-in.[5]
Edge-beading
In edge beading, the film edges are substantially thicker than the central portion of the film.
The central portion is therefore weaker than the edges, which eventually reduces the
strength of the film. The edge-beading defect is shown in fig 2.2. [6]
Neck-in
Neck-in is familiar from everyday experience: pour a can of paint from a certain height,
and the width of the flow at the start differs from the width at the end. In extrusion film
casting, the final film width becomes smaller than the width of the die from which the film is
extruded. The neck-in defect is shown in fig 2.2. [6]
Figure 2.2: Necking defect
To avoid these problems and to study the extrusion film casting operation, we need to study the
deformation of polymers under applied stresses and the flow of polymers, which is precisely rheology.
To analyze polymer flow using viscoelastic simulations, we require the relaxation spectrum,
i.e. the two sets of parameters: the relaxation strengths (gᵢ) and the relaxation times (λᵢ).
The ultimate goal of this thesis is to find the relaxation spectrum of a polymer from
experimental data.[5]
ω (rad/s)  G′ (Pa)  G″ (Pa)
0.0091 3.38 115
0.0103 3.63 127
0.0118 4.34 142
0.0135 5.61 159
0.0136 12.5 176
0.0146 6.90 180
0.0157 7.01 183
0.0164 8.61 199
0.0186 9.80 223
0.0188 9.97 214
0.0215 11.7 250
0.0223 14.6 257
0.0231 12.3 255
0.0250 15.1 282
0.0260 15.9 287
0.0269 17.0 313
0.0296 16.4 304
0.0298 20.7 350
0.0331 19.5 337
0.0340 22.5 367
0.0341 25.7 391
0.0355 26.6 393
Table 2.1: Experimental Data
2.1 Experimental Data
Experimental data which is nothing but the machine generated data consist of three columns.
First column is of frequencies is dennoted by alphabet W and unit for the frequency is rad/sec-
ond. Second column is called as Storage modulus or the elastic moduli of polymer, is denoted
by G’. The SI unit for this is Pascal. Third column is of Loss Modulus i.e. G” or Viscous
moduli of polymer. The SI unit for this is Pascal.[3] The data from above table is of material
Linear low-density polyethylene (LLDPE) is a substantially linear polymer (polyethylene), with
significant numbers of short branches. At the time of generation of this data, temperature is
maintained at 1900C.
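As a hedged illustration of this data format (the column layout follows Table 2.1; the helper name and the commented file name are hypothetical, and the thesis toolkit itself is in Matlab), such a three-column file can be read with a few lines of Python:

```python
import numpy as np

def load_moduli(source):
    """Read a three-column data file laid out like Table 2.1:
    W in rad/s, G' in Pa, G'' in Pa.

    Returns (frequency, storage modulus, loss modulus) arrays."""
    data = np.loadtxt(source)   # accepts a path or a file-like object
    return data[:, 0], data[:, 1], data[:, 2]

# Hypothetical usage:
# w, G1, G2 = load_moduli("lldpe_190C.dat")
```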
Chapter 3
Rheological Data
Rheology is the branch of science that deals with the flow and deformation of matter, and tells
us the interrelation between force, deformation, and time. The word rheology comes
from the Greek word rheos, meaning "to flow".[11]
Rheology is applicable to all materials, from gases to solids. The discipline has a
history of only about 80 years. It was founded by two scientists who met in the late 1920s
and found they shared the need to describe fluid flow properties: Professor Marcus Reiner
and Professor Eugene Bingham. [10]
The Greek philosopher Heraclitus described rheology as panta rei, "everything
flows". Translated into rheological terms by Marcus Reiner, this means that everything will flow if
you just wait long enough.
Fluid rheology is used to describe the consistency of different products, normally through two
components: viscosity and elasticity. Viscosity usually means resistance to flow, or thickness,
and elasticity usually means stickiness, or structure.[11]
3.1 Rheology to Viscoelasticity
Fluids are normally divided into two groups according to their flow behaviour:
- Newtonian fluids
- Non-Newtonian fluids
The types of fluids are illustrated in graph 3.1.
Viscoelastic fluids are a type of non-Newtonian fluid in which the stress-strain relationship
is time dependent. They are often capable of generating normal stresses within the fluid that
resist deformation, and this can lead to striking behaviours such as the bead-on-a-string
instability.[3]
Flow curves are normally used for the graphical description of flow behaviour. All materials,
from gases to solids, can be divided into the following three categories of rheological behaviour.
• Viscous materials - in a purely viscous material, all energy added is dissipated as heat.
• Elastic materials - in a purely elastic material, all energy added is stored in the material.
• Viscoelastic materials - a viscoelastic material exhibits viscous as well as elastic
behaviour. It is the combination of a viscous and an elastic material, hence the name:
Figure 3.1: Types of Fluids
viscous + elastic = viscoelastic (3.1)
Typical examples of viscoelastic materials are bread dough, polymer melts, and artificial or
natural gels. Polymers are a prime example of viscoelastic materials, exhibiting both viscous
and elastic properties.
3.2 Viscosity & elasticity measurements
In rheology, measurements are usually performed in kinematic instruments to obtain
quantitative results useful for the design and development of products and process equipment. To
develop product designs, for example in the food, cosmetics, or paint industries, rheometric
measurements are often performed to establish elastic properties such as gel strength and yield
value, both important parameters affecting e.g. particle-carrying ability and spreadability. The
complex dynamic modulus is the ratio of stress to strain; mathematically it is written as

G∗ = Stress / Strain    (3.2)
3.3 Oscillatory Shear Experiment
Soft materials such as emulsions, foams, or dispersions are ubiquitous in industrial products
and formulations; they exhibit unique mechanical behaviours that are often key to the way
these materials are employed in a particular application.[11] Studying the mechanical behaviour
of these materials is complicated by the fact that their response is viscoelastic, intermediate
between that of solids and liquids. Oscillatory rheology is a standard experimental tool for
studying such behaviour; it provides insight into the physical mechanisms that govern the
unique mechanical properties of soft materials. [10]
Soft materials such as colloidal suspensions, emulsions, foams, or polymer systems are ubiq-
uitous in many industries, including foods, pharmaceuticals, and cosmetics. Their macroscopic
mechanical behaviour is a key property which often determines the usability of such materials
for a given industrial application. Characterising the mechanical behaviour of soft materials is
complicated by the fact that many materials are viscoelastic, so their mechanical properties lie
between that of a purely elastic solid and that of a viscous liquid. Using oscillatory rheology,
it is possible to quantify both the viscous-like and the elastic-like properties of a material at
different time scales; it is thus a valuable tool for understanding the structural and dynamic
properties of these systems. [11]
The basic principle of an oscillatory rheometer is to induce a sinusoidal shear deformation
in the sample and measure the resultant stress response; the time scale probed is determined
by the frequency of oscillation ω of the shear deformation. In a typical experiment, the sample
is placed between two plates, as shown in figure 3.2. While the top plate remains stationary, a
motor rotates the bottom plate, thereby imposing a time-dependent strain γ(t) = γ₀ sin(ωt) on
the sample[10]. Simultaneously, the time-dependent stress σ(t) is quantified by measuring the
torque that the sample imposes on the top plate.
Measuring this time dependent stress response at a single frequency immediately reveals
key differences between materials, as shown schematically in figure 3.2. If the material is an
ideal elastic solid, then the sample stress is proportional to the strain deformation, and the
proportionality constant is the shear modulus of the material.
Figure 3.2: Oscillatory Shear Experiment
The stress is always exactly in phase with the applied sinusoidal strain deformation. In
contrast, if the material is a purely viscous fluid, the stress in the sample is proportional to the
rate of strain deformation, where the proportionality constant is the viscosity of the fluid. The
applied strain and the measured stress are out of phase, with a phase angle δ = π/2, as shown
in the center graph of figure 3.2. Viscoelastic materials show a response that contains both
in-phase and out-of-phase contributions, as shown in the bottom graph of figure 3.2. These
contributions reveal the extents of solid-like (red line) and liquid-like (blue dotted line)
behaviour. As a consequence, the total stress response (purple line) shows a phase shift δ with
respect to the applied strain deformation that lies between that of solids and liquids: 0 < δ < π/2.
The viscoelastic behaviour of the system is characterised by the storage modulus, G′(ω),
and the loss modulus, G″(ω), which respectively characterise the solid-like and fluid-like
contributions to the measured stress response. For a sinusoidal strain deformation γ(t) = γ₀ sin(ωt),
the stress response of a viscoelastic material is given by σ(t) = G′γ₀ sin(ωt) + G″γ₀ cos(ωt). In
a typical rheological experiment, we seek to measure G′ and G″. We make the measurements as
a function of ω because whether a soft material is solid-like or liquid-like depends on the
time scale at which it is deformed. A typical example is a suspension of hydrogel particles:
at the lowest accessible frequencies the response is viscous-like, with a loss modulus that is
much larger than the storage modulus, while at the highest frequencies accessed the storage
modulus dominates the response, indicating solid-like behaviour.
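The decomposition σ(t) = G′γ₀ sin(ωt) + G″γ₀ cos(ωt) suggests how G′ and G″ can be recovered from a sampled stress waveform: project the measured signal onto its in-phase and quadrature components. The following Python sketch is illustrative only (it is not rheometer firmware, and the function name is our own):

```python
import numpy as np

def moduli_from_waveforms(t, sigma, gamma0, omega):
    """Least-squares recovery of G' and G'' from sampled stress data,
    assuming sigma(t) = G'*gamma0*sin(w*t) + G''*gamma0*cos(w*t).

    t      : sample times (s), ideally spanning whole oscillation periods
    sigma  : measured stress samples (Pa)
    gamma0 : strain amplitude (dimensionless)
    omega  : oscillation frequency (rad/s)
    """
    # Design matrix: in-phase (sin) and quadrature (cos) basis functions
    A = gamma0 * np.column_stack([np.sin(omega * t), np.cos(omega * t)])
    (Gp, Gpp), *_ = np.linalg.lstsq(A, sigma, rcond=None)
    return Gp, Gpp   # G', G''
```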
Oscillatory rheology is a valuable tool for studying the mechanical behaviour of soft materi-
als. Recent studies suggest that the nonlinear viscoelastic behaviour contains valuable informa-
tion about the dynamics of these systems.[11] Thus measurements at large strain deformations
should lead to a better understanding of the physical mechanisms that govern their behaviour.
Chapter 4
Marquardt Levenberg Algorithm
4.1 Objective function
The method we have used is as follows. First we plot the values of the storage
modulus (G′) and the loss modulus (G″) at the different frequencies ω of the experimental data,
giving two non-linear curves. We then guess initial values of g and λ and use them to compute
G′ and G″ at the same frequencies. Plotting these calculated values of G′ and G″ on the graph
of the experimental values reveals a difference between the calculated and experimental
curves. The problem thus becomes one of curve fitting: we treat the difference between the
calculated and experimental values as an error, and we apply minimization algorithms.[7]
Our objective is to minimize the distance between the two curves; in other words, the problem
is curve fitting by error minimization. For this we calculate the goodness of fit (GOF)

χ² = Σᵢ₌₁ᴹ ( (Observedᵢ − Expectedᵢ) / Expectedᵢ )²    (4.1)
where M is the number of observations.
By checking the value of the goodness of fit χ² from Eq. 4.1, we can also set the termination
condition.[1]
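Eq. 4.1 transcribes directly into code. The thesis toolkit itself is written in Matlab; this short Python sketch is an illustrative equivalent:

```python
import numpy as np

def chi_squared(observed, expected):
    """Goodness of fit of Eq. (4.1): the sum over the M observations of
    the squared relative deviation between observed and expected values."""
    observed = np.asarray(observed, dtype=float)
    expected = np.asarray(expected, dtype=float)
    return float(np.sum(((observed - expected) / expected) ** 2))
```

In the fitting loop, `observed` would hold the experimental moduli and `expected` the moduli computed from the current trial spectrum; iteration stops once χ² falls below a chosen tolerance.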
4.2 Algorithm
The Levenberg-Marquardt algorithm was developed by Kenneth Levenberg and Donald
Marquardt; it provides a numerical solution to the problem of minimizing a non-linear function.
Its advantages are speed and stable convergence.
The gradient descent algorithm is also known as the error backpropagation (EBP) algorithm.
Many improvements to this algorithm have been made, but they have failed to produce a
major effect. The gradient descent algorithm is still widely used today; however, it is known
to be inefficient because its convergence is slow. There are two main reasons for the slow
convergence. The first is that the step sizes should be matched to the gradients: logically,
small steps should be taken where the gradient is steep, so as not to rattle out of the required
minimum through oscillation. So if the step size is a constant, it must be chosen small, and
then in regions where the gradient is gentle the training process becomes very slow. The
second reason is that the curvature of the error surface may not be the same in all directions
(for example, the Rosenbrock function), so the error-valley problem may occur and result in
slow convergence. The step-size problem is illustrated in figure 4.1.
In the Levenberg–Marquardt algorithm, the slow convergence of the gradient descent method
is greatly improved by using the Gauss–Newton algorithm. Like Newton's method, Gauss–
Newton uses curvature information about the error surface, which allows it to find suitable
step sizes in each direction, and its convergence rate is very fast. [12]
Figure 4.1: Searching process of the steepest descent method with different learning constants: left
trajectory is for small learning constant that leads to slow convergence; right side trajectory is for large
learning constant that causes oscillation i.e. divergence
The Levenberg–Marquardt algorithm blends the steepest descent method and the Gauss–
Newton algorithm. It takes the speed advantage of Gauss–Newton and the stability advantage
of steepest descent. Because of these two properties it is more robust than the Gauss–Newton
algorithm: in many cases it converges well even if the error surface is much more complex
than the quadratic situation. Although the Levenberg–Marquardt algorithm tends to be a bit
slower than Gauss–Newton (in convergent situations), it converges much faster than steepest
descent.
The basic idea of the Levenberg–Marquardt algorithm is that it performs a combined process
by blending these two algorithms, picking the stability of gradient descent and the speed of
Gauss–Newton. It chooses gradient descent when the current point is far from the optimum,
and switches towards Gauss–Newton when it is near the minimum, to avoid the slow-convergence
problem. If, however, a Gauss–Newton step moves in the wrong direction because of its
instability, the algorithm immediately falls back to gradient descent to correct the search
direction. In this way the Levenberg–Marquardt algorithm works like a partnership between
a deaf person and a blind person: each compensates for the other's weakness. [12]
4.3 Stepwise Derivation
In this part, the derivation of the Levenberg–Marquardt algorithm is presented in five parts:
• Steepest descent algorithm
• Newton's method
• Gauss–Newton algorithm
• Damping factor
• Levenberg–Marquardt algorithm
4.3.1 Gradient Descent Algorithm
The steepest descent algorithm is a first-order algorithm: it uses the first-order derivative of
the cost function to find the minimum. Normally, the gradient g is defined as the first-order
derivative of the cost function. With this definition of the gradient, the update rule of the
steepest descent algorithm can be written as

θ₁ = θ₀ − α (∂/∂θ) J(θ₀, θ₁)    (4.2)

where
J = cost function, i.e. error = predicted − actual,
θ = parameter vector,
α = step size / learning rate.
To go further with this algorithm we need a suitable step size, because the step size causes
problems when it is either too high or too low. We will study this using a graphical interpretation
of the function Z(x, y) = x² + 2y².
This function is plotted in three dimensions, but to visualize it clearly, top-view snapshots
of the algorithm's progress are shown. The red lines are the levels, or contours, of the function.
In Figure 4.3 the starting point of the algorithm is initialized; using the negative gradient, it
starts and moves towards the minimum. [8]
Similarly, in the second graph it again moves towards the minimum. But if we observe
Figure 4.5, we can see that the algorithm is moving towards the minimum in the right direction,
yet it is becoming slower and slower. This is the main drawback of the gradient descent
algorithm, and it can be seen very clearly in Figure 4.7. This happens because we are taking
small steps, having started with a small step size. Let us check with a large step size. [8]
Now we take a large step and check the direction and step size of the gradient descent
algorithm. In Figure 4.9, having started with a large step, the iterates move in the direction of
another local minimum; the path does not even head straight into the same minimum, but keeps
jumping between valleys (minima). By visualizing these graphs we can understand the
significance and importance of the step size in the gradient descent method. [8]
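The behaviour shown in Figures 4.2–4.9 can be reproduced with a few lines of code (an illustrative Python sketch of steepest descent on Z(x, y) = x² + 2y²; the particular step sizes and starting point are hypothetical choices):

```python
def gradient_descent(alpha=0.1, steps=100, start=(2.0, 2.0)):
    """Steepest descent on Z(x, y) = x^2 + 2*y^2, whose gradient is (2x, 4y).
    Returns the trajectory of visited points."""
    x, y = start
    path = [(x, y)]
    for _ in range(steps):
        gx, gy = 2 * x, 4 * y              # gradient of Z at the current point
        x, y = x - alpha * gx, y - alpha * gy
        path.append((x, y))
    return path

small = gradient_descent(alpha=0.05)[-1]   # small step: slow but steady approach to (0, 0)
large = gradient_descent(alpha=0.6)[-1]    # step too large: the y-coordinate oscillates and diverges
```

Plotting `path` over the contours of Z reproduces the trajectories of the figures: the small step size creeps towards the minimum, while the large one overshoots the valley at every iteration.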
Figure 4.2: GD Step 1
Figure 4.3: GD Step 2
Figure 4.4: GD Step 3
Figure 4.5: GD Step 4
Figure 4.6: GD Step 5
Figure 4.7: GD Step 6
Figure 4.8: GD Step 7
Figure 4.9: GD Step 8
4.3.2 Newton's Method
The geometric interpretation of Newton's method is that at each iteration one approximates
f(x) by a quadratic function around Xn, and then takes a step towards the maximum/minimum
of that quadratic function (in higher dimensions, this may also be a saddle point). Note that if
f(x) happens to be a quadratic function, then the exact extremum is found in one step.
The mathematical formula for Newton's method is

X_{n+1} = X_n − f′(X_n) / f″(X_n)    (4.3)

Since the first and second derivatives are, by definition, the gradient g and the Hessian H
respectively, the equation above can be written as

X_{n+1} = X_n − g / H    (4.4)

X_{n+1} = X_n − H⁻¹ g    (4.5)

The graphical interpretation of Newton's method is shown in Figure 4.10: starting from a
point X_n, a tangent is drawn at X_n and the point where it intersects the x-axis is found.
Projecting that point back onto the curve gives the next point X_{n+1}. In the same way we
move towards a minimum or maximum.
Figure 4.10: Interpretation of Newton Method
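The update rule of Eq. 4.3 can be sketched as follows (an illustrative Python snippet; the quadratic test function is a hypothetical example, chosen because Newton's method finds its exact minimum in a single step):

```python
def newton_minimize(fprime, fsecond, x0, iters=20):
    """Newton's method for optimization, Eq. 4.3:
    x_{n+1} = x_n - f'(x_n) / f''(x_n)."""
    x = x0
    for _ in range(iters):
        x = x - fprime(x) / fsecond(x)
    return x

# For the quadratic f(x) = (x - 3)^2 we have f'(x) = 2(x - 3) and f''(x) = 2,
# so the exact minimum x = 3 is reached in one iteration, as noted in the text.
x_min = newton_minimize(lambda x: 2 * (x - 3), lambda x: 2.0, x0=10.0, iters=1)
```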
4.3.3 Gauss–Newton Method
As we have seen in Newton's method (Eq. 4.4), we need to calculate the Hessian, that is,
the second derivative of the function, which requires a large amount of computation. To avoid
this, the great mathematician Carl Friedrich Gauss modified Newton's method so that the
second-derivative computation is not needed. The method carries the names of Carl Friedrich
Gauss and Isaac Newton: the Gauss–Newton method, which is used to solve nonlinear least-
squares problems.
Starting from the Newton update

X_{n+1} = X_n − H⁻¹ g    (4.6)

the Hessian is approximated using only the Jacobian J, giving

X_{n+1} = X_n − (JᵀJ)⁻¹ g    (4.7)
4.3.4 Damping Factor
We have now reached the Gauss–Newton method, which does not require second-derivative
computation. However, it does involve a matrix inversion; if that matrix is degenerate (singular)
or only positive semi-definite, its inverse cannot be computed. To avoid this, Marquardt
suggested an adjustment similar to regularization: he added an identity matrix with a scalar
multiplier µ, called the damping factor. [12]
The Jacobian of the residual functions f_i(x) is

J_{i,j} = ∂f_i / ∂x_j    (4.8)

If our cost function is built from the residuals f_i(x), then its Hessian, i.e. the matrix of
second derivatives, can be written as

H_{j,k} = Σ_{i=1}^{M} [ (∂f_i/∂x_j)(∂f_i/∂x_k) + f_i(x) ∂²f_i/(∂x_j ∂x_k) ]    (4.9)

Neglecting the second-order term, the Hessian can be approximated using only the Jacobian,

H ≈ JᵀJ    (4.10)

with the corresponding gradient

g = Jᵀf    (4.11)

Adding Marquardt's damping term, the update rule becomes

X_{n+1} = X_n − (JᵀJ + µI)⁻¹ g    (4.12)
Using the new value X_{n+1}, we can calculate the value of χ² and switch the behaviour of the
algorithm as required.
If χ² is decreasing, we are moving towards the minimum and want to speed the algorithm
up in order to reach the minimum quickly, so we decrease the value of µ and the method
switches towards Gauss–Newton.
If χ² is increasing, we are moving away from the minimum and need to correct the search
direction so that the algorithm heads towards the minimum again, so we increase the value of
µ and the method switches towards gradient descent. [12]

Algorithm              Update rule                              Convergence      Computation
Gradient descent       W_{k+1} = W_k − α g_k                    Stable, slow     Gradient
Newton                 W_{k+1} = W_k − H_k⁻¹ g_k                Unstable, fast   Gradient and Hessian
Gauss–Newton           W_{k+1} = W_k − (JᵀJ)⁻¹ g_k              Unstable, fast   Gradient and Jacobian
Levenberg–Marquardt    W_{k+1} = W_k − (JᵀJ + µI)⁻¹ g_k         Stable, fast     Gradient and Jacobian

Table 4.1: Specifications of the different algorithms
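The complete damping loop, Eq. 4.12 together with the µ-adaptation rule described above, can be sketched as follows. This is an illustrative Python example on a toy two-parameter model y = a·exp(b·t); the model, the starting values, and the factor-of-10 updates for µ are assumptions for demonstration, not the thesis implementation:

```python
import math

def lm_fit(ts, ys, a=1.0, b=0.0, mu=1e-2, iters=50):
    """Levenberg-Marquardt on the toy model y = a*exp(b*t) (illustrative).
    Update: p <- p - (J^T J + mu*I)^(-1) J^T f, with mu adapted each step."""
    def residuals(a, b):
        return [a * math.exp(b * t) - y for t, y in zip(ts, ys)]

    def chi2(a, b):
        return sum(r * r for r in residuals(a, b))

    for _ in range(iters):
        f = residuals(a, b)
        # Jacobian rows: [d r_i / d a, d r_i / d b]
        J = [[math.exp(b * t), a * t * math.exp(b * t)] for t in ts]
        # Normal-equation terms: J^T J (2x2) and gradient g = J^T f
        JTJ = [[sum(Ji[r] * Ji[c] for Ji in J) for c in range(2)] for r in range(2)]
        g = [sum(Ji[r] * fi for Ji, fi in zip(J, f)) for r in range(2)]
        # Damped system (J^T J + mu*I), solved by Cramer's rule for 2 unknowns
        A = [[JTJ[0][0] + mu, JTJ[0][1]], [JTJ[1][0], JTJ[1][1] + mu]]
        det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
        da = (A[1][1] * g[0] - A[0][1] * g[1]) / det
        db = (A[0][0] * g[1] - A[1][0] * g[0]) / det
        a_new, b_new = a - da, b - db
        if chi2(a_new, b_new) < chi2(a, b):
            a, b, mu = a_new, b_new, mu / 10   # chi^2 decreasing: act like Gauss-Newton
        else:
            mu *= 10                           # chi^2 increasing: act like gradient descent
    return a, b
```

Rejected steps leave the parameters unchanged and only raise µ, which is exactly the fallback to gradient descent described above.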
Chapter 5
Result
By minimizing the objective function χ² we obtain values of [g_i, λ_i], where i = 1…N. Putting
these values into Equation 1.3, we get values of G′ and G″ at a number of frequencies ω. The
plots in Figure 5.1 fit each other almost perfectly, so we can conclude that the result we
obtained, i.e. the values of g and λ, is correct: we calculate G′ and G″ from the fitted
parameters and compare them with the experimental values.
To cross-validate these results we can recalculate the values of G′ and G″ at different
frequencies (the experimental values of ω) and then calculate the relative error, the percentage
error, or the AAD. In this case we have calculated the AAD.
Algorithm                        ω vs G′      ω vs G″
Levenberg–Marquardt algorithm    0.0088938    -0.01532

Table 5.1: AAD value of the LM algorithm
Figure 5.1: (a) Graph of ω vs G′; (b) Graph of ω vs G″
By checking the AAD values in the table above, we can conclude that our curves fit very well.
In this type of problem we can check the accuracy of the results in two ways: by visualizing
the graphs or by checking the error values. In this case the graphs match almost perfectly and
the error is very small, so we can say that our results are accurate.
Chapter 6
Using Genetic Algorithm
6.1 Biological Terminology
Before moving towards the genetic algorithm, we first need to understand the biological
terminology, as it is used throughout this thesis. In real biology these terms play an important
role, though the entities they refer to here are much simpler than the real biological ones. All
living organisms consist of cells, and each cell contains the same set of one or more chromosomes,
i.e. strings of DNA. A chromosome can be conceptually divided into genes, each of which
encodes a particular protein; for example, a gene may determine eye colour. The different
possible "settings" for a gene (e.g., blue, brown, hazel) are called alleles. Every gene has its
specific position in the chromosome. Many organisms have multiple chromosomes in each cell.
The complete collection of genetic material (all chromosomes taken together) is called the
organism's genome. [2]
An organism's genotype is its full hereditary information; examples are the gene responsible
for eye colour or the gene responsible for height. The organism's phenotype comprises its actual
observed properties, such as morphology, development, or behaviour; examples are the eye
colour or the height itself. In short, the genotype consists of the responsible parameters, which
we cannot observe directly, while the phenotype depends on the genotype and is what we can
observe.
In genetic algorithms, the term chromosome typically refers to a candidate solution to a
problem, often encoded as a bit string. The "genes" are either single bits or short blocks of
adjacent bits that encode a particular element of the candidate solution (e.g., in the context of
multiparameter function optimization, the bits encoding a particular parameter might be
considered a gene). In a bit string each allele is either 0 or 1; for larger alphabets, more alleles
are possible at each locus. Crossover typically consists of exchanging genetic material between
two single-chromosome parents, and mutation consists of flipping the bit at a randomly chosen
position. [2]
6.2 Evolutionary computation
Evolutionary computation is a subfield of artificial intelligence (more particularly computational
intelligence) that can be defined by the type of algorithms it is dealing with. Technically they
belong to the family of trial and error problem solvers and can be considered global optimization
methods with a metaheuristic or stochastic optimization character, distinguished by the use of
a population of candidate solutions rather than just iterating over one point in the search space.
They are mostly applied for black box problems (no derivatives known), often in the context of
expensive optimization.[9]
Broadly speaking, the field includes:
• Ant colony optimization
• Random walk
• Simulated Annealing
• Multicanonical jump walk annealing
• Genetic algorithm
• Harmony search
The field known as evolutionary computation consists of methods for problem solving by
simulating the natural evolution process. Evolutionary algorithms are computing techniques
based on iteratively applying random variations and subsequent selection over a population
(set) of prospective solution instances.
The following subsection is meant only to provide a general overview of the GA rationale
and some background on how to interpret the obtained results. Its purpose is to introduce the
contributions the method can offer to the problem domain in focus. The stochastic nature of
(well-designed) GAs allows an important observation about them as optimization tools: since
a global optimum exists, there are good chances that its surroundings will eventually be
approached, although it is not possible to know in advance how many generations this will take.
A number of variants of the canonical GA have been developed to make it applicable to
search spaces other than binary ones; these include real-encoded chromosomes, which is the
case for the problem considered here.
6.3 Introduction & Flowchart
A genetic algorithm (GA) is a method for solving both constrained and unconstrained opti-
mization problems based on a natural selection process that mimics biological evolution. The
algorithm repeatedly modifies a population of individual solutions. At each step, the genetic
algorithm randomly selects individuals from the current population and uses them as parents
to produce the children for the next generation. Over successive generations, the population
”evolves” toward an optimal solution.
You can apply the genetic algorithm to solve problems that are not well suited to standard
optimization algorithms, including problems in which the objective function is discontinuous,
nondifferentiable, stochastic, or highly nonlinear.
The flowchart shown in Fig. 6.1 explains the working of the genetic algorithm in detail; it is
explained stepwise in the following sections. As per the flowchart, the algorithm starts from
an initialization step, in which we decide the size and the number of chromosomes and then
initialize them with random numbers. Using the initial values of the chromosomes we calculate
the fitness value using the optimization function. According to the fitness value we assign a
rank to each chromosome, and based on this rank we select chromosomes. To improve the
solution we then apply the genetic operators, crossover and mutation, to the chromosomes.
After running the whole cycle for a number of iterations, we obtain an optimal chromosome,
from which we can calculate the optimal solution.
Figure 6.1: Flowchart of Genetic Algorithm
Each operator and step of the algorithm is explained below, with basic information and its
implementation in our problem statement. [2]
A key decision in implementing a genetic algorithm is which genetic operators to use. This
decision depends greatly on the encoding strategy. Here I will discuss crossover and mutation.
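The cycle of Fig. 6.1 can be sketched as a generic loop (an illustrative Python skeleton, not the thesis's MATLAB implementation; the operator call signatures and the minimization convention for fitness are assumptions):

```python
import random

def genetic_algorithm(fitness, init, select, crossover, mutate, generations=100):
    """Skeleton of the GA cycle in Fig. 6.1: initialize a population, evaluate
    and rank it by fitness, select parents, apply crossover and mutation to
    breed children, repeat; finally return the best chromosome found.
    Here `fitness` is an error to be minimized (lower is better)."""
    population = init()
    for _ in range(generations):
        ranked = sorted(population, key=fitness)       # evaluate and rank
        parents = select(ranked)                       # selection step
        children = []
        while len(children) < len(population):
            c1, c2 = crossover(*random.sample(parents, 2))
            children += [mutate(c1), mutate(c2)]       # genetic operators
        population = children[:len(population)]
    return min(population, key=fitness)
```

A usage example, minimizing f(x) = x² over one-gene chromosomes with midpoint crossover and Gaussian mutation, is shown in the test below; any of the concrete operators described in the following sections can be plugged in instead.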
6.4 Crossover
In genetic algorithms, crossover is a genetic operator used to vary the programming of a
chromosome or chromosomes from one generation to the next. It is analogous to reproduction
and biological crossover, upon which genetic algorithms are based. Crossover is the process of
taking more than one parent solution and producing a child solution from them. There are
several crossover methods, which are described below. [9]
6.4.1 One-point crossover
One-point crossover selects a single point on the chromosome and divides the chromosome into
two parts at that point. To build a new child, it takes the first part of the first parent and the
second part of the second parent. This is illustrated in Figure 6.2, in which the first parent is
red and the other is blue; after one-point crossover they swap their second parts, and the
resulting children look like combinations of the two chromosomes. [9]
Figure 6.2: One Point Crossover
6.4.2 Two-point crossover
In two-point crossover, two points are marked on the two parent chromosomes, and the portion
between them is swapped between the parents. This is illustrated in Figure 6.3, in which the
two points are marked and the red portion within that range is exchanged with the blue
portion.
Figure 6.3: Two Point Crossover
6.4.3 Cut and splice
Another crossover variant, the ”cut and splice” approach, results in a change in length of the
children strings. The reason for this difference is that each parent string has a separate choice
of crossover point.
Figure 6.4: Cut Splice Crossover
6.4.4 Uniform Crossover and Half Uniform Crossover
The Uniform Crossover uses a fixed mixing ratio between two parents. Unlike one- and two-
point crossover, the Uniform Crossover lets the parent chromosomes contribute at the gene
level rather than the segment level. If the mixing ratio is 0.5, the offspring has approximately
half of its genes from the first parent and the other half from the second parent, although the
crossover points can be chosen randomly, as seen below.
Figure 6.5: Uniform Crossover and Half Uniform Crossover
6.4.5 Blend Crossover
This crossover operator is a kind of linear combination of the two parents that uses the following
equations for each gene: the difference between the two parents, multiplied by a blending
factor, is subtracted from one parent and added to the other. This gives two new children; it
is nothing but a linear combination of the chromosomes.

Child1 = parent1 − b ∗ (parent1 − parent2)    (6.1)

Child2 = parent2 + b ∗ (parent1 − parent2)    (6.2)

where b is a random value between 0 and 1. This operator is implemented for real and integer
genes only.
In this problem I have implemented blend crossover to create the children for the next generation. [9]
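Blend crossover per Eqs. 6.1 and 6.2 can be sketched as follows (an illustrative Python snippet, not the thesis code):

```python
import random

def blend_crossover(parent1, parent2, b=None):
    """Blend crossover, Eqs. 6.1 and 6.2, applied gene by gene:
    child1 = p1 - b*(p1 - p2),  child2 = p2 + b*(p1 - p2)."""
    if b is None:
        b = random.random()          # blending factor in [0, 1)
    child1 = [p1 - b * (p1 - p2) for p1, p2 in zip(parent1, parent2)]
    child2 = [p2 + b * (p1 - p2) for p1, p2 in zip(parent1, parent2)]
    return child1, child2
```

Note that with b = 0.5 both children land at the midpoint of the parents, and with b = 0 the parents are returned unchanged; intermediate values of b interpolate between these extremes.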
6.5 Mutation
Mutation is a genetic operator used to create genetic diversity from one generation of a
population of genetic algorithm chromosomes to the next. It is analogous to biological mutation.
Mutation alters one or more gene values in a chromosome from its initial state; the mutated
solution may differ entirely from the previous one. In short, the solution is transferred from
one point to another point, and the GA can thus arrive at a better solution. Mutation occurs
during evolution according to a user-definable mutation probability, which should be set low:
if it is set too high, the search turns into a primitive random search, because a solution that is
already close to the minimum may be shifted away from it. Using mutation, we want to move
solutions that are stuck in a local minimum. The classic example of a mutation operator
involves a probability that an arbitrary bit in a genetic sequence will be changed from its
original state. The purpose of mutation in GAs is to preserve and introduce diversity: mutation
should allow the algorithm to avoid local minima by preventing the population of chromosomes
from becoming too similar to each other, which would slow or even stop evolution. This
reasoning also explains why most GA systems avoid taking only the fittest of the population
in generating the next generation, using instead a random (or semi-random) selection weighted
toward fitter individuals. Different mutation types are suitable for different genome types. [9]
Since the end goal is to bring the population to convergence, selection and crossover happen
more frequently (typically every generation). Mutation, being a divergence operation, should
happen less frequently and typically affects only a few members of a population (if any) in a
given generation.
6.5.1 Flip Bit
This mutation operator takes the chosen genome and inverts the bits (i.e. if the genome bit is
1, it is changed to 0 and vice versa).
6.5.2 Boundary
This mutation operator replaces the genome from either lower or upper bound randomly. This
can be used for integer and float genes.
6.5.3 Uniform
This operator replaces the value of the chosen gene with a uniform random value selected be-
tween the user-specified upper and lower bounds for that gene. This mutation operator can
only be used for integer and float genes.
6.5.4 Non-Uniform
The non-uniform mutation operator increases the probability that the amount of mutation
goes to 0 as the generation number increases. It keeps the population from stagnating in the
early stages of the evolution and fine-tunes the solution in the later stages. This mutation
operator can only be used for integer and float genes.
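As a small illustration of the operators above, uniform mutation for real-valued genes can be sketched as follows (Python assumed; the bounds and mutation probability are placeholders for the problem-specific settings):

```python
import random

def uniform_mutation(chromosome, lower, upper, p_mut=0.01):
    """Uniform mutation for real-valued genes: with probability p_mut,
    replace a gene by a uniform random value in [lower, upper];
    otherwise leave the gene unchanged."""
    return [random.uniform(lower, upper) if random.random() < p_mut else gene
            for gene in chromosome]
```

Keeping `p_mut` low, as discussed above, means most genes pass through unchanged and only an occasional gene is resampled from the search interval.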
Chapter 7
Objective function & Method
7.1 Objective Function
The experiment most commonly employed to determine the relaxation spectrum of a polymer
is small-amplitude oscillatory shear. This experiment measures linear viscoelastic material
properties, namely the storage modulus G′(ω) and the loss modulus G″(ω) of the dynamic
shear, as functions of frequency, which may be expressed as follows:

G′(ω) = Σ_{i=1}^{N} g_i ω²λ_i² / (1 + λ_i²ω²)    (7.1)

G″(ω) = Σ_{i=1}^{N} g_i ωλ_i / (1 + λ_i²ω²)    (7.2)

[4]
where ω is a frequency, g_i is a relaxation strength, and λ_i is a relaxation time.
The estimation process consists in properly setting the pairs (g_i, λ_i) such that G′(ω) and
G″(ω) from Eqs. 7.1 and 7.2 best fit the experimental data. These equations give the storage
modulus and the loss modulus of the dynamic shear, respectively. It has already been observed
that spectrum estimation from experimental data is an ill-posed problem: consequently, minor
inaccuracies in the input data may produce major errors in the solution spectra. This
characteristic may lead to inadequate solutions with relatively high regression errors, and to
spectra with an unrealistic wavy aspect of the fitted curve because of the excessive degrees of
freedom. [4]
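Eqs. 7.1 and 7.2 are straightforward to evaluate for a discrete relaxation spectrum. A minimal sketch (Python here, although the thesis toolkit itself targets MATLAB/GNU-Octave):

```python
def storage_loss_moduli(w, g, lam):
    """Discrete Maxwell spectrum, Eqs. 7.1 and 7.2:
    G'(w)  = sum_i g_i * w^2 * lam_i^2 / (1 + lam_i^2 * w^2)
    G''(w) = sum_i g_i * w   * lam_i   / (1 + lam_i^2 * w^2)"""
    gp = sum(gi * (w * li) ** 2 / (1 + (li * w) ** 2) for gi, li in zip(g, lam))
    gpp = sum(gi * w * li / (1 + (li * w) ** 2) for gi, li in zip(g, lam))
    return gp, gpp
```

For a single mode, G′ and G″ cross at ωλ = 1, where both equal g/2; this is a quick sanity check on any implementation.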
7.1.1 Methods
This thesis introduces a nonanalytical regression approach based on evolutionary algorithms
for the determination of relaxation spectra, which is suited to handling ill-posed problems and,
like nonlinear regression methods, can also find all model parameters. The field of evolutionary
computation investigates nondeterministic search algorithms for complex optimization
problems. These techniques can simultaneously iterate several solutions and combine the more
promising ones to generate a new, improved set of solutions. EAs have been successfully
applied to problems with nonlinear or even discontinuous objective functions, nonconvex
objective function spaces, nonconvex search spaces, as well as ill-posed problems.

Operator                                                              Value
Population size / number of chromosomes                               100
Size of chromosome                                                    8
Search space for random number generation to fill up chromosomes      10⁻³ to 10³
Substitution rate                                                     0.3
Crossover probability                                                 0.7
Mutation probability                                                  0.01
Mutation range                                                        0 to 1
Blend factor                                                          0.5

Table 7.1: GA Parameters
7.1.2 GA Parameters Used in the Experiments
• Population size / number of chromosomes: 100
• Size of chromosome: 8
• Search space for random number generation to fill up chromosomes: 10⁻³ to 10³
• Substitution rate: 0.3
• Crossover probability: 0.7
• Mutation probability: 0.01
• Mutation range: 0 to 1
• Blend factor: 0.5
In our experiment we initialized 100 chromosomes with random numbers in the search-space
interval 10⁻³ to 10³. Since we want 8 relaxation modes, the size of each chromosome is 8, so
each chromosome has 8 pairs of g and λ from the interval 10⁻³ to 10³. We then applied the
genetic operators, which are explained in detail below. A substitution rate of 0.3 means we
change, or flip, 30% of a chromosome with another chromosome. [4]
7.2 Initialization
The population size depends on the nature of the problem, but it typically contains several
hundred or several thousand possible solutions. In a genetic algorithm we initialize our
chromosomes with random numbers within the search-space interval. The search space is the
probable range, or interval, of numbers used to calculate the output. If we are dealing with a
binary genetic algorithm, we initialize the chromosomes with binary inputs 0 and 1.
Occasionally, the solutions may be "seeded" in areas where optimal solutions are likely to be
found.
In our problem we have initialized K chromosomes with random numbers generated within
the search space. Since the ranges for g and λ differ, with g in decreasing order and λ in
increasing order, deciding or fixing the search space is a critical step; moreover, a large search-
space interval gives us accuracy but requires extra time.
7.3 Fitness Values
After initializing the chromosomes, we can use our objective function to calculate an output
value, which is our predicted or calculated value. By comparing the predicted value with the
actual one we obtain an error value, and using that error value we can order the chromosomes.
This value helps us in the selection step: using the fitness score we can decide which chromosome
is best and which is worst.
In our algorithm the objective is to minimize the error. Therefore the objective function is

M(G′, G″, C) = MSE(G′_c) + MSE(G″_c)    (7.4)

where MSE is the mean square error. Based on the mean-square-error values we can sort the
chromosomes and then select the best chromosomes for the creation of the next generation.
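The fitness of Eq. 7.4 can be sketched as follows (an illustrative Python snippet; the function name is a placeholder):

```python
def fitness_value(gp_calc, gp_exp, gpp_calc, gpp_exp):
    """Fitness per Eq. 7.4: the sum of the mean square errors of G' and G''
    between calculated and experimental values. Lower is better."""
    def mse(calc, exp):
        return sum((c - e) ** 2 for c, e in zip(calc, exp)) / len(calc)
    return mse(gp_calc, gp_exp) + mse(gpp_calc, gpp_exp)
```

Sorting the population by this value gives the ranking used in the selection step.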
7.4 Selection
Having decided on a fitness measure, the second decision to make in using a genetic algorithm
is how to perform selection, that is, how to choose the individuals in the population that will
create children for the next generation, and how many children each will create. The aim of
selection is to favour the fitter chromosomes in the population, in the expectation that their
children will have higher fitness than their parents'. Selection has to be balanced with the
variation introduced by crossover and mutation: if the selection process is too strong,
suboptimal but highly fit individuals will take over the population, reducing the diversity
needed for further change and progress; if it is too weak, evolution will be too slow. As was
the case for encodings, numerous selection schemes have been proposed in the GA literature.
7.4.1 Roulette Wheel Selection
To produce new children/offspring, parents are selected according to their fitness: the better
the chromosomes are, the more chances they have of being selected. Imagine a roulette wheel
on which all chromosomes in the population are placed, each occupying a slot whose size
corresponds to its fitness, as in the following picture. If the slot of a chromosome is big, its
probability of being selected is high; if the slot is small, the probability is low. A marble is
then thrown onto the wheel and selects a chromosome; chromosomes with higher fitness will
be selected more often. [2]
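The marble-on-the-wheel picture translates directly into code (an illustrative Python sketch; note that for our error-minimization problem the raw error would first have to be transformed, e.g. inverted, so that larger fitness means a better chromosome):

```python
import random

def roulette_select(population, fitnesses):
    """Roulette-wheel (fitness-proportionate) selection: each chromosome
    occupies a slice of the wheel proportional to its fitness."""
    total = sum(fitnesses)
    pick = random.uniform(0, total)    # where the "marble" lands
    running = 0.0
    for individual, fit in zip(population, fitnesses):
        running += fit
        if pick <= running:
            return individual
    return population[-1]              # guard against floating-point round-off
```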
Figure 7.1: Roulette Wheel Selection
7.4.2 Rank Selection
Rank selection is an alternative method whose purpose is also to prevent too-quick convergence.
In the version proposed by Baker (1985), the individuals in the population are ranked according
to fitness, and the expected value of each individual depends on its rank rather than on its
absolute fitness. There is no need to scale fitness in this case, since absolute differences in
fitness are obscured. Discarding absolute fitness information can have advantages (using
absolute fitness can lead to convergence problems) and disadvantages (in some cases it might
be important to know that one individual is far fitter than its nearest competitor). Ranking
avoids giving the largest share of offspring to a small group of highly fit individuals, and thus
reduces the selection pressure when the fitness variance is high. It also keeps up the selection
pressure when the fitness variance is low: the ratio of the expected values of individuals ranked
i and i+1 is the same whether their absolute fitness differences are high or low. [2]
Fitness-proportionate selection runs into problems when the fitnesses differ very much. For
example, if the best chromosome's fitness occupies 90% of the roulette wheel, then the other
chromosomes have very few chances of being selected. Rank selection first ranks the population,
and then every chromosome receives a fitness value from this ranking: the worst has fitness 1,
the second worst 2, etc., and the best has fitness N (the number of chromosomes in the
population). You can see in the following picture how the situation changes after replacing
fitness by rank.
Figure 7.2: Before & After Rank Selection
After this, all the chromosomes have a chance of being selected. But this method can lead
to slower convergence, because the best chromosomes no longer differ so much from the others.
7.4.3 Tournament Selection
Tournament selection is a method of selecting an individual from a population of individuals
in a genetic algorithm. It is like forming groups, letting the members compete among themselves,
and choosing the chromosome that wins the tournament. The selection pressure is easily
adjusted by changing the tournament size: if the tournament size is larger, weak individuals
have a smaller chance of being selected. (Rank scaling, by contrast, requires sorting the entire
population by rank, a potentially time-consuming procedure.) In the simplest form, two
individuals are chosen at random from the population and a random number r between 0 and
1 is drawn. If r < k (where k is a parameter, for example 0.75), the fitter of the two individuals
is selected to be a parent; otherwise the less fit individual is selected. The two are then returned
to the original population and can be selected again. [2]
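The binary tournament just described can be sketched as follows (an illustrative Python snippet; fitness here is taken as larger-is-better):

```python
import random

def tournament_select(population, fitnesses, k=0.75):
    """Binary tournament selection: pick two individuals at random; with
    probability k return the fitter one, otherwise the less fit one."""
    i = random.randrange(len(population))
    j = random.randrange(len(population))
    fitter, weaker = (i, j) if fitnesses[i] >= fitnesses[j] else (j, i)
    chosen = fitter if random.random() < k else weaker
    return population[chosen]
```

Raising k towards 1 increases the selection pressure; lowering it towards 0.5 makes the tournament nearly a coin flip.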
Figure 7.3: Tournament Selection
7.4.4 Elitism
Sometimes the best chromosome can be lost when crossover or mutation results in children
that are weaker than the parents. To prevent this we can use a feature known as elitism.
Elitism involves copying a small proportion of the best chromosomes, unchanged, into the next
generation. This can sometimes have a dramatic impact on performance by ensuring that the
algorithm does not waste time rediscovering previously discarded partial solutions. The best
chromosomes that are preserved unchanged through elitism remain eligible for selection as
parents when breeding the remainder of the next generation. [2]
Chapter 8
Termination Criteria & Result
8.1 Optimal Chromosome
After running the algorithm until the termination criterion is met, it finds one optimal, or
best, chromosome. Using the gene values from the best chromosome we can calculate the
values of G′ and G″ at the different frequencies of the experimental data. To assess the model
fit, we plot these curves on the same plot as the experimental values. Suppose in our case we
get the best chromosome Cx; then we use its values of g and λ to calculate G′ and G″ for the
different frequencies ω. [2]
Figure 8.1: Graph of ω vs G′
8.2 Absolute average deviation (AAD)
Deviation is a measure of the difference between the observed value of a variable and some
other value, often that variable's mean. The sign of the deviation (positive or negative) reports
the direction of the difference (the deviation is positive when the observed value exceeds the
reference value), and the magnitude indicates the size of the difference. But we are not
interested in whether the sign is positive or negative; we just want to minimize the error. That
is why we consider the absolute value of the deviation, and hence the absolute average
deviation. [9]
Figure 8.2: Graph of ω vs G″

Algorithm                        ω vs G′      ω vs G″
Levenberg–Marquardt algorithm    0.0088938    -0.01532
Genetic Algorithm                0.0503931    0.022601

Table 8.1: AAD values of G′ & G″
AAD(G') = \frac{1}{M'} \sum_{j=1}^{M'} \left| \frac{\bar{G}'(w_j) - G'(w_j)}{\bar{G}'(w_j)} \right|   (8.1)

AAD(G'') = \frac{1}{M''} \sum_{j=1}^{M''} \left| \frac{\bar{G}''(w_j) - G''(w_j)}{\bar{G}''(w_j)} \right|   (8.2)
where
Ḡ′(w_j) and Ḡ″(w_j) are the calculated values,
G′(w_j) and G″(w_j) are the experimental values, and
M′ and M″ are the numbers of available data points for G′ and G″, respectively.
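Equations (8.1) and (8.2) can be evaluated with the same small routine, applied once to the G′ data and once to the G″ data. A Python sketch, with hypothetical calculated and experimental values:

```python
def aad(calculated, experimental):
    """Absolute average deviation, Eqs. (8.1)/(8.2): the mean of
    |(Gbar(w_j) - G(w_j)) / Gbar(w_j)| over the M data points,
    where `calculated` holds the model values Gbar(w_j)."""
    M = len(calculated)
    return sum(abs((c - e) / c) for c, e in zip(calculated, experimental)) / M

# Hypothetical calculated vs. experimental G' values at two frequencies:
aad_Gp = aad([10.0, 100.0], [9.5, 102.0])   # relative errors 5% and 2%
```

Running the same function over the G″ columns gives AAD(G″), so the two fits in Table 8.1 can be compared with a single routine.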
Chapter 9
Graphical User Interface (GUI)
Following are screenshots of the software toolkit, named "RheomlFit". It is an open-source
software toolkit. All files related to this toolkit are available at http://cms.unipune.ac.
in/~ajayj/RheomlFit.html.
The steps required to install and run this code are:
• For Windows Operating System-
1. Go to the link http://cms.unipune.ac.in/~ajayj/RheomlFit.html
2. Click the button to download the RheomlFit folder
3. Open the Windows folder and install RheomlFit.exe
• For Linux Operating System-
1. Go to the link http://cms.unipune.ac.in/~ajayj/RheomlFit.html
2. Click the button to download the RheomlFit folder
3. Open the Linux folder
4. Open the Matlab console or GNU Octave and run the 'gui' file. It will open the main window
of the GUI.
• How to operate the application-
1. After the installation process is complete, the main window of the software opens, as shown
in figure 9.1.
2. Press the Enter button; it opens a new window which displays the options, as shown in
figure 9.2.
3. Open Input File: opens a dialogue box for selecting the file. The input file must
be in .txt format without any headers. Shown in figure 9.3.
4. Number of Modes: lets the user insert or choose the number of relaxation modes. Shown in
figure 9.4.
5. Compute: starts the computation and simultaneously displays a small window showing the
status of the code, as shown in figures 9.5, 9.6, 9.7 and 9.8. The status is updated in
intervals of 20 percent.
6. After the computation is complete, the GUI displays the fitted curve, shown in figure 9.9.
7. Clicking the "Graph of Relaxation Spectra" button displays the curve of g vs λ, which is
generated from the output file. Shown in figure 9.9.
8. Clicking the "Download Output" button asks you to choose a directory for the output file;
after you select a folder, the output file is saved there automatically. Shown in figure 9.9.
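The report only states that the input must be a headerless .txt file. Assuming it holds three whitespace-separated columns per line (w, G′, G″), a loader might look like this Python sketch (the toolkit itself is written in Matlab/Octave, so this is illustrative only):

```python
def read_input(path):
    """Load a headerless .txt input file.  Assumed layout (not spelled
    out in the report): three whitespace-separated columns per line,
    holding w, G' and G''."""
    w, Gp, Gpp = [], [], []
    with open(path) as fh:
        for line in fh:
            if not line.strip():
                continue                 # tolerate blank lines
            a, b, c = map(float, line.split())
            w.append(a); Gp.append(b); Gpp.append(c)
    return w, Gp, Gpp
```

The three returned lists are exactly the inputs the fitting routines above need: the frequencies plus the experimental G′ and G″ values.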
Figure 9.1: Main window of GUI
Figure 9.2: Second window of GUI
Figure 9.3: File selection dialogue box
Figure 9.4: User input for number of modes
Figure 9.5: Waitbar of code started
Figure 9.6: Waitbar of code about to finish
Figure 9.7: Computation is done
Figure 9.8: Relaxation spectra graph button
Figure 9.9: Download output file
Chapter 10
Conclusion
The determination of the discrete relaxation spectrum of a viscoelastic material is a difficult
but important process, and it is well known to be an ill-posed problem. We have introduced a
nonlinear regression technique, based on the Marquardt–Levenberg procedure, for the
determination of the discrete relaxation spectrum, and we have also implemented a Genetic
Algorithm in order to fix the number of relaxation modes. To make this process user-friendly we
have designed a graphical user interface. We have published our code as open source for those
who want to extend the code, and for researchers who may want to analyze particular features of
the algorithm.
Bibliography
[1] M. Baumgaertel and H. Winter. Determination of discrete relaxation and retardation time
spectra from dynamic mechanical data. Rheologica Acta, vol. 28, 1989.
[2] D. A. Coley. An Introduction to Genetic Algorithms for Scientists and Engineers. World
Scientific, Singapore.
[3] J. D. Ferry. Viscoelastic Properties of Polymers, Third Edition. John Wiley and Sons, New
York.
[4] F. J. Monaco and A. C. B. Delbem. Genetic Algorithm for the Determination of Linear
Viscoelastic Relaxation Spectrum from Experimental Data. Wiley InterScience, ICMS,
University of São Paulo, Brazil.
[5] Pankaj Doshi, Harshawardhan Pol, Sourya Banik, Lal Busher Azad, Sumeet Thete, and Ashish
Lele. Nonisothermal analysis of extrusion film casting process using molecular constitutive
equations. Springer-Verlag Berlin Heidelberg, 2013.
[6] Harshawardhan V. Pol, Sumeet S. Thete, Pankaj Doshi, and Ashish K. Lele. Necking in
extrusion film casting: the role of macromolecular architecture. The Society of Rheology,
2013.
[7] J. Honerkamp and J. Weese. A nonlinear regularization method for the calculation of
relaxation spectra. Rheologica Acta, vol. 32, 1993.
[8] K. Madsen, H. B. Nielsen, and O. Tingleff. Methods for Non-Linear Least Squares Problems,
Second Edition. Informatics and Mathematical Modelling, Technical University of Denmark,
2004.
[9] Kumara Sastry and David Goldberg. Genetic Algorithms. University of Illinois, USA.
[10] RheoTec Messtechnik GmbH, Ottendorf-Okrilla. Introduction to Rheology.
[11] H. M. Wyss. Measuring the Viscoelastic Behaviour of Soft Materials. Harvard University,
Cambridge, USA.
[12] H. Yu and B. M. Wilamowski. Industrial Electronics Handbook, vol. 5: Intelligent Systems.
CRC Press, 2011.
Appendix A
Acronyms
DRS  Discrete Relaxation Spectra
GA   Genetic Algorithm
ML   Marquardt–Levenberg
GD   Gradient Descent
GN   Gauss–Newton
NLR  Non-Linear Regression
61

More Related Content

What's hot

Derya_Sezen_POMDP_thesis
Derya_Sezen_POMDP_thesisDerya_Sezen_POMDP_thesis
Derya_Sezen_POMDP_thesisDerya SEZEN
 
Mark Quinn Thesis
Mark Quinn ThesisMark Quinn Thesis
Mark Quinn ThesisMark Quinn
 
Virtual Environments as Driving Schools for Deep Learning Vision-Based Sensor...
Virtual Environments as Driving Schools for Deep Learning Vision-Based Sensor...Virtual Environments as Driving Schools for Deep Learning Vision-Based Sensor...
Virtual Environments as Driving Schools for Deep Learning Vision-Based Sensor...Artur Filipowicz
 
Efficient Model-based 3D Tracking by Using Direct Image Registration
Efficient Model-based 3D Tracking by Using Direct Image RegistrationEfficient Model-based 3D Tracking by Using Direct Image Registration
Efficient Model-based 3D Tracking by Using Direct Image RegistrationEnrique Muñoz Corral
 
FATKID - A Finite Automata Toolkit - NF Huysamen
FATKID - A Finite Automata Toolkit - NF HuysamenFATKID - A Finite Automata Toolkit - NF Huysamen
FATKID - A Finite Automata Toolkit - NF HuysamenNico Huysamen
 
Efficient algorithms for sorting and synchronization
Efficient algorithms for sorting and synchronizationEfficient algorithms for sorting and synchronization
Efficient algorithms for sorting and synchronizationrmvvr143
 
60969_Orsted2003-Morten Høgholm Pedersen-New Digital Techniques in Medical Ul...
60969_Orsted2003-Morten Høgholm Pedersen-New Digital Techniques in Medical Ul...60969_Orsted2003-Morten Høgholm Pedersen-New Digital Techniques in Medical Ul...
60969_Orsted2003-Morten Høgholm Pedersen-New Digital Techniques in Medical Ul...Morten Høgholm Pedersen
 
Tutorial guide
Tutorial guideTutorial guide
Tutorial guidenicobrain
 

What's hot (19)

feilner0201
feilner0201feilner0201
feilner0201
 
phd-thesis
phd-thesisphd-thesis
phd-thesis
 
Derya_Sezen_POMDP_thesis
Derya_Sezen_POMDP_thesisDerya_Sezen_POMDP_thesis
Derya_Sezen_POMDP_thesis
 
phd_unimi_R08725
phd_unimi_R08725phd_unimi_R08725
phd_unimi_R08725
 
Mark Quinn Thesis
Mark Quinn ThesisMark Quinn Thesis
Mark Quinn Thesis
 
Diederik Fokkema - Thesis
Diederik Fokkema - ThesisDiederik Fokkema - Thesis
Diederik Fokkema - Thesis
 
lapointe_thesis
lapointe_thesislapointe_thesis
lapointe_thesis
 
AuthorCopy
AuthorCopyAuthorCopy
AuthorCopy
 
Virtual Environments as Driving Schools for Deep Learning Vision-Based Sensor...
Virtual Environments as Driving Schools for Deep Learning Vision-Based Sensor...Virtual Environments as Driving Schools for Deep Learning Vision-Based Sensor...
Virtual Environments as Driving Schools for Deep Learning Vision-Based Sensor...
 
Efficient Model-based 3D Tracking by Using Direct Image Registration
Efficient Model-based 3D Tracking by Using Direct Image RegistrationEfficient Model-based 3D Tracking by Using Direct Image Registration
Efficient Model-based 3D Tracking by Using Direct Image Registration
 
FATKID - A Finite Automata Toolkit - NF Huysamen
FATKID - A Finite Automata Toolkit - NF HuysamenFATKID - A Finite Automata Toolkit - NF Huysamen
FATKID - A Finite Automata Toolkit - NF Huysamen
 
Flth
FlthFlth
Flth
 
HASMasterThesis
HASMasterThesisHASMasterThesis
HASMasterThesis
 
Efficient algorithms for sorting and synchronization
Efficient algorithms for sorting and synchronizationEfficient algorithms for sorting and synchronization
Efficient algorithms for sorting and synchronization
 
Thats How We C
Thats How We CThats How We C
Thats How We C
 
Frmsyl1213
Frmsyl1213Frmsyl1213
Frmsyl1213
 
BenThesis
BenThesisBenThesis
BenThesis
 
60969_Orsted2003-Morten Høgholm Pedersen-New Digital Techniques in Medical Ul...
60969_Orsted2003-Morten Høgholm Pedersen-New Digital Techniques in Medical Ul...60969_Orsted2003-Morten Høgholm Pedersen-New Digital Techniques in Medical Ul...
60969_Orsted2003-Morten Høgholm Pedersen-New Digital Techniques in Medical Ul...
 
Tutorial guide
Tutorial guideTutorial guide
Tutorial guide
 

Similar to thesis

1026332_Master_Thesis_Eef_Lemmens_BIS_269.pdf
1026332_Master_Thesis_Eef_Lemmens_BIS_269.pdf1026332_Master_Thesis_Eef_Lemmens_BIS_269.pdf
1026332_Master_Thesis_Eef_Lemmens_BIS_269.pdfssusere02009
 
Classification System for Impedance Spectra
Classification System for Impedance SpectraClassification System for Impedance Spectra
Classification System for Impedance SpectraCarl Sapp
 
UCHILE_M_Sc_Thesis_final
UCHILE_M_Sc_Thesis_finalUCHILE_M_Sc_Thesis_final
UCHILE_M_Sc_Thesis_finalGustavo Pabon
 
UCHILE_M_Sc_Thesis_final
UCHILE_M_Sc_Thesis_finalUCHILE_M_Sc_Thesis_final
UCHILE_M_Sc_Thesis_finalGustavo Pabon
 
Maxime Javaux - Automated spike analysis
Maxime Javaux - Automated spike analysisMaxime Javaux - Automated spike analysis
Maxime Javaux - Automated spike analysisMaxime Javaux
 
Integrating IoT Sensory Inputs For Cloud Manufacturing Based Paradigm
Integrating IoT Sensory Inputs For Cloud Manufacturing Based ParadigmIntegrating IoT Sensory Inputs For Cloud Manufacturing Based Paradigm
Integrating IoT Sensory Inputs For Cloud Manufacturing Based ParadigmKavita Pillai
 
Stochastic Processes and Simulations – A Machine Learning Perspective
Stochastic Processes and Simulations – A Machine Learning PerspectiveStochastic Processes and Simulations – A Machine Learning Perspective
Stochastic Processes and Simulations – A Machine Learning Perspectivee2wi67sy4816pahn
 
project Report on LAN Security Manager
project Report on LAN Security Managerproject Report on LAN Security Manager
project Report on LAN Security ManagerShahrikh Khan
 
Steganography final report
Steganography final reportSteganography final report
Steganography final reportABHIJEET KHIRE
 
Project report on Eye tracking interpretation system
Project report on Eye tracking interpretation systemProject report on Eye tracking interpretation system
Project report on Eye tracking interpretation systemkurkute1994
 

Similar to thesis (20)

Fraser_William
Fraser_WilliamFraser_William
Fraser_William
 
MS_Thesis
MS_ThesisMS_Thesis
MS_Thesis
 
1026332_Master_Thesis_Eef_Lemmens_BIS_269.pdf
1026332_Master_Thesis_Eef_Lemmens_BIS_269.pdf1026332_Master_Thesis_Eef_Lemmens_BIS_269.pdf
1026332_Master_Thesis_Eef_Lemmens_BIS_269.pdf
 
Classification System for Impedance Spectra
Classification System for Impedance SpectraClassification System for Impedance Spectra
Classification System for Impedance Spectra
 
UCHILE_M_Sc_Thesis_final
UCHILE_M_Sc_Thesis_finalUCHILE_M_Sc_Thesis_final
UCHILE_M_Sc_Thesis_final
 
UCHILE_M_Sc_Thesis_final
UCHILE_M_Sc_Thesis_finalUCHILE_M_Sc_Thesis_final
UCHILE_M_Sc_Thesis_final
 
Mak ms
Mak msMak ms
Mak ms
 
Maxime Javaux - Automated spike analysis
Maxime Javaux - Automated spike analysisMaxime Javaux - Automated spike analysis
Maxime Javaux - Automated spike analysis
 
Integrating IoT Sensory Inputs For Cloud Manufacturing Based Paradigm
Integrating IoT Sensory Inputs For Cloud Manufacturing Based ParadigmIntegrating IoT Sensory Inputs For Cloud Manufacturing Based Paradigm
Integrating IoT Sensory Inputs For Cloud Manufacturing Based Paradigm
 
Stochastic Processes and Simulations – A Machine Learning Perspective
Stochastic Processes and Simulations – A Machine Learning PerspectiveStochastic Processes and Simulations – A Machine Learning Perspective
Stochastic Processes and Simulations – A Machine Learning Perspective
 
HonsTokelo
HonsTokeloHonsTokelo
HonsTokelo
 
project Report on LAN Security Manager
project Report on LAN Security Managerproject Report on LAN Security Manager
project Report on LAN Security Manager
 
Steganography final report
Steganography final reportSteganography final report
Steganography final report
 
Thesis_Prakash
Thesis_PrakashThesis_Prakash
Thesis_Prakash
 
Analytical-Chemistry
Analytical-ChemistryAnalytical-Chemistry
Analytical-Chemistry
 
MSC-2013-12
MSC-2013-12MSC-2013-12
MSC-2013-12
 
Liebman_Thesis.pdf
Liebman_Thesis.pdfLiebman_Thesis.pdf
Liebman_Thesis.pdf
 
Project report on Eye tracking interpretation system
Project report on Eye tracking interpretation systemProject report on Eye tracking interpretation system
Project report on Eye tracking interpretation system
 
thesis
thesisthesis
thesis
 
thesis
thesisthesis
thesis
 

thesis

  • 1. Centre for Modeling and Simulation Savitribai Phule Pune University Master of Technology (M.Tech.) Programme in Modeling and Simulation Project Report Model Fitting & Optimization of Rheological Data Ajay Vishwas Jadhav CMS1327 Academic Year 2014-15
  • 2. 2
  • 3. Centre for Modeling and Simulation Savitribai Phule Pune University Certificate This is certify that this report, titled Model Fitting & Optimization of Rheological Data, authored by Ajay Vishwas Jadhav (CMS1327), describes the project work carried out by the author under our supervision during the period from January 2015 to June 2015. This work represents the project component of the Master of Technology (M.Tech.) Programme in Modeling and Simulation at the Center for Modeling and Simulation, Savitribai Phule Pune University. Dr. Harshawardhan Pol, Senior Scientist, CSIR-National Chemical Laboratory Polymer Sci.& Engg. Division Pune - 411008, India Dr. Sukratu Barve, Assistant Professor, Centre for Modeling and Simulation Savitribai Phule Pune University Pune - 411007, India Dr. Anjali Kshirsagar, Director, Centre for Modeling and Simulation Savitribai Phule Pune University Pune - 411007, India
  • 4. 4
  • 5. Centre for Modeling and Simulation Savitribai Phule Pune University Author’s Declaration This document, titled Model Fitting & Optimization of Rheological Data, authored by me, is an authentic report of the project work carried out by me as part of the Master of Technology (M.Tech.) Programme in Modeling and Simulation at the Center for Modeling and Simulation, Savitribai Phule Pune University. In writing this report, I have taken reasonable and adequate care to ensure that material borrowed from sources such as books, research papers, internet, etc., is acknowledged as per accepted academic norms and practices in this regard. I have read and understood the University’s policy on plagiarism (http://unipune.ac.in/administration_files/pdf/Plagiarism_Policy_ University_14-5-12.pdf). Ajay Vishwas Jadhav CMS1327
  • 6. 6
  • 7. Abstract To analyze the behavior of viscoelastic materials such as polymer, the relaxation spectrum is an important tool for studying the behaviour of viscoelastic materials. It is well known that the relaxation spectrum characterizing the viscoelastic properties of a polymer melt or solution is not directly accessible by an experiment. Therefore, it must be calculated from data. The most popular procedure is to use data from a small amplitude oscillatory shear experiment to determine the parameters in a multimode Maxwell model. As the discrete relaxation times appear non-linearly in the mathematical model for the relax- ation modulus. Therefore the indirect calculation of the relaxation times is an ill-posed problem and to find out its solution is very difficult. A nonlinear regression technique is described in this paper in which the minimization is performed with respect to both the discrete relaxation times and the elastic moduli is well documented. Using the Marquardt Levenberg the nonlinear least squares problems is solved. In this tech- nique the number of discrete modes is increased dynamically and the procedure is terminated when the calculated values of the model parameters are dominated by a measure of their ex- pected values. Procedure for that is robust and efficient Numerical calculations on model and experimental data are presented and discussed. Sometimes we need exact number of relaxation modes, Using Marquardt Levenberg algo- rithm it is not possible to get exact number of modes for relaxation spectra. To overcome this issue we have implemented one evolutionary algorithm i.e, Genetic Algorithm. In which we can fix the number of modes by choosing size of chromosome. Detail algorithm for Marquardt Levenbergin and Genetic Algorithm is mentioned in details in thesis. 7
  • 8. 8
  • 9. Acknowledgements First and foremost, praises and thanks to my Parents, for their showers of blessings and support throughout my project work to complete the research successfully. I would like to express my deep and sincere gratitude to my thesis guide, Dr. H.V. Pol, senior scientist, NCL Pune and Dr. Sukratu Barve, Centre for Modeling and Simulation, Uni- versity of Pune, for giving me the opportunity to do research and providing invaluable guidance throughout this project. I am deeply indebted to Dr. F.J.Monaco, ICMS, University of Sao Paulo, Brazil for giving me opportunity to complete this project, His passion and enthusiasm for sharing his knowledge and motivating students like me. Whenever I have approached him to discuss ideas for the project, or any generic problem, I have always found an eager listener. It was a great privilege and honour to work and study under his guidance. Working under Dr. F.J. Monaco gave me a new identity. I am extremely grateful for what he has offered me. I would also like to thank Dr. Renu Dhadwal, FLAME Pune for sharing her knowledge about Marquardt- Levenberg Algorithm and also about Matlab coding. Her knowledge about Mathematical modeling and simulation, dynamics and Rheology of polymer fluids helped me a lot. Because of her help I could able to start my journey in the field of mathematical modeling and simulation for polymers. I would like to extend my thank to Dr. Deepak Bankar and Dr. Vikas Kashid for helping and encouraging me throughout my thesis. At last but not least, I am thankful to my friends Pratiksha, Umesh, Avinash, Akshay, Ashish K, Sandeep, Alok, Srikant J, Amolkumar, Tushar, Swapnil, Shriram, Ashish A, Piyush,Jaydeep, Shrikant Jayraman, Sandip Swarkar, Shrikant Panchal, Suraj, Satish Waman. Any omision in this brief acknowledgement does not mean lack of gratitude. 9
  • 10. 10
  • 11. List of Figures 2.1 Extrusion Film Casting Machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 2.2 Necking defect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 3.1 Types of Fluids . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24 3.2 Oscillatory Shear Experiment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25 4.1 Searching process of the steepest descent method with different learning constants: left trajectory is for small learning constant that leads to slow convergence; right side tra- jectory is for large learning constant that causes oscillation i.e. divergence . . . . . . . . 28 4.2 GD Step 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30 4.3 GD Step 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30 4.4 GD Step 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30 4.5 GD Step 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30 4.6 GD Step 5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 4.7 GD Step 6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 4.8 GD Step 7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 4.9 GD Step 8 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 4.10 Interpretation of Newton Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32 5.1 (a) Graph of W vs G , (b) Graph of W vs G . . . . . . . . . . . . . . . . . . . . . . 36 6.1 Flowchart of Genetic Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39 6.2 One Point Crossover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40 6.3 Two Point Crossover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40 6.4 Cut Splice Crossover . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . 40 6.5 Uniform Crossover and Half Uniform Crossover . . . . . . . . . . . . . . . . . . . . . 41 7.1 Roulette Wheel Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46 7.2 Before & After Rank Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46 7.3 Tournament Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47 8.1 Graph of W vs G . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49 8.2 Graph of W vs G . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50 9.1 Main window of GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52 9.2 Second window of GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53 9.3 File selection dialouge box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53 9.4 User input for number of modes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54 9.5 Waitbar of code started . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54 9.6 Waitbar of code about to finish . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55 9.7 Computation is done . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55 11
  • 12. 12 LIST OF FIGURES 9.8 Relaxation spectra graph button . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56 9.9 Download output file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
  • 13. List of Tables 2.1 Experimental Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 4.1 Specifications of Different Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . 34 5.1 AAD value of ML algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35 7.1 GA Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44 8.1 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50 13
  • 14. 14 LIST OF TABLES
  • 15. Contents Abstract 7 Acknowledgments 9 1 Introduction 17 2 Relaxation Spectra 19 2.1 Experimental Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 3 Rheological Data 23 3.1 Rheology to Viscoelasticity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 3.2 Viscosity & elasticity measurements . . . . . . . . . . . . . . . . . . . . . . . . . 24 3.3 Oscillatory Shear Experiment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24 4 Marquardt Levenberg Algorithm 27 4.1 Objective function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27 4.2 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27 4.3 Stepwise Derivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29 4.3.1 Gradiant Descent Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . 29 4.3.2 Newton Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32 4.3.3 Gauss Newton Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33 4.3.4 Damping Factor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33 5 Result 35 6 Using Genetic Algorithm 37 6.1 Biological Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37 6.2 Evolutionary computation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37 6.3 Introduction & Flowchart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38 6.4 Crossover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39 6.4.1 One-point crossover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39 6.4.2 Two-point crossover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40 6.4.3 Cut and splice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40 6.4.4 Uniform Crossover and Half Uniform Crossover . . . . . . . . . . . . . . . 40 6.4.5 Blend Crossover . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . 41 6.5 Mutation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41 6.5.1 Flip Bit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42 6.5.2 Boundary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42 6.5.3 Uniform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42 6.5.4 Non-Uniform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42 15
  • 16. 16 CONTENTS 7 Objective function & Method 43 7.1 Objective Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43 7.1.1 Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43 7.1.2 GA Parameters Used in the Experiments . . . . . . . . . . . . . . . . . . 44 7.2 Initiallization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44 7.3 Fitness Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45 7.4 Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45 7.4.1 Roulette Wheel Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . 45 7.4.2 Rank Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45 7.4.3 Tournament Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47 7.4.4 Elitism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47 8 Termination Criteria & Result 49 8.1 Optical Chromosome . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49 8.2 Absolute average deviation (AAD) . . . . . . . . . . . . . . . . . . . . . . . . . . 49 9 Graphical User Interface (GUI) 51 10 Conclusion 57 Bibliography 59 A Acronyms 61
  • 17. Chapter 1 Introduction In rheology, the Discrete relaxation spectrum (DRS), contains values of relaxation strength g & relaxation time λ. DRS has a vital importance, for once it is known, it is straight forward to compute all other material functions. Unfortunately, it cannot be measured directly, in- stead, it can be calculated indirectly, most commonly from small amplitude oscillatory shear experiments, which is denoted by G∗ also called as dynamic moduli. These experiments yield the frequency-dependent dynamic moduli (experimental data). The experimental data contains values of Storage & Loss modulus at different frquencies. The relation between Experimental data and relaxation is greatly explained by the two equation having equation number . [3] G∗ = G + iG (1.1) Where, G and G are the storage and loss modulus respectively. w is the frequency of deformation. g is relaxation strength having SI unit of Pascal. λ is a relaxation time having SI unit of Seconds. N is the number of modes in the spectrum. The resulting problem of deducing DRS from G∗ has a long and rich history . It is also quite common to seek a Discrete relaxation spectrum (DRS), which consists of pairs of relaxation strengths and relaxation times gi&λi, with i = 1, 2, ...,N, where N is the number of modes in the spectrum. The relationship between the DRS and G∗ is given by G (w) = N i=1 gi w2λ2 i 1+λ2 i w2 (1.2) G (w) = N i=1 gi wλi 1+λ2 i w2 (1.3) The determination of a DRS can be complicated. Additional difficulties are encountered in the determination of the DRS, which is an ill-posed problem because the number of modes N is not known in advance. Furthermore, even after a suitable well-posed approximation to the problem is constructed, numerical solution is beset by problems of poor conditioning, or sensitivity to noise in the data. Apart from all these issues, extracting the relaxation spectrum is an important part. 
The principal goal of this thesis is to develop an open-source software toolkit program which can take experimentally determined G∗ values as input, and provides the discrete relaxation spectra as output. Given the computer code is available on the http://cms.unipune.ac.in/~ajayj/RheomlFit.html, it is important to justify why this ad- 17
  • 18. 18 CHAPTER 1. INTRODUCTION ditional undertaking is important or useful, and how it fulfils an unmet need. one of the most important motivation include [1] Many of the programs based on algorithms published in the literature but are not readily available to the public, On the other end is IRIS, arguably the most popular program used in the industry for this problem It is a commercial product whose exact underlying algorithm and implementation are, for understandable reasons, not in the public domain. Furthermore, the program currently does not run on all operating systems, and provides only the DRS, which is determined by nonlinear regression and appealing to parsimony. While the convenience af- forded by such a black-box program may be completely sufficient for an experimentalist, it is less attractive to programmers who want to extend the code, and researchers who may want to analyze particular features of the algorithm. In between these two extremes are powerful, freely available, well-documented, platform-independent general implementations such as DISCRETE, CONTIN, FTIKREG, NLREG, and GENEREG, which overcome many of the constraints men- tioned above. These efficient programs, generally written in older versions of Fortran, extract the relaxation spectrum using some form of regularization. All of these programs are general purpose they can be used to extract DRS from experimental data.[7] Decision to build an open source software toolkit was based on following considerations. • Platform independence • Transparency of the algorithm and implementation • Free availability • Extraction of Discrete relaxation spectra • Efficiency readability and extensibility of the code • Integrated graphical user interface The computer programs mentioned above satisfies a subset of these design considerations. Based on these criteria, we implemented our algorithm in Matlab as the program Rheomfit . 
The same code works without any modification on the freely available Matlab clone GNU Octave (http://www.gnu.org/software/octave/), which, like Matlab, can be installed on any operating system. This choice allows us to use several built-in numerical routines, which keeps the code succinct and easy to extend.[7]
Chapter 2 Relaxation Spectra

The relaxation spectrum gives a qualitative picture of the molecular weight distribution of a polymer. Since a polymer melt is an ensemble of macromolecular chains of varied repeat units (in other words, of varied molecular weights), it is important that it be represented by an appropriate quantity such as a relaxation spectrum. The relaxation spectrum, when incorporated in a constitutive equation, gives a realistic picture of how a particular polymer will behave under various deformations and how it will relax stress. The relaxation spectrum of a polymer is determined by performing an inverse Fourier transform on dynamic oscillatory time-temperature superposed data consisting of the elastic and loss moduli.[5]

Polymer manufacturing industries produce several thousand tons of polymer films and coatings using extrusion film casting, a commercially important process. It consists of extruding a molten polymer through a die under pressure and stretching the resulting film in air by winding it on a chill roll. Even under stable and steady-state extrusion film casting operation, two major undesirable defects occur over the take-up length.[3]

Figure 2.1: Extrusion Film Casting Machine

Figure 2.1 shows an extrusion film casting machine; such machines are classified by the type of screw extruder, and the one shown uses a single-screw extruder. On the right side of Fig. 2.1 is the hopper, the input of the extruder: it accepts the polymer pellets and feeds them to the screw. The screw extruder melts the pellets into a polymer melt and conveys it toward the die entrance.[6]
At the die, the pressurized polymer melt emerges with a profile that depends on the size and shape of the die.[6] On the left side of Fig. 2.1 are the chill rolls and the collecting roll. The molten polymer emerging from the die settles onto the chill roll, where it is cooled and solidifies from a melt into a polymer sheet. After passing over all the rolls, the polymer sheet is collected in the form of rolls. Even in steady and stable operation of polymer processing, the cast film suffers from two major defects: edge beading and neck-in.[5]

Edge beading. In edge beading, the film edges become substantially thicker than the central portion of the film. The central portion is therefore weaker compared with the edges, which ultimately reduces the strength of the film. The edge-beading defect can be seen in Fig. 2.2.[6]

Neck-in. This phenomenon is familiar from everyday experience: pour a can of paint from a certain height and the width of the stream at the start differs from its width at the end. In neck-in, similarly, the final film width becomes smaller than the width of the die from which the film is extruded. The neck-in defect can be seen in Fig. 2.2.[6]

Figure 2.2: Necking defect

To avoid these problems and to study the extrusion film casting operation, we need to study the deformation and flow of the polymer under applied stresses, which is precisely the subject of rheology. Analyzing polymer flow using viscoelastic simulations requires the relaxation spectrum, which consists of two sets of parameters: the relaxation strengths (gi) and the relaxation times (λi). The ultimate goal of this thesis is to determine the relaxation spectrum of a polymer from experimental data.[5]
2.1 Experimental Data

The experimental (machine-generated) data consist of three columns. The first column contains the frequencies, denoted by W, in rad/second. The second column is the storage modulus (elastic modulus) of the polymer, denoted G′; its SI unit is the Pascal. The third column is the loss modulus G″ (viscous modulus) of the polymer, also in Pascal.[3]

W (rad/s)   G′ (Pa)   G″ (Pa)
0.0091      3.38      115
0.0103      3.63      127
0.0118      4.34      142
0.0135      5.61      159
0.0136      12.5      176
0.0146      6.90      180
0.0157      7.01      183
0.0164      8.61      199
0.0186      9.80      223
0.0188      9.97      214
0.0215      11.7      250
0.0223      14.6      257
0.0231      12.3      255
0.0250      15.1      282
0.0260      15.9      287
0.0269      17.0      313
0.0296      16.4      304
0.0298      20.7      350
0.0331      19.5      337
0.0340      22.5      367
0.0341      25.7      391
0.0355      26.6      393

Table 2.1: Experimental Data

The data in Table 2.1 are for linear low-density polyethylene (LLDPE), a substantially linear polymer (polyethylene) with significant numbers of short branches. During the generation of these data the temperature was maintained at 190 °C.
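As a minimal sketch (not the thesis's Rheomfit code, which is written in Matlab/Octave), three-column data of this form can be parsed from plain text as follows; the in-memory row layout here is an assumption about how the rheometer exports its table.

```python
def load_moduli(lines):
    """Parse rows of 'W  G'  G"' (rad/s, Pa, Pa) into three lists."""
    w, g_storage, g_loss = [], [], []
    for line in lines:
        parts = line.split()
        if len(parts) != 3:
            continue                      # skip blank or malformed lines
        try:
            row = [float(p) for p in parts]
        except ValueError:
            continue                      # skip the header row
        w.append(row[0])
        g_storage.append(row[1])
        g_loss.append(row[2])
    return w, g_storage, g_loss

# First two rows of Table 2.1:
w, gp, gpp = load_moduli(["W G' G''", "0.0091 3.38 115", "0.0103 3.63 127"])
```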
Chapter 3 Rheological Data

Rheology is the branch of science that deals with the flow and deformation of matter; it tells us the interrelation between force, deformation, and time. The word rheology comes from the Greek word rheos, meaning "to flow".[11] Rheology is applicable to all materials, from gases to solids. As a discipline it has a history of only about 80 years: it was founded by two scientists who, meeting in the late 1920s, found that they shared the same need for describing fluid flow properties. The scientists were Professor Marcus Reiner and Professor Eugene Bingham.[10] The Greek philosopher Heraclitus captured the idea as panta rhei, "everything flows". Translated into rheological terms by Marcus Reiner, this means that everything will flow if you just wait long enough. Fluid rheology is used to describe the consistency of different products, normally in terms of two components, viscosity and elasticity. Viscosity usually means resistance to flow, or thickness, and elasticity usually means stickiness, or structure.[11]

3.1 Rheology to Viscoelasticity

Fluids are normally divided into groups according to their flow behaviour:
• Newtonian fluids
• Non-Newtonian fluids
The types of fluids are illustrated in Fig. 3.1. Viscoelastic fluids are a type of non-Newtonian fluid in which the stress-strain relationship is time-dependent. They are often capable of generating normal stresses within the fluid that resist deformation, which can lead to interesting behaviours such as the bead-on-a-string instability.[3] Flow curves are normally used for the graphical description of flow behaviour. All materials, from gases to solids, can be divided into the following three categories of rheological behaviour.
• Viscous materials: in a purely viscous material all added energy is dissipated into heat.
• Elastic materials: in a purely elastic material all added energy is stored in the material.
• Viscoelastic materials: a viscoelastic material exhibits viscous as well as elastic behaviour. It is this combination of viscous and elastic behaviour that gives the class its name.
Figure 3.1: Types of Fluids

viscous + elastic = viscoelastic (3.1)

Typical examples of viscoelastic materials are bread dough, polymer melts, and artificial or natural gels. A polymer is a prime example of a viscoelastic material, possessing both viscous and elastic properties.

3.2 Viscosity and Elasticity Measurements

Rheological measurements are usually performed in kinematic instruments to obtain quantitative results useful for the design and development of products and process equipment. To develop designs of products, for example in the food, cosmetic, or paint industries, rheometric measurements are often performed to establish elastic properties such as gel strength and yield value, both important parameters affecting, e.g., particle-carrying ability and spreadability. The complex dynamic modulus is the ratio of stress to strain; mathematically it is written as

G∗ = Stress / Strain (3.2)

3.3 Oscillatory Shear Experiment

Soft materials such as emulsions, foams, or dispersions are ubiquitous in industrial products and formulations; they exhibit unique mechanical behaviours that are often key to the way these materials are employed in a particular application.[11] Studying the mechanical behaviour of these materials is complicated by the fact that their response is viscoelastic, intermediate between that of solids and liquids. Oscillatory rheology is a standard experimental tool for studying such behaviour; it provides new insights into the physical mechanisms that govern the unique mechanical properties of soft materials.[10] Soft materials such as colloidal suspensions, emulsions, foams, or polymer systems are ubiquitous in many industries, including foods, pharmaceuticals, and cosmetics. Their macroscopic mechanical behaviour is a key property that often determines the usability of such materials for a given industrial application. Characterising the mechanical behaviour of soft materials is complicated by the fact that many materials are viscoelastic, so their mechanical properties lie between those of a purely elastic solid and those of a viscous liquid. Using oscillatory rheology, it is possible to quantify both the viscous-like and the elastic-like properties of a material at different time scales; it is thus a valuable tool for understanding the structural and dynamic properties of these systems.[11] The basic principle of an oscillatory rheometer is to induce a sinusoidal shear deformation in the sample and measure the resultant stress response; the time scale probed is determined by the frequency of oscillation, ω, of the shear deformation. In a typical experiment, the sample is placed between two plates, as shown in Fig. 3.2. While the top plate remains stationary, a motor rotates the bottom plate, thereby imposing a time-dependent strain γ(t) = γ0 sin(ωt) on the sample.[10] Simultaneously, the time-dependent stress σ(t) is quantified by measuring the torque that the sample imposes on the top plate.
Measuring this time-dependent stress response at a single frequency immediately reveals key differences between materials, as shown schematically in Fig. 3.2. If the material is an ideal elastic solid, then the sample stress is proportional to the strain deformation, and the proportionality constant is the shear modulus of the material.

Figure 3.2: Oscillatory Shear Experiment
The stress is always exactly in phase with the applied sinusoidal strain deformation. In contrast, if the material is a purely viscous fluid, the stress in the sample is proportional to the rate of strain deformation, where the proportionality constant is the viscosity of the fluid. The applied strain and the measured stress are out of phase, with a phase angle δ = π/2, as shown in the center graph of Fig. 3.2. Viscoelastic materials show a response that contains both in-phase and out-of-phase contributions, as shown in the bottom graph of Fig. 3.2. These contributions reveal the extents of solid-like (red line) and liquid-like (blue dotted line) behaviour. As a consequence, the total stress response (purple line) shows a phase shift δ with respect to the applied strain deformation that lies between that of solids and liquids, 0 < δ < π/2. The viscoelastic behaviour of the system at a frequency ω is characterised by the storage modulus, G′(ω), and the loss modulus, G″(ω), which respectively characterise the solid-like and fluid-like contributions to the measured stress response. For a sinusoidal strain deformation γ(t) = γ0 sin(ωt), the stress response of a viscoelastic material is given by

σ(t) = G′ γ0 sin(ωt) + G″ γ0 cos(ωt)

In a typical rheological experiment, we seek to measure G′ and G″. We make the measurements as a function of ω because whether a soft material is solid-like or liquid-like depends on the time scale at which it is deformed. A typical example is a suspension of hydrogel particles: at the lowest accessible frequencies the response is viscous-like, with a loss modulus much larger than the storage modulus, while at the highest accessible frequencies the storage modulus dominates the response, indicating solid-like behaviour. Oscillatory rheology is thus a valuable tool for studying the mechanical behaviour of soft materials.
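The moduli G′(ω) and G″(ω) just defined are what a fitted discrete relaxation spectrum (gi, λi) must reproduce. In the standard generalized Maxwell model, G′(ω) = Σ gi (ωλi)² / (1 + (ωλi)²) and G″(ω) = Σ gi (ωλi) / (1 + (ωλi)²); a short Python sketch of these textbook formulas follows (illustrative only — the thesis toolkit itself is written in Matlab/Octave).

```python
def maxwell_moduli(omega, g, lam):
    """Storage and loss moduli of a discrete (generalized Maxwell) spectrum.

    g[i] (Pa) and lam[i] (s) are the relaxation strengths and times."""
    gp = sum(gi * (omega * li) ** 2 / (1 + (omega * li) ** 2)
             for gi, li in zip(g, lam))
    gpp = sum(gi * (omega * li) / (1 + (omega * li) ** 2)
              for gi, li in zip(g, lam))
    return gp, gpp

# Single mode with g = 2 Pa, lambda = 1 s, probed at omega = 1 rad/s:
gp, gpp = maxwell_moduli(1.0, [2.0], [1.0])   # -> (1.0, 1.0), the crossover point
```

At ω = 1/λ a single mode gives G′ = G″ = g/2, the familiar crossover of a Maxwell element.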
Recent studies suggest that the nonlinear viscoelastic behaviour contains valuable information about the dynamics of these systems.[11] Thus measurements at large strain deformations should lead to a better understanding of the physical mechanisms that govern their behaviour.
Chapter 4 Marquardt-Levenberg Algorithm

4.1 Objective function

The method used in this work is as follows. First we plot the experimental values of the storage modulus (G′) and loss modulus (G″) at the different frequencies ω, which gives two nonlinear curves. We then guess initial values for g and λ and compute G′ and G″ at the frequencies of the experimental data. Plotting these calculated values of G′ and G″ over the experimental curves shows some difference between the calculated and experimental curves, and the problem becomes one of curve fitting: we view it through the error, i.e., the difference between calculated and experimental values, and apply minimization algorithms.[7] Our objective is to minimize the distance between the two curves; in other words, the problem is curve fitting via error minimization. For this we calculate the goodness of fit (GOF)

χ² = Σ_{i=1}^{M} ((Observed_i − Expected_i) / Expected_i)² (4.1)

where M is the number of observations. The value of the goodness of fit χ² from Eq. 4.1 can also be used to set the termination condition.[1]

4.2 Algorithm

The Levenberg-Marquardt algorithm, developed by Kenneth Levenberg and Donald Marquardt, provides a numerical solution to the problem of minimizing a nonlinear function. Its advantages are speed and stable convergence. The gradient descent algorithm is also known as the error backpropagation (EBP) algorithm. Many improvements have been made to it, but none has had a major effect. Gradient descent is still widely used today; however, it is known to be inefficient because its convergence is slow. There are two main reasons for the slow convergence: the first is that the step size should be adapted to the local gradient.
Logically, small step sizes should be taken where the gradient is steep, so as not to rattle out of the required minimum because of oscillation. So, if the step size is a constant, it must be chosen small; then, where the gradient is gentle, the training process becomes very slow. The second reason is that the curvature of the error surface may not be the same in all directions, as for the Rosenbrock function, so that the "error valley" problem may arise and result in slow convergence. The step-size problem is illustrated in Fig. 4.1. In the Marquardt-Levenberg algorithm, the slow convergence of the gradient descent method is greatly improved by using the Gauss-Newton algorithm. The Gauss-Newton method exploits second-order information: by using second-order derivatives of the cost function to evaluate the curvature of the error surface, it finds suitable step sizes for each direction, and its convergence rate is very fast.[12]

Figure 4.1: Searching process of the steepest descent method with different learning constants: the left trajectory is for a small learning constant that leads to slow convergence; the right trajectory is for a large learning constant that causes oscillation, i.e., divergence

The Levenberg-Marquardt algorithm blends the steepest descent method and the Gauss-Newton algorithm: it picks up the speed advantage of Gauss-Newton and the stability advantage of steepest descent. Because of these two properties it is more robust than the Gauss-Newton algorithm: in many cases it converges well even if the error surface is much more complex than the quadratic situation. Although the Levenberg-Marquardt algorithm tends to be a bit slower than the Gauss-Newton algorithm (when the latter converges), it converges much faster than the steepest descent method. The basic idea of the Levenberg-Marquardt algorithm is to perform a combined process blending these two algorithms, picking the stability of gradient descent and the speed of Gauss-Newton.
What it does is this: it behaves like gradient descent when the current point is far from the optimum, and switches toward Gauss-Newton when it is near the minimum, avoiding the slow-convergence problem. If the iteration heads in the wrong direction because of the instability of Gauss-Newton, it immediately falls back on gradient descent, which takes care of the direction. In this way the Marquardt-Levenberg algorithm works like a deaf person and a blind person helping each other to survive.[12]

4.3 Stepwise Derivation

In this part, the derivation of the Levenberg-Marquardt algorithm is presented in five parts:
• Steepest descent algorithm
• Newton's method
• Gauss-Newton algorithm
• Damping factor
• Levenberg-Marquardt algorithm

4.3.1 Gradient Descent Algorithm

The steepest descent algorithm is a first-order algorithm: it uses the first-order derivative of the cost function to find the minimum. Normally, the gradient g is defined as the first-order derivative of the cost function. With this definition of the gradient, the update rule of the steepest descent algorithm can be written as

θ_{k+1} = θ_k − α ∂J(θ_k)/∂θ (4.2)

where J is the cost function (the error, i.e., predicted minus actual), θ is the parameter vector, and α is the step size (learning rate). To go further with this algorithm we need a suitable step size, because the step size causes problems both when it is too high and when it is too low. We study this using a graphical interpretation of the function Z(x, y) = x² + 2y². The function is plotted in three dimensions, but for clarity the figures show top-view snapshots of the algorithm; the red lines are the contours (level sets) of the function. In Fig. 4.3 we initialize the starting point for the algorithm; following the negative gradient, it moves toward the minimum.[8] In the next graph it again moves toward the minimum, but if we observe Fig. 4.5, the algorithm is moving toward the minimum in the right direction while becoming slower and slower. This is the main drawback of the gradient descent algorithm, and it is seen very clearly in Fig. 4.7. All of this happens because we take small steps, having started with a small step size. Let us check with a large step size.[8] Now we take large steps and check the direction and step size of the gradient descent algorithm.
In Fig. 4.9, having started with a large step, the iterate overshoots: instead of settling into the minimum it keeps jumping from one side of the valley to the other. By visualizing these graphs we can understand the significance and importance of the step size in the gradient descent method.[8]
Figure 4.2: GD Step 1
Figure 4.3: GD Step 2
Figure 4.4: GD Step 3
Figure 4.5: GD Step 4
  • 31. 4.3. STEPWISE DERIVATION 31 Figure 4.6: GD Step 5 Figure 4.7: GD Step 6 Figure 4.8: GD Step 7 Figure 4.9: GD Step 8
4.3.2 Newton's Method

The geometric interpretation of Newton's method is that at each iteration one approximates f(x) by a quadratic function around x_n, and then takes a step toward the maximum/minimum of that quadratic function (in higher dimensions, this may also be a saddle point). Note that if f(x) happens to be a quadratic function, then the exact extremum is found in one step. The update formula for Newton's method (applied to optimization) is

x_{n+1} = x_n − f′(x_n) / f″(x_n) (4.3)

Since the first and second derivatives are, by definition, the gradient and the Hessian respectively, we can write the above equation as

x_{n+1} = x_n − g / H (4.4)

or, in the multidimensional case,

x_{n+1} = x_n − H⁻¹ g (4.5)

The graphical interpretation of Newton's method is shown in Fig. 4.10: starting from the point x_n, we draw a tangent at x_n and find the point where it intersects the x axis; projecting that point back onto the curve gives the next point x_{n+1}. Repeating this, we move toward the minimum or maximum.

Figure 4.10: Interpretation of Newton's Method
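The one-step property for quadratics mentioned above is easy to check in code; a tiny sketch for f(x) = x² − 4x, whose minimum is at x = 2 (the example function is mine, not the thesis's):

```python
def newton_step(x, fprime, fsecond):
    """One Newton update x - f'(x)/f''(x) for optimization (Eq. 4.3)."""
    return x - fprime(x) / fsecond(x)

# f(x) = x^2 - 4x:  f'(x) = 2x - 4,  f''(x) = 2
x = newton_step(10.0, fprime=lambda x: 2 * x - 4, fsecond=lambda x: 2.0)
# x == 2.0: for a quadratic, the exact minimum is reached in a single step
```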
4.3.3 Gauss-Newton Method

As we have seen in Newton's method, equation 4.4 requires the Hessian, i.e., the second derivative of the function, which demands a large amount of computation. To avoid this, Carl Friedrich Gauss modified Newton's method so that the second-derivative computation is not needed; the resulting method carries both names, Gauss-Newton, and is used to solve nonlinear least-squares problems. Recalling the Newton update

x_{n+1} = x_n − H⁻¹ g (4.6)

and replacing the Hessian by the approximation derived below, the Gauss-Newton update becomes

x_{n+1} = x_n − (JᵀJ)⁻¹ g (4.7)

4.3.4 Damping Factor

We have now reached the Gauss-Newton method, which does not require second-derivative computation, but it still involves a matrix inverse. What if that matrix is degenerate or only positive semidefinite? Then we cannot compute the inverse. To avoid this, Marquardt suggested an adjustment similar to regularization: he added an identity matrix with a scalar multiplier µ, called the damping factor.[12] For a least-squares cost built from residuals f_i(x), the Jacobian is

J_{i,j} = ∂f_i / ∂x_j (4.8)

and the Hessian of the cost function is

H_{j,k} = Σ_{i=1}^{M} [ (∂f_i/∂x_j)(∂f_i/∂x_k) + f_i(x) ∂²f_i/(∂x_j ∂x_k) ] (4.9)

Neglecting the second-derivative term gives the Gauss-Newton approximation

H ≈ JᵀJ (4.11)

so the damped (Levenberg-Marquardt) update reads

x_{n+1} = x_n − (JᵀJ + µI)⁻¹ g (4.12)

Using the new value x_{n+1}, we can calculate the value of χ² and switch the behaviour of the algorithm as required. If χ² is decreasing, we are moving toward the minimum and want to speed up the algorithm, so we decrease the value of µ, switching toward Gauss-Newton behaviour. If χ² is increasing, we are moving away from the minimum and need to steer the algorithm back toward the minimum in the right direction, so we increase the value of µ, switching toward gradient descent.[12]

Algorithm | Update rule | Convergence | Computation
Gradient descent | W_{k+1} = W_k − α g_k | stable, slow | gradient
Newton | W_{k+1} = W_k − H_k⁻¹ g_k | unstable, fast | gradient and Hessian
Gauss-Newton | W_{k+1} = W_k − (JᵀJ)⁻¹ g_k | unstable, fast | gradient and Jacobian
Levenberg-Marquardt | W_{k+1} = W_k − (JᵀJ + µI)⁻¹ g_k | stable, fast | gradient and Jacobian

Table 4.1: Specifications of the different algorithms
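The update rule of Eq. 4.12 together with the accept/reject adjustment of µ can be sketched on a toy one-parameter problem. This is not the thesis's fit of (gi, λi) pairs — the model y = exp(a·t) and all the constants below are illustrative — but the µ logic (shrink on improvement, grow on failure) is exactly the switching described above.

```python
import math

t = [0.1, 0.5, 1.0, 1.5]
y = [math.exp(1.3 * ti) for ti in t]          # synthetic data, true a = 1.3

def residuals(a):
    return [math.exp(a * ti) - yi for ti, yi in zip(t, y)]

def sse(a):
    return sum(r * r for r in residuals(a))

a, mu = 0.0, 1e-3
for _ in range(50):
    r = residuals(a)
    J = [ti * math.exp(a * ti) for ti in t]   # Jacobian d r_i / d a (scalar case)
    jtj = sum(j * j for j in J)
    jtr = sum(j * ri for j, ri in zip(J, r))
    step = jtr / (jtj + mu)                   # (J^T J + mu I)^-1 J^T r, Eq. 4.12
    if sse(a - step) < sse(a):
        a, mu = a - step, mu / 10             # improvement: toward Gauss-Newton
    else:
        mu *= 10                              # failure: damp, toward gradient descent
# a is now close to the true value 1.3
```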
Chapter 5 Result

By minimizing the objective function χ² we obtain values of [gi, λi], i = 1, ..., N. Substituting these values into equation 1.3 gives values of G′ and G″ at a number of frequencies ω. The resulting plots fit each other almost perfectly, so we can conclude that the values of g and λ we obtained are correct: we compute the calculated values of G′ and G″ and compare them with the experimental values. To cross-validate these results we can recalculate the values of G′ and G″ at the experimental frequencies ω and then compute the relative error, percentage error, or AAD. In this case we calculated the AAD.

Algorithm | ω vs G′ | ω vs G″
Levenberg-Marquardt algorithm | 0.0088938 | -0.01532

Table 5.1: AAD values for the ML algorithm
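The thesis does not spell out its AAD formula; since the tabulated value can be negative, a plausible reading is the mean signed relative deviation between calculated and experimental moduli. The definition below is therefore an assumption, shown only to make the error measure concrete.

```python
def aad(calculated, experimental):
    """Mean relative deviation (one possible reading of 'AAD'; an assumption,
    chosen because the tabulated values can be negative)."""
    n = len(experimental)
    return sum((c - e) / e for c, e in zip(calculated, experimental)) / n

# Symmetric over- and under-prediction cancel under this definition:
err = aad([1.1, 0.9], [1.0, 1.0])   # essentially zero
```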
Figure 5.1: (a) Graph of ω vs G′, (b) Graph of ω vs G″

The AAD values in the table above show that our curves fit very well. In this type of problem the accuracy of the results can be checked in two ways: by visualizing the graphs or by checking the error values. In our case the graphs match almost perfectly and the error is very small, so we can say that the results are excellent.
Chapter 6 Using Genetic Algorithm

6.1 Biological Terminology

Before moving on to the genetic algorithm we first need to understand some biological terminology, as it is used throughout this thesis. In real biology these terms play an important role, though the entities they refer to here are much simpler than the real biological ones. All living organisms consist of cells, and each cell contains the same set of one or more chromosomes, strings of DNA. A chromosome can be conceptually divided into genes, each of which encodes a particular protein; an example of a gene is the one determining eye color. The different possible "settings" of a gene (e.g., blue, brown, hazel) are called alleles. Every gene has its specific position in the chromosome. Many organisms have multiple chromosomes in each cell, and the complete collection of genetic material (all chromosomes taken together) is called the organism's genome.[2] The genotype is the organism's full hereditary information; examples are the gene responsible for eye color or the gene responsible for height. The phenotype comprises the actual observed properties, such as morphology, development, or behavior; examples are eye color and height themselves. In short, the genotype consists of the underlying parameters, which we cannot observe directly, while the phenotype depends on the genotype and is what we can observe. In genetic algorithms, the term chromosome typically refers to a candidate solution to a problem, often encoded as a bit string. The "genes" are either single bits or short blocks of adjacent bits that encode a particular element of the candidate solution (e.g., in the context of multiparameter function optimization the bits encoding a particular parameter might be considered a gene). A bit takes the value 0 or 1; for larger alphabets more alleles are possible at each locus. Crossover typically consists of exchanging genetic material between two single-chromosome parents.
Mutation consists of flipping the bit at a randomly chosen position.[2]

6.2 Evolutionary Computation

Evolutionary computation is a subfield of artificial intelligence (more particularly, computational intelligence) that can be defined by the type of algorithms it deals with. Technically these belong to the family of trial-and-error problem solvers and can be considered global optimization methods with a metaheuristic or stochastic optimization character, distinguished by the use of a population of candidate solutions rather than iterating over a single point in the search space. They are mostly applied to black-box problems (no derivatives known), often in the context of expensive optimization.[9]
Broadly speaking, the field includes:
• Ant colony optimization
• Random walk
• Simulated annealing
• Multicanonical jump walk annealing
• Genetic algorithms
• Harmony search

The field known as evolutionary computation consists of methods for problem solving by simulating the natural process of evolution. Evolutionary algorithms are computing techniques based on iteratively applying random variation and subsequent selection over a population (set) of prospective solution instances. The following subsection is meant only to provide a general overview of the GA rationale and some background on how to interpret the obtained results; its purpose is to introduce the contributions the method can offer to the problem domain in focus. The stochastic nature of a (well-designed) GA permits an important observation about GAs as optimization tools: since a global optimum exists, there are good chances that its surroundings will eventually be approached, although it is not possible to know in advance how many generations this will take. A number of variants of the canonical GA have been developed to make it applicable to search spaces other than binary ones; these include real-encoded chromosomes, which is the case for the problem considered here.

6.3 Introduction & Flowchart

A genetic algorithm (GA) is a method for solving both constrained and unconstrained optimization problems based on a natural selection process that mimics biological evolution. The algorithm repeatedly modifies a population of individual solutions. At each step, the genetic algorithm randomly selects individuals from the current population and uses them as parents to produce the children for the next generation. Over successive generations, the population "evolves" toward an optimal solution.
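The generational loop just described can be sketched compactly. This is an illustrative skeleton, not the thesis implementation: the population size, truncation selection, blend crossover, mutation rate, and the toy fitness function below are all assumptions made for the example.

```python
import random

def evolve(fitness, n_pop=20, n_genes=2, generations=100,
           p_mut=0.05, lo=-5.0, hi=5.0, seed=0):
    """Minimal real-coded GA: initialise -> rank -> select -> crossover -> mutate."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(n_genes)] for _ in range(n_pop)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)        # rank chromosomes by fitness
        parents = pop[: n_pop // 2]                # truncation selection
        children = []
        while len(children) < n_pop - len(parents):
            p1, p2 = rng.sample(parents, 2)        # two distinct parents
            b = rng.random()
            child = [a - b * (a - c) for a, c in zip(p1, p2)]   # blend crossover
            if rng.random() < p_mut:               # occasional mutation
                child[rng.randrange(n_genes)] += rng.gauss(0, 0.5)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Maximising -(x^2 + y^2) drives the population toward (0, 0):
best = evolve(lambda ind: -(ind[0] ** 2 + ind[1] ** 2))
```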
The genetic algorithm can be applied to problems that are not well suited to standard optimization algorithms, including problems in which the objective function is discontinuous, non-differentiable, stochastic, or highly nonlinear. The flowchart shown in Fig. 6.1 explains the working of the genetic algorithm in detail; the steps are described in the following sections. As per the flowchart, the algorithm starts from the initialization step, in which we decide the size and the number of chromosomes and initialize them with random numbers. Using the initial values of the chromosomes we calculate the fitness value with the objective function, and according to the fitness values we assign ranks to the chromosomes. Considering the ranks, we select chromosomes, and to improve the solution we then apply the genetic operators, crossover and mutation, to them. After running the whole cycle for a number of iterations, we obtain an optimal chromosome, from which we can compute the optimal solution.

Figure 6.1: Flowchart of Genetic Algorithm

Each operator and step of the algorithm is explained below, with basic information and its implementation for our problem statement.[2] A key decision in implementing a genetic algorithm is which genetic operators to use; this decision depends greatly on the encoding strategy. Here I discuss crossover and mutation.

6.4 Crossover

In genetic algorithms, crossover is a genetic operator used to vary the programming of a chromosome or chromosomes from one generation to the next. It is analogous to reproduction and biological crossover, upon which genetic algorithms are based. Crossover is the process of taking more than one parent solution and producing a child solution from them. The methods of combining the chromosomes are given below.[9]

6.4.1 One-point crossover

One-point crossover selects a single point on the chromosome and divides the chromosome into two parts at that point. To build a new child it takes the first part of the first parent and the second part of the second parent. This is illustrated in the figure, in which the first parent is red and the other blue; after applying one-point crossover they exchange their second parts, and the resulting children look like a combination of the two chromosomes.[9]
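On bit strings, the swap of tails just described is a one-liner; a small sketch (the example strings are mine):

```python
import random

def one_point_crossover(p1, p2, point=None, rng=random):
    """Cut both parents at the same point and swap the tails (Fig. 6.2)."""
    if point is None:
        point = rng.randrange(1, len(p1))   # never cut at 0: that just swaps parents
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

c1, c2 = one_point_crossover("11111111", "00000000", point=3)
# c1 == "11100000", c2 == "00011111"
```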
  • 40. 40 CHAPTER 6. USING GENETIC ALGORITHM Figure 6.2: One Point Crossover 6.4.2 Two-point crossover In two point crossover it marks two points onto the two parent chromosomes and within that portion it flips the portion from the second parent. It is explained very well using following image in which first it has marked the two points and from that range it has flipped the red portion with blue portion. Figure 6.3: Two Point Crossover 6.4.3 Cut and splice Another crossover variant, the ”cut and splice” approach, results in a change in length of the children strings. The reason for this difference is that each parent string has a separate choice of crossover point. Figure 6.4: Cut Splice Crossover 6.4.4 Uniform Crossover and Half Uniform Crossover The Uniform Crossover uses a fixed mixing ratio between two parents. Unlike one and two point crossover, the Uniform Crossover enables the parent chromosomes to contribute the gene level rather than the segment level. If the mixing ratio is 0.5, the offspring has approximately half of the genes from first parent and the other half from second parent, although cross over points can be randomly chosen as seen below.
Figure 6.5: Uniform Crossover and Half Uniform Crossover

6.4.5 Blend Crossover

This crossover operator forms a kind of linear combination of the two parents, using the following equations for each gene: the difference between the two parents, scaled by a blending factor, is subtracted from one parent and added to the other, which yields two new children.

Child1 = parent1 - b (parent1 - parent2)    (6.1)

Child2 = parent2 + b (parent1 - parent2)    (6.2)

where b is a random value between 0 and 1. The operator is implemented for real and integer genes only. In this problem I have implemented blend crossover to create the children for the next generation. [9]

6.5 Mutation

Mutation is a genetic operator used to create genetic diversity from one generation of a population of chromosomes to the next; it is analogous to biological mutation. Mutation alters one or more gene values in a chromosome from its current state, so the mutated solution may differ entirely from the previous one. In short, the solution is moved from one point in the search space to another, which is how a GA can reach better solutions and, in particular, escape local minima. Mutation occurs during evolution according to a user-definable mutation probability. This probability should be set low: if it is too high, the search degenerates into a primitive random search, because a solution already close to a minimum is likely to be shifted away from it. The classic example of a mutation operator is a probability that an arbitrary bit in a genetic sequence will be flipped from its original state. The purpose of mutation in GAs is to preserve and introduce diversity: it helps the algorithm avoid local minima by preventing the population of chromosomes from becoming too similar to each other, which would slow or even stop evolution.
This reasoning also explains why most GA systems do not take only the fittest of the population when generating the next generation, but instead use a random (or semi-random) selection weighted toward the fitter individuals. Different genome types call for different mutation types. [9] Since the end goal is to bring the population to convergence, selection and crossover happen frequently (typically every generation). Mutation, being a divergence operation, should happen less frequently and typically affects only a few members of a population (if any) in any given generation.
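The blend crossover of eqs. 6.1 and 6.2, the operator used in this work, can be sketched gene by gene. This is illustrative code; drawing a fresh b per gene is an assumption, since the report later fixes the blend factor at 0.5:

```python
import random

def blend_crossover(parent1, parent2, rng=random):
    """Blend (linear-combination) crossover, gene by gene (eqs. 6.1-6.2)."""
    child1, child2 = [], []
    for p1, p2 in zip(parent1, parent2):
        b = rng.random()                   # blending factor in [0, 1)
        child1.append(p1 - b * (p1 - p2))  # eq. 6.1
        child2.append(p2 + b * (p1 - p2))  # eq. 6.2
    return child1, child2
```

Note that each child gene lies between the two parental genes, and that the sum of the two children equals the sum of the two parents gene by gene, so the operator is a true linear combination.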
6.5.1 Flip Bit

This mutation operator takes the chosen genome and inverts its bits (i.e., if a genome bit is 1, it is changed to 0, and vice versa).

6.5.2 Boundary

This mutation operator replaces the chosen gene with either its lower or its upper bound, chosen at random. It can be used for integer and float genes.

6.5.3 Uniform

This operator replaces the value of the chosen gene with a uniform random value selected between the user-specified upper and lower bounds for that gene. It can only be used for integer and float genes.

6.5.4 Non-Uniform

The non-uniform mutation operator increases the probability that the amount of mutation tends to zero as the generations advance. It keeps the population from stagnating in the early stages of the evolution and fine-tunes the solution in the later stages. It can only be used for integer and float genes.
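The first three mutation operators can be sketched as follows (illustrative code, not the toolkit's implementation):

```python
import random

def flip_bit(bits, prob, rng=random):
    """Flip each bit independently with probability `prob`."""
    return [1 - b if rng.random() < prob else b for b in bits]

def boundary_mutation(genes, low, high, prob, rng=random):
    """Replace a gene by the lower or upper bound, chosen at random."""
    return [rng.choice((low, high)) if rng.random() < prob else g for g in genes]

def uniform_mutation(genes, low, high, prob, rng=random):
    """Replace a gene by a uniform random value within [low, high]."""
    return [rng.uniform(low, high) if rng.random() < prob else g for g in genes]
```

In practice `prob` is the small, user-definable mutation probability discussed above (0.01 in this work's parameter set).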
Chapter 7: Objective Function & Method

7.1 Objective Function

The experiment most commonly employed to determine the relaxation spectrum of a polymer is small-amplitude oscillatory shear. This experiment measures the linear viscoelastic material properties, the storage modulus G'(w) and the loss modulus G''(w) of the dynamic shear, as functions of frequency, which may be expressed as follows:

G'(w) = \sum_{i=1}^{N} g_i \frac{w^2 \lambda_i^2}{1 + \lambda_i^2 w^2}    (7.1)

G''(w) = \sum_{i=1}^{N} g_i \frac{w \lambda_i}{1 + \lambda_i^2 w^2}    (7.2)

[4] where w is the frequency, g_i a relaxation strength, and λ_i a relaxation time. The estimation process consists of setting the pairs (g_i, λ_i) such that G'(w) and G''(w) from eqs. 7.1 and 7.2 best fit the experimental data. Equations 7.1 and 7.2 give the storage modulus and the loss modulus of the dynamic shear, respectively. It has already been observed that spectrum estimation from experimental data is an ill-posed problem. Consequently, minor inaccuracies in the input data may produce major errors in the solution spectra. This characteristic may lead to inadequate solutions with relatively high regression errors and spectra with an unrealistically wavy fitted curve, owing to the excessive degrees of freedom. [4]

7.1.1 Methods

This thesis introduces a nonanalytical regression approach based on evolutionary algorithms for the determination of relaxation spectra. It is suited to handling ill-posed problems and, like nonlinear regression methods, can find all model parameters. The field of evolutionary computation investigates nondeterministic search algorithms for complex optimization problems. These techniques can iterate over several solutions simultaneously and combine the more promising ones to generate a new, improved set of solutions. EAs have been successfully applied to problems with nonlinear or even discontinuous objective functions, nonconvex objective
function space, nonconvex search space, as well as ill-posed problems.

7.1.2 GA Parameters Used in the Experiments

Operator | Value
Population size / number of chromosomes | 100
Size of chromosome | 8
Search space for random generation of genes | 10^-3 to 10^3
Substitution rate | 0.3
Crossover probability | 0.7
Mutation probability | 0.01
Mutation range | 0 to 1
Blend factor | 0.5

Table 7.1: GA Parameters

In our experiment we initialized 100 chromosomes with random numbers drawn from the search-space interval 10^-3 to 10^3. Since we want 8 relaxation modes, the chromosome size is 8: each chromosome holds 8 pairs of g & λ from the interval 10^-3 to 10^3. We then apply the genetic operators explained in detail below. A substitution rate of 0.3 means that 30% of a chromosome is changed, or flipped, with another chromosome. [4]

7.2 Initialization

The population size depends on the nature of the problem, but typically contains several hundreds or thousands of candidate solutions. In a genetic algorithm we initialize the chromosomes with random numbers within the search-space interval; the search space is the probable range of values over which the output is computed. In a binary genetic algorithm, chromosomes are initialized with the binary inputs 0 and 1. Occasionally, the solutions may be "seeded" in areas where optimal solutions are likely to be found.
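The moduli of eqs. 7.1 and 7.2 and the chromosome initialization described above can be sketched as follows. The log-uniform sampling is an assumption on my part, since the report only states the interval 10^-3 to 10^3:

```python
import math
import random

N_MODES = 8
LOW, HIGH = 1e-3, 1e3  # search-space interval from Table 7.1

def storage_modulus(w, g, lam):
    """G'(w) from eq. 7.1 for a discrete spectrum (g_i, lambda_i)."""
    return sum(gi * (w * li) ** 2 / (1.0 + (w * li) ** 2) for gi, li in zip(g, lam))

def loss_modulus(w, g, lam):
    """G''(w) from eq. 7.2."""
    return sum(gi * (w * li) / (1.0 + (w * li) ** 2) for gi, li in zip(g, lam))

def random_chromosome(rng=random):
    """One chromosome: 8 (g, lambda) pairs drawn from the search space.
    Log-uniform sampling spreads the draws evenly over the decades."""
    def draw():
        return 10 ** rng.uniform(math.log10(LOW), math.log10(HIGH))
    return [(draw(), draw()) for _ in range(N_MODES)]
```

For a single mode with g = 2 and λ = 1, both moduli equal 1 at w = 1, which is a quick sanity check of the formulas.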
In our problem we initialize K chromosomes with random numbers generated within the search space. The ranges for g and λ differ: g is in decreasing order while λ is in increasing order, so deciding or fixing the search space is a critical step. A further difficulty is that a large search-space interval gives accuracy but requires extra time.

7.3 Fitness Values

After initializing the chromosomes, the objective function gives, for each chromosome, a predicted (calculated) value; comparing this prediction with the actual data yields an error value. Using that error value we can rank the chromosomes, which supports the selection step: the fitness score decides which chromosome is the best and which is the worst. In our algorithm the objective is to minimize the error, so the objective function is

M(G', G'', C) = MSE(G'_c) + MSE(G''_c)    (7.4)

where MSE denotes the mean square error. Sorting the chromosomes by their mean square error, we select the best chromosomes for the creation of the next generation.
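The objective of eq. 7.4 can be sketched as follows (illustrative code; evaluating both moduli on the same frequency grid is an assumption):

```python
def storage(w, g, lam):
    """G'(w), eq. 7.1."""
    return sum(gi * (w * li) ** 2 / (1 + (w * li) ** 2) for gi, li in zip(g, lam))

def loss(w, g, lam):
    """G''(w), eq. 7.2."""
    return sum(gi * (w * li) / (1 + (w * li) ** 2) for gi, li in zip(g, lam))

def mse(pred, obs):
    """Mean square error between predicted and observed values."""
    return sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)

def objective(chromosome, w_data, Gp_data, Gpp_data):
    """Eq. 7.4: M = MSE(G'_c) + MSE(G''_c); lower means fitter."""
    g, lam = zip(*chromosome)
    Gp = [storage(w, g, lam) for w in w_data]
    Gpp = [loss(w, g, lam) for w in w_data]
    return mse(Gp, Gp_data) + mse(Gpp, Gpp_data)
```

A chromosome that reproduces the data exactly scores zero, and the ranking of the population by this value drives the selection step described next.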
7.4 Selection

After deciding on a fitness function, the second decision in using a genetic algorithm is how to perform selection, that is, how to choose the individuals in the population that will create offspring for the next generation, and how many offspring each will create. The aim of selection is to favour the fitter chromosomes in the population, so that their offspring tend to have higher fitness than their parents. Selection has to be balanced against the variation introduced by crossover and mutation. If selection is too strong, suboptimal but highly fit individuals take over the population, reducing the diversity needed for further change and progress; if it is too weak, evolution is too slow. As was the case for encodings, numerous selection schemes have been proposed in the GA literature.

7.4.1 Roulette Wheel Selection

To produce new offspring, parents are selected according to their fitness: the better the chromosomes, the more chances they have of being selected. Imagine a roulette wheel on which all chromosomes in the population are placed, each occupying a slot sized according to its fitness, as in Figure 7.1. If a chromosome's slot is big, its probability of being selected is high; if the slot is small, the probability is low. A marble is then thrown onto the wheel and selects a chromosome; chromosomes with higher fitness are selected more often. [2]

7.4.2 Rank Selection

Rank selection is an alternative method whose purpose is also to prevent too-quick convergence. In the version proposed by Baker (1985), the individuals in the population are ranked according
Figure 7.1: Roulette Wheel Selection

to fitness, and the expected value of each individual depends on its rank rather than on its absolute fitness. There is no need to scale fitness in this case, since absolute differences in fitness are obscured. This discarding of absolute fitness information can have advantages (using absolute fitness can lead to convergence problems) and disadvantages (in some cases it might be important to know that one individual is far fitter than its nearest competitor). Ranking avoids giving the far largest share of offspring to a small group of highly fit individuals, and thus reduces the selection pressure when the fitness variance is high. It also keeps up selection pressure when the fitness variance is low: the ratio of expected values of individuals ranked i and i+1 is the same whether their absolute fitness differences are high or low. [2]

The previous selection method runs into problems when the fitnesses differ greatly. For example, if the best chromosome's fitness covers 90% of the roulette wheel, the other chromosomes have very few chances to be selected. Rank selection first ranks the population and then gives every chromosome a fitness derived from this ranking: the worst has fitness 1, the second worst 2, and so on, up to the best with fitness N (the number of chromosomes in the population). Figure 7.2 shows how the situation changes after replacing fitness by rank.

Figure 7.2: Before & After Rank Selection

After this, all chromosomes have a chance to be selected. This method can, however, lead to slower convergence, because the best chromosomes do not differ much from the other ones.
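Roulette-wheel and rank selection can be sketched as follows. Both assume "bigger is better" scores, so for our error-minimizing objective the errors would first have to be negated or inverted (illustrative code, not the toolkit's implementation):

```python
import random

def roulette_select(population, scores, rng=random):
    """Fitness-proportionate selection: bigger score, bigger wheel slot."""
    total = sum(scores)
    pick = rng.uniform(0, total)  # where the marble lands on the wheel
    running = 0.0
    for individual, score in zip(population, scores):
        running += score
        if pick <= running:
            return individual
    return population[-1]  # guard against floating-point round-off

def rank_fitness(scores):
    """Rank-based scores: the worst individual gets 1, the best gets N."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    ranks = [0] * len(scores)
    for rank, idx in enumerate(order, start=1):
        ranks[idx] = rank
    return ranks
```

Feeding `rank_fitness` output into `roulette_select` gives exactly the "before and after" behaviour of Figure 7.2: the wheel slots become proportional to rank instead of raw fitness.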
7.4.3 Tournament Selection

Tournament selection is a method of selecting an individual from a population of individuals in a genetic algorithm. It is like forming groups, letting their members compete, and choosing the chromosome that wins the tournament. Selection pressure is easily adjusted by changing the tournament size: the larger the tournament, the smaller the chance that weak individuals are selected. (By contrast, rank scaling requires sorting the entire population by rank, a potentially time-consuming procedure.) In the binary form, two individuals are chosen at random from the population and a random number r between 0 and 1 is drawn. If r < k (where k is a parameter, for example 0.75), the fitter of the two individuals is selected to be a parent; otherwise the less fit individual is selected. The two are then returned to the original population and can be selected again. [2]

Figure 7.3: Tournament Selection

7.4.4 Elitism

The best chromosome can sometimes be lost when crossover or mutation produces children that are weaker than the parents. To prevent this we can use a feature known as elitism. Elitism involves copying a small proportion of the best chromosomes, unchanged, into the next generation. This can have a dramatic impact on performance by ensuring that the algorithm does not waste time rediscovering previously discarded partial solutions. The best chromosomes preserved unchanged through elitism remain eligible for selection as parents when breeding the remainder of the next generation. [2]
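Binary tournament selection with parameter k, and a simple elitism step, can be sketched as follows. This is illustrative code; the helper `with_elitism` and its `n_elite` parameter are my own names, not from the report:

```python
import random

def tournament_select(population, errors, k=0.75, rng=random):
    """Binary tournament: with probability k pick the fitter (lower-error)
    of two randomly chosen individuals, otherwise the less fit one."""
    i = rng.randrange(len(population))
    j = rng.randrange(len(population))
    fitter, weaker = (i, j) if errors[i] <= errors[j] else (j, i)
    return population[fitter] if rng.random() < k else population[weaker]

def with_elitism(population, errors, next_generation, n_elite=2):
    """Copy the n_elite lowest-error individuals unchanged into the next
    generation, replacing its last n_elite members."""
    elite_idx = sorted(range(len(population)), key=lambda i: errors[i])[:n_elite]
    elites = [population[i] for i in elite_idx]
    return next_generation[:len(next_generation) - n_elite] + elites
```

Because the elites are copied verbatim, the best error in the population can never get worse from one generation to the next.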
Chapter 8: Termination Criteria & Result

8.1 Optimal Chromosome

After running the algorithm until the termination criterion is met, it returns one optimal, or best, chromosome. Using the gene values of the best chromosome we can calculate G' and G'' at the different frequencies of the experimental data. To inspect the model fit, we plot these curves on the same plot as the experimental values. If in our case the best chromosome is C_x, we use its values of g and λ to calculate G' and G'' at the different frequencies w. [2]

Figure 8.1: Graph of w vs G'
Figure 8.2: Graph of w vs G''

Algorithm | AAD (w vs G') | AAD (w vs G'')
Levenberg-Marquardt algorithm | 0.0088938 | -0.01532
Genetic Algorithm | 0.0503931 | 0.022601

Table 8.1: AAD values of G' & G''

8.2 Absolute Average Deviation (AAD)

Deviation is a measure of the difference between the observed value of a variable and some other value, often that variable's mean. The sign of the deviation (positive or negative) reports the direction of the difference: the deviation is positive when the observed value exceeds the reference value. The magnitude of the deviation indicates the size of the difference. Since we are not interested in the sign and only want to minimize the error, we take the absolute value; this is why we consider the absolute average deviation: [9]

AAD(G') = \frac{1}{M'} \sum_{j=1}^{M'} \left| \frac{\bar{G}'(w_j) - G'(w_j)}{\bar{G}'(w_j)} \right|    (8.1)

AAD(G'') = \frac{1}{M''} \sum_{j=1}^{M''} \left| \frac{\bar{G}''(w_j) - G''(w_j)}{\bar{G}''(w_j)} \right|    (8.2)

where \bar{G}'(w_j) and \bar{G}''(w_j) are the calculated values, G'(w_j) and G''(w_j) are the experimental values, and M' and M'' are the numbers of available data points for G' and G'', respectively.
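The AAD of eqs. 8.1 and 8.2 can be sketched as follows (illustrative code, not the toolkit's implementation):

```python
def aad(calculated, experimental):
    """Absolute average deviation (eqs. 8.1-8.2): the mean absolute
    relative difference between calculated and experimental moduli."""
    terms = [abs((c - e) / c) for c, e in zip(calculated, experimental)]
    return sum(terms) / len(terms)
```

The same function serves for both moduli; it is called once with the G' series and once with the G'' series.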
Chapter 9: Graphical User Interface (GUI)

Following are screenshots of the software toolkit, named "RheomlFit". It is an open-source toolkit; all files related to it are available at http://cms.unipune.ac.in/~ajayj/RheomlFit.html. The steps required to run the code are:

• For the Windows operating system:
1. Go to the link http://cms.unipune.ac.in/~ajayj/RheomlFit.html
2. Click the button to download the RheomlFit folder
3. Go to the Windows folder and install RheomlFit.exe

• For the Linux operating system:
1. Go to the link http://cms.unipune.ac.in/~ajayj/RheomlFit.html
2. Click the button to download the RheomlFit folder
3. Go to the Linux folder
4. Open a Matlab/GNU-Octave console and run the 'gui' file; this opens the main window of the GUI.

• How to operate the application:
1. After installation completes, the main window of the software opens (Figure 9.1).
2. Press the Enter button; a new window opens displaying the options (Figure 9.2).
3. "Open Input File" opens a dialogue box for file selection. The input file must be in .txt format without any headers (Figure 9.3).
4. "Number of Modes" lets you insert or choose the number of relaxation modes (Figure 9.4).
5. "Compute" starts the computation and simultaneously displays a small status window, updated in 20-percent intervals (Figures 9.5-9.7).
6. After the computation completes, the GUI displays the fitted curve.
7. Clicking the "Graph of Relaxation Spectra" button displays the curve of g vs λ, which is the generated output (Figure 9.8).
8. Clicking the "Download Output" button asks you to choose a directory; after you select a folder, the output file is automatically saved there (Figure 9.9).

Figure 9.1: Main window of GUI
Figure 9.2: Second window of GUI

Figure 9.3: File selection dialogue box
Figure 9.4: User input for number of modes

Figure 9.5: Waitbar of code started
Figure 9.6: Waitbar of code about to finish

Figure 9.7: Computation is done
Figure 9.8: Relaxation spectra graph button

Figure 9.9: Download output file
Chapter 10: Conclusion

The determination of the discrete relaxation spectrum of a viscoelastic material is a difficult but important process, and it is well known to be an ill-posed problem. We have introduced a new nonlinear regression technique, based on the Levenberg-Marquardt procedure, for the determination of the discrete relaxation spectrum, and we have implemented a genetic algorithm in order to fix the relaxation modes. To make this process user-friendly we have designed a graphical user interface. We have published our code as open source for those who want to extend it and for researchers who may want to analyze particular features of the algorithm.
Bibliography

[1] M. Baumgaertel and H. Winter. Determination of discrete relaxation and retardation time spectra from dynamic mechanical data. Rheologica Acta, vol. 28, 1989.

[2] D. A. Coley. An Introduction to Genetic Algorithms for Scientists and Engineers. World Scientific, Singapore.

[3] J. D. Ferry. Viscoelastic Properties of Polymers, Third Edition. John Wiley and Sons, New York.

[4] F. J. Monaco and A. C. B. Delbem. Genetic Algorithm for the Determination of Linear Viscoelastic Relaxation Spectrum from Experimental Data. Wiley InterScience, University of São Paulo, Brazil, 1991.

[5] H. Pol, S. Banik, L. B. Azad, S. Thete, P. Doshi, and A. Lele. Nonisothermal analysis of extrusion film casting process using molecular constitutive equations. Springer-Verlag Berlin Heidelberg, 2013.

[6] H. V. Pol, S. S. Thete, P. Doshi, and A. K. Lele. Necking in extrusion film casting: The role of macromolecular architecture. The Society of Rheology, 2013.

[7] J. Honerkamp and J. Weese. A nonlinear regularization method for the calculation of relaxation spectra. Rheologica Acta, vol. 32, 1993.

[8] K. Madsen, H. B. Nielsen, and O. Tingleff. Methods for Non-Linear Least Squares Problems, Second Edition. Informatics and Mathematical Modelling, Technical University of Denmark, 2004.

[9] K. Sastry and D. Goldberg. Genetic Algorithms. University of Illinois, USA.

[10] Introduction to Rheology. RheoTec Messtechnik GmbH, Ottendorf-Okrilla.

[11] H. M. Wyss. Measuring the Viscoelastic Behaviour of Soft Materials. Harvard University, Cambridge, USA.

[12] H. Yu and B. M. Wilamowski. Industrial Electronics Handbook, vol. 5: Intelligent Systems. CRC Press, 2011.
Appendix A: Acronyms

DRS  Discrete Relaxation Spectra
GA   Genetic Algorithm
ML   Marquardt-Levenberg
GD   Gradient Descent
GN   Gauss-Newton
NLR  Non-Linear Regression