Development of an Interferometric Biosensor
Master Thesis
Robert MacKenzie
Submitted to
Institute for High-Frequency and Quantum Electronics (IHQ)
Universität Karlsruhe (TH), Germany
Carried out at
Fraunhofer Institute for Physical Measurement Technology (IPM)
Freiburg, Germany
October 31, 2003
Declaration
With this statement I ensure that the submitted thesis is a product of my own work,
except for those aids, materials and assistance known to my supervisor. Furthermore,
I have acknowledged the use of all information, results and work of others with exact
and complete references.
Karlsruhe, 31. October, 2003 Robert MacKenzie
Acknowledgements
Firstly, I would like to acknowledge Prof. Dr. Wolfgang Freude and express my thanks
to him for the permission to perform my research external to the University of Karlsruhe.
The communication and coordination of my work and presentations with him has been
seamless.
Secondly, I owe a great deal of thanks and gratitude to Dr. Bernd Schirmer, my project
supervisor and mentor for the duration of this project. Many intelligent suggestions, in-
teresting experiments, and volumes of patience and encouragement have helped
me to thoroughly enjoy every moment of our working time together. His professional-
ism, criticism, approachability and good taste in music have also been paramount to
the successful completion of this thesis.
Contents

1 Introduction
   1.1 Biosensors & Label-Free Detection
   1.2 System Overview
2 Fundamental Theory
   2.1 Evanescent-Field Sensing Technology
   2.2 Interferometry & Diffraction
      2.2.1 Young’s Interferometer
      2.2.2 Single-Slit Diffraction
      2.2.3 Double-Slit Diffraction
   2.3 System Principles & Relationships
      2.3.1 Determination of Phase Difference
      2.3.2 Determination of Refractive Index Difference
      2.3.3 System Noise & Sources of Error
      2.3.4 Phase Error & Uncertainty
   2.4 Fourier Analysis & Signal Processing
      2.4.1 Fourier Representations of Functions
      2.4.2 The Continuous Fourier Transformation
      2.4.3 Discrete Signals
      2.4.4 The Discrete Fourier Transformation
      2.4.5 Distortion Effects in Signal Analysis
      2.4.6 Signal Processing
3 Interferometric Sensor System
   3.1 Detailed System Configuration
      3.1.1 Lasers & Light Sources
      3.1.2 Lenses & Beam Forming
      3.1.3 Flow Cells
      3.1.4 Optical Chips
      3.1.5 Double Slit
      3.1.6 Optical Chip Mount
      3.1.7 CCD Camera
      3.1.8 Pump
   3.2 System Algorithms
      3.2.1 FFT-Based Measurement Algorithm
      3.2.2 Fourier-Coefficient Correlation Algorithm
   3.3 Software Development Environments
      3.3.1 LabVIEW
      3.3.2 Visual C++ & Dynamic Linking Libraries
      3.3.3 MATLAB
   3.4 Interferometric Measurement Program
4 Measurements & Results
   4.1 System & Chip Measurements
      4.1.1 Coupling Efficiency
      4.1.2 Scattering & Spreading Measurements
      4.1.3 Scattering Improvements
   4.2 Signal Analysis & Processing
      4.2.1 The Interferogram
      4.2.2 Fourier Transformation of the Interferogram
      4.2.3 Signal Processing
      4.2.4 Signal Noise & Uncertainty
   4.3 Test Measurements with Glycerin
      4.3.1 Detection Limit
      4.3.2 Measurement Drift
   4.4 Comparison of System Algorithms
5 Summary
   5.1 Future Prospects
References
A Appendix
   A.1 Single-Slit Geometry
   A.2 LabVIEW & C-Coded Algorithms
   A.3 Refractive Index Tables & Plots
1 Introduction
In the medical and pharmaceutical industries there is a demand for stable measurement
systems that can deliver low-cost and highly sensitive biomolecular detection. The
interferometric biosensor is a single-sensor, real-time optical measurement system for
label-free analysis and detection, designed to meet these requirements. As proposed
by A. Brandenburg [2, 3], the interferometric biosensor (i.e. the system under
development) is based on the principle of a Young interferometer and is potentially
much more sensitive than conventional systems in the field of label-free detection
and measurement.
While the theory surrounding Young interferometry is at the core of the design, the key
component of the optical system is the transducer chip with a mono-mode
waveguide film. These chips offer a reusable and inexpensive method for surface
sensing. Through the application of evanescent-field sensing technology, section 2.1, it
is possible to detect changes in adsorbate layer thickness or mass coverage on a bio-
chemically active surface [5]. This, in combination with additional optical hardware and
software-based signal analysis and processing algorithms, enables the final realization
of the Young interferometer as an effective biosensing device.
The focus of this thesis is to demonstrate the development of the interferometric biosen-
sor through the discussion of the following tasks and the presentation of their results:
• Improvement in the optical construction
• Analysis and determination of the coupling properties of the optical chips
• Development of the system measurement software
• Maintenance and improvement of the detection and data collection system
• Implementation and improvement of the system signal analysis algorithms
• Adaptation and application of signal processing algorithms
• Determination and reduction of noise effects and disturbances (e.g. signal drift)
in the system and their influence on the final measurement
• Execution of test measurements
• Determination of measurement parameters (e.g. detection limit, measurement
time constant)
1.1 Biosensors & Label-Free Detection
A sensor, in reference to the field of engineering, is an electronic device for measuring
physical quantities by converting the information into an electronic signal. A biosen-
sor is simply a specific type of sensor for retrieving this converted information from a
biological or physiological process.
Due to its function as a real-time detection system, the interferometric biosensor is able
to perform kinetic analysis for the examination of biochemical & biomolecular reac-
tions. This further allows the investigation of affinity and binding reactions, specific ana-
lyte detection, analysis of protein interactions, concentration determination, and more.
Therefore, the target applications of the interferometric biosensor are for the areas of
pharmaceutical research, medical diagnostics and substance screening.
Label-free detection and measurement is essentially the direct sensing of samples with-
out the prerequisite of an elaborate and often expensive sample preparation. For this
reason label-free detection is extremely attractive for specific application in protein re-
search. Since measurements involving proteins are so important, for example in drug
and medicine research, there exists a great need for the capability to directly mea-
sure uninfluenced protein reactions and bindings. Label-free measurement techniques
have this capability, which stems largely from the application of evanescent-field sens-
ing technology, section 2.1.
A disadvantage of label-free detection is the generally lower sensitivity compared to
very effective label-based measurement systems. Labeling, however, requires an elab-
orate preparation of the sample material, which often involves some type of chemical
marking (e.g. fluorescence) or the creation of reporter molecules in order to retrieve
the wanted information from a measurement or reaction. However, due to the geom-
etry and nature of proteins, these markers or labels often interfere severely with the
measurements and the reactions under analysis. This is a major disadvantage of label-
based detection.
Therefore, the interferometric biosensor represents an alternative, label-free approach
to the optical measurement of specific biomolecular and biochemical reactions,
which lie beyond the current limitations of many labeling technologies, specifically
in protein research. The interferometric biosensor is thus not a technology competing
with label-based detection; instead, it competes with other label-free detection technolo-
gies targeting protein research. Such competing technologies would be the grating
coupler or the SPR (Surface Plasmon Resonance) system, which is currently the most
widely used label-free detection system [5].
1.2 System Overview
This section provides a general overview of the interferometric biosensor, which is useful
for the better understanding of the coming theory. A more detailed system description
is given in section 3.1.
Figure 1: Simple system diagram of the interferometric biosensor.
Figure 1 illustrates the general system layout and design principle of the interferometric
biosensor under development. Depending on the light source (e.g. Helium-Neon laser,
with a wavelength λ) and the optical chip properties (e.g. grating couplers), an optical
configuration first forms the laser beam. The result of the beam forming is an elliptical,
almost line-shaped, beam profile in order to better adjust the beam to the dimensions
of the optical grating.
Next is the most essential system element: the optical chip. A spatial filter, in the form
of a double slit, may be placed at the input coupling of the chip to reduce
coupled light that is not related to the regions of interest (i.e. the sensing path and
the reference path). The result is two parallel light beams originating from the two slits
of the initial double slit. Coupled light outside the regions of interest could lead to
internal scattering, which would interfere with the desired optical sensing output. For
further information on this internal scattering, also known as M-lines, please refer to
section 3.1.4. The slit dimensions of the initial double slit are almost uncritical, since
the optical path length is too small to allow unwanted diffraction and interference of
the propagating light beams before their interaction at the output double slit, whose
slit width w is smaller than that of the initial double slit.
As mentioned earlier, the sensing and reference regions (length L) are parallel along
the path of two coupled light beams. Through the application of evanescent-field sens-
ing these parallel light beams interact with the material forming the superstrate.
These light beams are coupled out of the chip after propagating through the mea-
surement regions and are forced to interfere through diffraction as a result of their inter-
action with an output double slit. This forms the interferogram, which is detected and
encoded by a CCD camera at a distance D large enough to fulfill the small-angle
criterion, i.e. CCD distance D >> slit width w.
It is then the difference of the changes of the optical properties in the sensing and refer-
ence regions (i.e. measurement regions) that eventually alters the interference pattern
at the output double slit and finally the form of the interferogram. For the interferomet-
ric biosensor, the phenomenon to be detected is the individual change in phase ve-
locity resulting from the propagation through the measurement regions. This is covered
in greater detail in section 2.1. Depending on this difference relationship the interfer-
ogram undergoes a relative phase shift along a spatial axis. The direction of this shift
depends on the choice of the sensing path. For example, if path 1 is chosen, then the
difference between the paths could be positive (e.g. a shift in the positive direction).
If path 2 were chosen for the same measurement instead, the sign of the difference,
and hence the shift of the interferogram, would reverse.
This difference relationship inherent to the interferogram is extremely beneficial, since
the optical properties in the regions of interest are compared relative to each other.
It allows for the almost complete cancellation of localized disruptions (e.g. temperature,
pressure), since all conditions in the measurement regions should be identical, with the
only disparity being the fundamental difference of the effective refractive index.
2 Fundamental Theory
Throughout the development of this project different theories from a variety of areas
have been applied. This section describes the fundamentals of the major theoretical
principles, in such areas as optics, physics, biology, signal processing and computing,
and their connection to the overall project.
Evanescent-field sensing and its use with interferometry are explored in the first two
sections of this chapter. The third section discusses the specific system principles and
relationships of the interferometric biosensor, resulting in the derivation of the measured
parameters needed for the determination of the final measurement signal (i.e. phase
changes and changes in the effective refractive index). The sections following this de-
scribe the various noise effects and influences present in the system, as well as exploring
the nature of phase error and fluctuations present in the final measurement signal.
The final major section gives an in-depth explanation of the Fourier transform, which
forms the basis of the various signal analysis algorithms. These algorithms are the math-
ematical mechanism for the determination and calculation of the system variables
and final measurement signal. Associated with these algorithms are errors and noise
sources inherent to the Fourier methods. As a final topic of the chapter, signal processing
theory and methods are discussed for the overall compensation and reduction of
error and noise in the physical system and the analysis algorithms.
2.1 Evanescent-Field Sensing Technology
With the application of thin-film waveguides it is possible to excite a guided mode of
coupled light, which possesses an evanescent field distribution that decays exponen-
tially into both the substrate (e.g. optical glass) and the superstrate (e.g. an interface,
such as air, fluid, biological sample). With this configuration the change in the optical
properties of the superstrate is probed by the evanescent tail of the mode propagating
along the waveguide structure. An interaction with the superstrate will both alter the
tail’s propagation speed, caused by a change in refractive index, and alter its attenuation,
caused by a change in the absorption coefficient. These alterations would then
also be detectable as a corresponding change in the phase velocity of the guided
wave. Figure 2 illustrates this.
Figure 2: Overview of evanescent-field sensing technology with an optical chip.
The phase velocity v_p can be used in combination with the speed of light c to introduce
a new quantity, the effective refractive index n_eff:

Effective Refractive Index = Speed of Light / Phase Velocity, or n_eff = c / v_p    (1)
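As a quick numerical illustration of Equation (1), the conversion from phase velocity to effective refractive index can be sketched in a few lines of Python. The velocities below are illustrative values, not measurements from this system:

```python
# Sketch of Equation (1): n_eff = c / v_p. The phase velocities are illustrative.
C = 299_792_458.0  # speed of light in vacuum [m/s]

def n_eff(v_phase: float) -> float:
    """Effective refractive index for a given phase velocity v_p [m/s]."""
    return C / v_phase

# Two states of the superstrate, e.g. before (v_1) and after (v_2) a binding
# reaction; the values are chosen so the index changes by 2.1e-5:
v_1 = C / 1.520000
v_2 = C / 1.520021
delta_n_eff = n_eff(v_2) - n_eff(v_1)
print(f"{delta_n_eff:.1e}")  # change in effective refractive index, ~2.1e-05
```

This is exactly the kind of small ∆n_eff that the interferometer is designed to resolve, as discussed further in section 2.3.2.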
Image 1 in Figure 2 shows the basic structure of an optical chip. Image 2 then repre-
sents this chip with an interface or superstrate, written as "Fluid". Intuitively this fluid has a
refractive index, and its corresponding change in phase velocity v_ref can be cal-
culated either in relation to this refractive index or with an effective refractive index. In Image
3 the necessity of an effective refractive index is clearer, since it is not intuitively correct
to speak of the refractive indices of molecules. However, since it is obvious that these
molecules will interact with the evanescent field, it is easily understood that the result
will be a phase velocity v_1 smaller than the speed of light. Image 4 is a continuation
of Image 3. Here, the introduction of new molecules (e.g. representing a binding
reaction of antibodies and antigens) further alters the optical properties of the super-
strate. This causes a change in the phase velocity v_2, which can be compared with v_1 to
obtain a change in the effective refractive index ∆n_eff. This ∆n_eff is then the measured
result of, e.g., introducing antigens to a biochemically prepared chip surface.
Through interaction with the evanescent wave, the binding of target bioagents to the
receptors on the sensing surface produces a phase change in the guided light beam
relative to a parallel reference light beam. These two parallel beams propagate along
separate regions of the optical chip surface and form the basis of an interferometer, or
an optical sensing circuit. In the specific case of the interferometric biosensor, these two
parallel beams are coupled out of the optical chip via an optical grating. This output
then impinges upon a double slit to induce diffraction and interference. This is visible in
Figure 1. Interferometry is discussed in greater detail in the following section.
2.2 Interferometry & Diffraction
Interference is the term from which both "interferometry" and "interferogram" are de-
rived. Within the context of this project, interference is a phenomenon involving waves
of any kind when they interact at the same time and place. Interference can be vi-
sualized as the superposition (i.e. addition and subtraction) of two or more waves. The
result is either constructive interference, destructive interference, or no resulting
interference (e.g. if the waves are orthogonal to each other).
An interferometer is then a device for producing interference between two or more
waves. Thus, an interferogram is quite literally the resulting diagram of the interference,
which is recognizable by a number of light fringes. The fringes are bright where the
waves interfere constructively, and dark where the waves interfere destructively.
Figure 3 illustrates the visible interference patterns of a single slit and a double slit
from a Helium-Neon laser (λ = 632.8 nm).
Figure 3: Fringes from a single slit and a double slit. The pattern displays 4 modes (i.e. one max-
imum and 3 neighbouring maximums on each side).
With all other parameters remaining equal (e.g. slit width, wavelength, slit separation,
etc.) the single slit pattern represents the envelope of all further multiple-slit patterns.
Each visible spot maximum, or maximum of constructive interference, is also known as
a mode. In an interferogram there are (2 × Modes) − 1 maximums. Since there are seven
maximums in the single-slit pattern of Figure 3, there are four modes overall. The main
maximum in the center is mode zero (m=0). The maximums immediately neighbouring
the main maximum on the left and the right are mode one (m=1), then mode two, and
so on.
For the single-slit pattern, the difference in intensity is quite easy to determine between
the bright and dark fringes, or spots. There is also a difference in intensity between
the visible spots (i.e. not all maximums have the same intensity). Thus, another form of
viewing an interferogram is to measure and plot its intensity distribution over space.
Figure 4 depicts the intensity distribution over space of an ideal interferogram. For a
double slit, the far-field interferogram consists of a cos²(α₁) interference pattern, which
is modulated by the envelope (sin(α₂)/α₂)². Both the envelope and the interference
pattern are depicted.
Figure 4: Envelope and interference pattern from a double slit, which shows 4 modes: m=0,1,2,3
The theory of single and double slits is explained in further detail in combination with
Fraunhofer diffraction.
2.2.1 Young’s Interferometer
The interferometric system of this project is based on the Young Interferometer. This type
of interferometer is essentially a wavefront splitting interferometer, where monochro-
matic plane waves impinge upon a surface with two small holes or slits. When these
slits, separated by d and with a width w, can be considered a line of point sources,
then a fringe pattern, as discussed earlier, can be seen on a screen or by a camera at
a sufficiently large distance D, as best illustrated in Figure 1 and Figure 6.
The principles and operation of the Young Interferometer are described systematically
through the examination of single-slit diffraction and double-slit diffraction, also often
referred to as Young’s Double-Slit Experiment.
2.2.2 Single-Slit Diffraction
Light in the form of plane waves, impinging upon a narrow slit, produces a dis-
tributed pattern of light through diffraction. Diffraction is the phenomenon by
which wavefronts of propagating waves bend upon interaction with obstacles.
Figure 5: Single-slit diffraction phenomenon, where r is the radius of the aperture and D is the
distance from the aperture.
In simple cases diffraction can often be further identified as either Fresnel (near-field)
or Fraunhofer (far-field) diffraction. The region of interest can be determined by the
Fresnel number:

N_F = r²_aperture / (λD)    (2)
These relationships are discussed by Born and Wolf in [9]. Referring to Figure 5 and Equa-
tion (2), if N_F > 1 the region is the near field, and if N_F < 1 it corresponds to the far
field. The interferometric biosensor, based on the Young interferometer, operates in
the far field, and therefore the diffraction of interest is Fraunhofer diffraction.
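This classification can be sketched numerically; the helper names and example dimensions below are illustrative, not taken from the thesis:

```python
# Fresnel number N_F = r^2 / (lambda * D), Equation (2); illustrative values.
def fresnel_number(r_aperture: float, wavelength: float, distance: float) -> float:
    """Fresnel number for an aperture of radius r_aperture [m]."""
    return r_aperture ** 2 / (wavelength * distance)

def regime(n_f: float) -> str:
    """Near field for N_F > 1, far field for N_F < 1."""
    return "near field (Fresnel)" if n_f > 1 else "far field (Fraunhofer)"

# A 10 um aperture radius, HeNe light (632.8 nm), detector at 10 cm:
n_f = fresnel_number(10e-6, 632.8e-9, 0.10)
print(regime(n_f))  # -> far field (Fraunhofer)
```

For slit dimensions on the micrometer scale and detector distances on the centimeter scale, N_F is far below one, confirming that the Fraunhofer treatment used in the following sections applies.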
It is the single-slit diffraction which determines the fundamental shape of the interfero-
gram, or rather the interferogram envelope, as described by the following relationship:

W = 2λD / w    (3)

In words, the envelope width W increases with an increase in the wavelength λ and/or
a decrease in the slit width w. As mentioned in section 2.2, the shape of the envelope
is determined by the (sin(α₂)/α₂)² function.
2.2.3 Double-Slit Diffraction
The intensity distribution resulting from the addition of a second slit follows the cosine
function, more precisely the cos²(α₁) function in the case of the double slit. Despite a
new intensity distribution, the envelope of this interferogram is still governed in general
by the properties of the single slit for the case w << d.
Important for the understanding of interferometry as a tool for measurement is to un-
derstand how any change in the interferogram can be detected and represented by
a meaningful quantity or parameter. A starting point is the direct analysis of the Young
Double-Slit Experiment, shown in Figure 6.
Figure 6: Young’s experiment. Two slits of width w, separated by d, with a distance D to a screen
in the far-field. ∆s represents the difference in path length of light beams from the
various slits and x is the spatial offset of the interferogram.
Direct trigonometry yields the following relationships from the double-slit geometry:

tan θ = ∆x / D    (4)

sin θ = ∆s / d    (5)

Further, with the assumption that D >> w, the application of the small-angle approxi-
mation yields:

tan(θ) ≈ sin(θ) = ∆x / D
and

∆s = d∆x / D    (6)
Equation (6) is an extremely important relationship. It is able to connect the spatial
offset of the interferogram to the change in path length ∆s. Also of great importance is
the phase difference of the two interfering beams, ∆φ, which is given below together
with the result of substituting Equation (6):

∆φ(x) = 2π∆s / λ = 2πd∆x / (Dλ)    (7)
One further relationship, which can be calculated with the introduced parameters and
variables, is the period of the interferogram:

p = Dλ / d    (8)
These relationships, and how they depend on one another in the interferometric
biosensor, are derived and discussed in section 2.3.
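Equations (3) and (6)–(8) can be combined in a short numerical sketch. The slit dimensions below are illustrative and are not the actual dimensions of the system:

```python
import math

# Illustrative double-slit geometry (not the thesis' actual dimensions).
wavelength = 632.8e-9  # HeNe laser [m]
w = 10e-6              # slit width [m]
d = 100e-6             # slit separation [m]
D = 0.20               # distance to the detector [m], D >> w

W = 2 * wavelength * D / w  # envelope width, Equation (3)
p = D * wavelength / d      # interferogram period, Equation (8)

def phase_difference(delta_x: float) -> float:
    """Phase difference for a spatial interferogram offset delta_x, Eq. (7)."""
    delta_s = d * delta_x / D  # path-length difference, Equation (6)
    return 2 * math.pi * delta_s / wavelength

# Consistency check: a shift of one full period p corresponds to exactly 2*pi.
assert abs(phase_difference(p) - 2 * math.pi) < 1e-12
print(f"W = {W * 1e3:.1f} mm, p = {p * 1e3:.2f} mm")
```

Note how the envelope (set by the slit width w) is much wider than the fringe period (set by the slit separation d), which is what makes several fringe modes visible under the single-slit envelope.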
2.3 System Principles & Relationships
Now that a certain theoretical foundation has been laid, it is possible to expand
upon the interferometric relationships introduced thus far. Recall that
the approximate intensity distribution resulting from Young’s double-slit experiment is:

I(x) ∝ cos²(φ(x) − δ)    (9)

The parameter φ(x) from Equation (9) can also be written as follows:

φ(x) = 2πfx    (10)

Equation (10) is the radian representation of φ(x) for the cos² function. The variable x
has already been defined as an offset along a spatial axis, Figure 6. The variable f is the
spatial frequency of the cos² function. Recall Equation (8): it is also well known in
interferometry that the spatial period p of an interferogram is p = Dλ/d.
2.3.1 Determination of Phase Difference
Knowing that the spatial period is simply the inverse of the spatial frequency, Equation
(10) can be re-written as follows and expanded with Equation (8):

φ(x) = 2πx / p  or  ∆φ(x) = 2π∆x / p = 2πd∆x / (Dλ)    (11)
For a second confirmation of this result, recall Equation (7), which was derived directly
from the Young double-slit geometry:

∆φ(x) = 2π∆s / λ = 2πd∆x / (Dλ)    (12)
The other variable of Equation (9) is the phase shift δ of the intensity distribution I(x). As
stated by A. Brandenburg [2], "the phase shift denotes the influence of the media in the
flow cells." This phase shift depends on the difference in effective refractive index
∆n_eff in the flow cells and on the length L of the flow cells, both of which were introduced
in section 1.2.

δ = 2πL∆n_eff / λ    (13)
Comparing Equation (12) and Equation (13), the result is:

∆x = DL∆n_eff / d    (14)

When the result from Equation (14) is re-substituted into Equation (12), the following is
obtained:

∆φ = 2πL∆n_eff / λ    (15)

or

∆φ = 2πDL∆n_eff / (pd)    (16)
This is an important result, since there is now a relationship between the phase differ-
ence ∆φ and the difference in effective refractive index ∆n_eff, shown in Equation (15).
Furthermore, in order to determine ∆φ, all variables of Equation (16) other than the
period p are known constants during a measurement. In other words, if one can de-
termine the period of the interferogram, then it is quite easy to determine the phase
difference and therefore the difference in effective refractive index. The determination
of the period p can be accomplished with the Fourier transform, which is discussed in
great detail in section 2.4. In the specific case of the interferometric biosensor, this can
be accomplished with the FFT, section 2.4.4.
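The idea can be sketched compactly in Python (this is an illustration, not the system's LabVIEW/C implementation): given a synthetic interferogram with a known fringe frequency, the phase shift δ can be read directly from the complex Fourier coefficient at that frequency.

```python
import cmath
import math

# Illustrative synthetic interferogram parameters (not system values).
N = 512      # number of detector pixels
k = 16       # fringe periods across the detector
delta = 0.8  # phase shift to be recovered [rad]

# By cos^2(u) = (1 + cos(2u))/2, a cos^2 interferogram is a raised cosine;
# it is written directly here with k periods and phase shift delta.
signal = [0.5 * (1 + math.cos(2 * math.pi * k * n / N - delta)) for n in range(N)]

# Single-bin DFT at frequency k (an FFT would yield the same coefficient).
X_k = sum(signal[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))

recovered = -cmath.phase(X_k)
print(round(recovered, 6))  # -> 0.8
```

In practice the fringe frequency is not known a priori; it is found as the location of the dominant peak in the FFT magnitude spectrum, after which the phase is read from that bin exactly as above.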
2.3.2 Determination of Refractive Index Difference
Alternatively it is possible to re-arrange the above equations, e.g. Equation (15), for an
expression of ∆n_eff:

∆n_eff = λ∆φ / (2πL)    (17)

Again, the variables λ and L are both known constants, which shows that the conver-
sion between ∆n_eff and ∆φ is a simple multiplication by a single constant:

K = λ / (2πL)

Referring to the interferometric biosensor’s system parameters (i.e. λ = 632.8 nm and L = 5
mm), K is calculated to be ≈ 20.14×10⁻⁶, meaning that the change in effective refrac-
tive index is measured on a scale of 10⁻⁶ and can be influenced by disturbances even
on the order of 10⁻⁸.
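This conversion is simple enough to verify numerically, using the system parameters quoted above (the phase difference below is an illustrative value):

```python
import math

# K = lambda / (2*pi*L) with lambda = 632.8 nm and L = 5 mm, as in the text.
wavelength = 632.8e-9  # [m]
L = 5e-3               # flow-cell length [m]

K = wavelength / (2 * math.pi * L)
print(f"K = {K:.4e}")  # K = 2.0143e-05, i.e. ~20.14e-6 as stated

# Converting an illustrative measured phase difference via Equation (17):
delta_phi = 0.1             # [rad]
delta_n_eff = K * delta_phi
print(f"{delta_n_eff:.2e}")  # ~2.01e-06
```

The small magnitude of K makes concrete why disturbances on the order of 10⁻⁸ in refractive index are already relevant to the measurement.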
2.3.3 System Noise & Sources of Error
The measurement signal (i.e. phase difference ∆φ or effective refractive index differ-
ence ∆n_eff) always contains a certain combination of distortion effects or noise, which
are introduced by the environment, the analysis methods, and the system in which the
signal resides or originates. Noise can be defined as "any unwanted signals, random or deter-
ministic, which interfere with the faithful reproduction of the desired signal in a system"
[18]. Due to the often random nature of noise, it is normally best described through
statistical properties (e.g. mean, RMS, standard deviation, etc.). Additional represen-
tations include the use of the auto-correlation function (time domain) and its Fourier
transformation (i.e. the power spectral density).
The following are several common and possible sources of noise and signal fluctuation
in the system of the interferometric biosensor.
Laser Stability
Fox et al. [12] define the stability of a HeNe laser through several uncertainties. These
include the wavefront curvature ∆λ_WC, beam misalignment ∆λ_Align, HeNe wavelength
∆λ_HeNe and the counting resolution of the fringe pattern ∆λ_Count. The most relevant for
the interferometric biosensor is the uncertainty of the HeNe wavelength, which
is estimated in [12] to be ∆λ_Doppler/λ ≈ 3.2×10⁻⁶. This is based on the Doppler width of
the Ne emission line, but these details are outside the scope of this thesis.
Electronic Noise Sources
In electrical systems there exist many possible types of noise from electronic sources,
several of which are described below [11, 16, 18].
• Thermal noise - the random thermal motion of electrons in a conducting medium.
• Absorption noise - based on the theory of black-body radiation, whereby the
same energy absorbed by a body is radiated from that body as noise.
• Shot noise (quantum noise) - arises from the discrete nature of current in
electronic equipment and is caused by the diffusion of minority carriers and the
random generation and recombination of electron-hole pairs in semiconductor-
based devices.
• Flicker noise (1/f noise) - arises from surface imperfections in electronic de-
vices resulting from the fabrication process. It is most important at low frequencies,
where the relevant cutoff depends on the specific system (< 100 Hz, 1 kHz, 10 kHz, etc.).
• Photocurrent noise - W. Freude [16] describes a noise present in photoreceivers
(e.g. the pixels of the CCD camera). This arises due to fluctuations of the photocur-
rent around its mean value. The fluctuations are the result of shot noise, thermal
noise and a type of classical noise known as relative intensity noise (RIN).
Mechanical Noise Sources
This term encompasses the physical disturbances in the system, for example the
misalignment of system components or component construction defects. These
could result in an external difference ∆l of the propagation path. Additionally,
physical vibrations in the system could continuously alter ∆l and appear as fluctuations
in the measurement signal. Furthermore, under certain assumptions the physical shifting
of the CCD camera could mimic relatively large phase shifts of the interferogram.
Of particular interest are those noise sources contributing to random noise (e.g. flicker
noise, shot noise, etc.). A further example of random noise is Gaussian white noise. This
type of noise is independent of the spectral variable (e.g. frequency), has an average
value of zero, and its amplitudes are described by a Gaussian probability density
function with a given standard deviation. Its autocorrelation is shown in Figure 7. An
important characteristic of the autocorrelation function of random noise is the existence
of a central maximum (peak), with the surrounding values having small or zero amplitudes.
Since shifting a series of length N over its own length yields 2N−1 lag values, the central
peak is located at index N, or at zero lag if the axis is symmetric from −N to N. The
former is true in this case.
Figure 7: Gaussian white noise of 200 samples randomly generated in MATLAB, and its au-
tocorrelation. A single central peak indicates that the samples are mutually uncorre-
lated, i.e. random.
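The central-peak behaviour can be checked numerically. The following sketch (Python with NumPy here, standing in for the MATLAB computation behind Figure 7; the seed is arbitrary) generates 200 Gaussian white noise samples and locates the maximum of their autocorrelation:

```python
import numpy as np

rng = np.random.default_rng(seed=7)
x = rng.standard_normal(200)          # zero-mean Gaussian white noise

# Full autocorrelation: 2N-1 lag values, from lag -(N-1) to +(N-1)
ac = np.correlate(x, x, mode="full")

# The peak sits at zero lag, i.e. at index N-1 of the 2N-1 values
assert np.argmax(ac) == len(x) - 1

# Away from zero lag the values stay small compared with the peak
side = np.delete(ac, len(x) - 1)
print(np.max(np.abs(side)) / ac[len(x) - 1])  # well below 1 for uncorrelated samples
```

The zero-lag value equals the sum of squared samples, while every other lag sums products of uncorrelated values and therefore nearly cancels.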
In the cases of laser stability and random noise, their specific theoretical influences on
the actual measurement signal (i.e. phase difference ∆φ) are discussed in more detail
in the following section.
2.3.4 Phase Error & Uncertainty
For the determination of the optimum detection limit of the interferometric biosensor,
both the phase error and the phase uncertainty must be minimized. The improvement
of the phase error through various techniques and signal processing methods is
discussed in several papers and literature sources; however, the terms "phase error"
and "phase uncertainty" carry varying definitions in these works. This section defines
and explores the sources of error and uncertainty in the measured phase and in the
final measurement of the effective refractive index.
Phase Error
Here, the definition of the phase error from S. Nakadate [14] is used: the absolute
difference between the true phase φ and the measured phase φ̂. It is written as δφ to
avoid confusion with ∆φ in section 2.3.

δφ = |φ − φ̂|   (18)
The true phase would result from the exact determination of the interferogram's period
under ideal measurement and analysis conditions. The measured phase is then the
phase evaluated at the approximated period of the interferogram, when noise, sam-
pling and FFT-associated error sources are considered. To minimize the phase error it is
necessary to determine the precise period of the interferogram. The phase error is thus
a systematic error induced by the distortion effects of signal analysis (section 2.4.5),
which may be reduced through signal processing. Methods for this, such as windowing
and zero padding, are discussed in the following sections.
Phase Uncertainty from Random Noise
The other important term is the standard deviation of the measured phase, the "phase
uncertainty," which S. Nakadate [14] determines from the measured phase variance,
defined as:

σφ² = (1/SNR²) · [∑(k=1..N) Wk²] / [∑(k=1..N) Wk]²   (19)
Here, Wk is the chosen window function and SNR is the signal-to-noise ratio. This is a
convenient point to introduce the expression for the signal-to-noise ratio: a back-
ground influence or signal offset is subtracted from the maximum signal strength, and
the result is divided by the RMS of the total noise.

SNR = (Smax − Sbackground) / NoiseRMS   (20)
Equation (19) is calculated under the assumption that the noise present in the interfer-
ogram is white, stationary and Gaussian with zero mean. For example, a rectangular
window function (section 2.4.6) is defined by:

Wk = 1 for |k| ≤ (N−1)/2, and Wk = 0 otherwise   (21)
The standard deviation of the calculated phase, σφ, would then be:

σφ = 1 / (SNR·√N)   (22)
Notice, however, the result if a von Hann window is implemented [14]:

σφ,Hann = (1/SNR) · √(3 / (2(N−1)))   (23)
Therefore, the phase uncertainty of the calculated phase due to random noise may
be reduced by increasing the SNR of the sampled signal and by increasing the number
of sampled points N. Furthermore, the use of a special window function, such as the
von Hann, Hamming or Gaussian window, is only useful if its application increases the
SNR enough to compensate for the additional uncertainty it introduces (e.g. the von
Hann window increases σφ by a factor of 1.22). Of course, the SNR and the number of
sampled points cannot be increased indefinitely, so a compromise must be accepted.
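Equation (19) can be evaluated directly for different windows. The sketch below (Python/NumPy for illustration; SNR = 100 and N = 256 are example values) reproduces Equation (22) for the rectangular window and the ≈1.22 penalty factor of the von Hann window:

```python
import numpy as np

def phase_uncertainty(window, snr):
    """Eq. (19): sigma_phi = (1/SNR) * sqrt(sum W_k^2) / (sum W_k)."""
    return np.sqrt(np.sum(window**2)) / (snr * np.sum(window))

N, SNR = 256, 100.0
sigma_rect = phase_uncertainty(np.ones(N), SNR)      # rectangular window
sigma_hann = phase_uncertainty(np.hanning(N), SNR)   # von Hann window

# The rectangular case reduces to Eq. (22): 1/(SNR*sqrt(N))
assert np.isclose(sigma_rect, 1.0 / (SNR * np.sqrt(N)))

# The von Hann window raises sigma_phi by roughly sqrt(3/2), i.e. about 1.22
print(sigma_hann / sigma_rect)
```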
In the case of the interferometric biosensor, only a portion of the entire signal from
the CCD camera is considered and sampled (e.g. of 2000 pixels, the interferogram
comprises only, say, N = 256). An increase in N can be achieved by enlarging the
sampling window, i.e. by sampling more of the original signal. This will, however, also
alter the SNR, and there is no guarantee of an increase in the SNR, especially if more
noise is sampled. N can also be increased through oversampling. As already men-
tioned, this is only possible with a new camera with smaller pixel dimensions (the width
of the interferogram is unaltered, but if the pixels are half the size, the interferogram is
comprised of twice the number of pixels).
Example: In a best case, where both the SNR and N are, perhaps unrealistically, dou-
bled and the chosen window function is rectangular, the resulting change in the
calculated phase uncertainty due to random noise would be:

σφ,optimized = (1/(2·SNR)) · (1/√(2N)) ≈ (1/3)·σφ   (24)

With twice the amplitude SNR in the frequency domain (2·100) and twice the number of
samples (N = 2·256), σφ,optimized would be 2.08×10⁻⁴. This is a relatively small error in the
phase and it is equivalent to a resolution of ∆neff ≈ 4.2×10⁻⁹.
In summary, based on the above findings, if the phase uncertainty is mainly a product
of random noise, then the main goals of signal processing should be the following, so
that the optimum detection limit of the interferometric biosensor is reached:
1. Calculation of the true phase, to reduce the phase error
2. Increase of the SNR and, where possible, of the number of sampled points N, to
reduce the phase uncertainty
Further Sources of Phase Error
Further sources of error can be derived using Equation (15) in a re-written form:

∆φ = 2π·∆L·neff / λ   (25)
Here, it is assumed that there is no relative medium to be measured. In this case, in-
stead of resulting from a difference in the effective refractive index, a measured phase
change could be the result of varying path lengths, both internal and external to the
flow cell. First the internal case is examined.
An internal path difference would be represented as ∆L = L2 − L1, where L is the prop-
agation path length in the flow cell. If, additionally, there were no physical change in
phase velocity, then vp = c and the absolute effective refractive index neff = 1. The
expression for the measured phase change would become:

∆φ = (2π/λ)·(neff·L2 − neff·L1) = 2π·∆L / λ   (26)
Ideally the system construction should ensure that ∆L is zero, since ∆L is limited to the
distance between the optical gratings. In an extreme case where construction defects
are considered, an assumed path difference could be 320 nm, which represents a full
grating period (e.g. one half of a grating period at both the input and output grat-
ings). However, since the fluctuations in the phase change are of interest, possible
errors from other sources can be observed in combination with the path difference.
One such source could be an instability of the wavelength, δλ, which could mimic a
change in the measured phase. This can be obtained through the differentiation of
Equation (26).
d(∆φ)/dλ = 2π·∆L / λ²   (27)

Under certain assumptions Equation (27) can be re-written as:

δ(∆φ) = (2π·∆L / λ²)·δλ   (28)

Upon rearranging this becomes:

δ(∆φ) ≈ 2π·(∆L/λ)·(δλ/λ)   (29)
The result of this derivation, Equation (29), can also be found in [14] in a similar form.
Example: Assume a path difference of ∆L = 1×10⁻⁶ m, which is much larger than
the discussed 320 nm. Furthermore, assume a wavelength stability of the laser light of
∆λ/λ = 3.2×10⁻⁶ for a Helium-Neon laser (λ = 632.8 nm) [12]. Equation (29) then predicts
a measured phase change of ≈ 3.0×10⁻⁵. Upon conversion this is a change in the
effective refractive index of ≈ 6.0×10⁻¹⁰. As calculated in section 2.3.2, a change in
the effective refractive index is of the order of 10⁻⁶. It is clear that the effects of laser
instability and internal path differences are too small to interfere with the uncertainty of
the measurements of the interferometric biosensor.

Note: In [14] a laser instability on the order of ∆λ/λ ≈ 10⁻⁷ is assumed. The above
example assumes an instability roughly 10 times worse.
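The internal-path estimate can be reproduced in a few lines. The sketch below (Python for illustration; the values are those assumed in the example above, and the conversion to ∆neff follows the form of Equation (25)):

```python
import math

wavelength = 632.8e-9     # HeNe wavelength [m]
dL = 1e-6                 # assumed internal path difference [m]
rel_stability = 3.2e-6    # delta_lambda / lambda for the HeNe laser [12]
L = 5e-3                  # propagation path in the flow cell [m]

# Eq. (29): phase fluctuation caused by the wavelength instability
d_phi = 2 * math.pi * (dL / wavelength) * rel_stability

# Conversion to effective refractive index: delta_n = d_phi * lambda / (2 pi L)
d_neff = d_phi * wavelength / (2 * math.pi * L)

print(d_phi, d_neff)  # on the order of 3e-5 and 6e-10, as estimated in the text
```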
Now the external path difference is considered and derived. The re-written form of (15)
would now be:

∆φ = (2π/λ)·(nair·l2 − nair·l1) = 2π·nair·∆l / λ   (30)
Here l represents the external propagation path (i.e. between the first beam-forming
optic and the CCD camera). The refractive index of interest is now external to the flow
cell and is that of air, nair. Following the same derivation as in the internal case:

d(∆φ)/dλ = 2π·nair·∆l / λ²   (31)
Again, this implies:

δ(∆φ) ≈ 2π·nair·∆l·δλ / λ²   (32)

Substituting this into Equation (17) results in a direct expression for δ(∆neff):

δ(∆neff) = (δλ/λ)·(∆l/L)·nair   (33)
Example: With the same values as before, δλ/λ = 3.2×10⁻⁶, an external path difference
∆l = 100×10⁻⁶ m, a propagation path L = 5 mm and nair ≈ 1, the resulting δ(∆neff) is
≈ 6.4×10⁻⁸. This would represent a relatively large disturbance, since this value is on
the same order as the standard deviation of the measured phase. However, an exter-
nal path difference of even 100×10⁻⁶ m could be an overestimate. Therefore, it is
important to further investigate the causes in the system which could lead to an exter-
nal path difference. From the mere fact that fluctuations exist in the measurement
signal, it is reasonable to assume that any external path difference would not remain
constant (e.g. at 100×10⁻⁶ m), but would also fluctuate. A hypothesis here could be
mechanical vibrations of the system, which could contribute to the continual fluctua-
tion of the external path difference itself.
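Equation (33) with the example values gives the quoted figure directly; a quick check (Python for illustration):

```python
rel_stability = 3.2e-6   # delta_lambda / lambda
dl = 100e-6              # assumed external path difference [m]
L = 5e-3                 # propagation path [m]
n_air = 1.0

# Eq. (33): uncertainty in the measured effective refractive index
d_neff = rel_stability * (dl / L) * n_air
assert abs(d_neff - 6.4e-8) < 1e-12
```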
In summary, the combination of laser instability and external propagation path differ-
ences may be considered as legitimate sources of fluctuations in the phase and in the
final measurement signal of the effective refractive index. Furthermore, it may be pos-
sible to link changes in the external propagation path difference to the physical effect
of vibrations, which could slightly alter the relative position of components in the biosensor.
2.4 Fourier Analysis & Signal Processing
As mentioned in the previous chapter, the extraction and detection of information from
an environment or a system can be performed with sensors. The topic of sensors has
already been discussed in some detail, focusing mainly on the area of biosensors.
Biosensors have been described as electrical and/or optical systems; the output of
these sensors therefore takes the form of electrical and optical signals. However, once
a signal has been detected, can it immediately be identified as information? More
specifically, does the detected signal contain any useful information?

Signal analysis is a mathematical science for the transformation of signals in order to de-
termine if, what, and how much information a signal might contain. The transform tech-
niques, probability theory and many other mathematical procedures of signal analysis
form the fundamental structure of all communication theory. The main method of sig-
nal analysis applied in the development of the interferometric biosensor is the Fourier
transformation. It is described in detail in the following sections, together with possible
sources of error and the signal processing techniques for their compensation.
2.4.1 Fourier Representations of Functions
A function can be represented approximately over a given interval by a linear com-
bination of members of an orthogonal set of functions gn(x). This representation cannot
always be made an equality, hence the word "approximately":

f(x) ≈ ∑(n=−∞..∞) cn·gn(x)   (34)

Analogous to the dot product, where the result for two orthogonal vectors is zero, or-
thogonal functions have the property that, over a given interval, a particular operation
between two distinct members of the set yields zero. Two functions g1(x) and g2(x) are
orthogonal over the interval t0 to t0 + 1/f0 if:

⟨g1(x)|g2(x)⟩ = ∫(t0 → t0+1/f0) g1(x)·g2(x) dx = 0   (35)

It is stated here without proof that the following set of harmonic time functions is a
complete orthogonal set over the interval t0 to t0 + 1/f0:

cos(2πn f0t), sin(2πn f0t) where 0 ≤ n < ∞
For convenience, when working with time functions, the opportunity is taken to define
the function period T:
T = 1/f0 (36)
Here the previously introduced term "signal" is taken up again: s(t) is a function repre-
senting a time signal. Applying Equation (34) and the orthogonal harmonic time func-
tions, any time function, or signal, s(t) can be represented by:

s(t) = a0 + ∑(n=1..∞) [an·cos(2πn f0t) + bn·sin(2πn f0t)]   (37)

for t0 < t < t0 + T
The above expansion is known as the Fourier series. If the index n may now be any
positive or negative integer, it is possible to consider a complete set of complex
harmonic exponentials. Over one period, with the same time interval t0 to t0 + T, and
in the form of Euler's identity, one can write:

e^(j2πn f0t) = cos(2πn f0t) + j·sin(2πn f0t)   (38)
As stated, the series expansion in Equation (37) applies for the time interval t0 to t0 + T,
and the signal s(t) can therefore be expressed over this interval as a linear combination
with complex coefficients. With this considered, Equation (37) can be stated in the
following complex form:

s(t) = ∑(n=−∞..∞) cn·e^(j2πn f0t)   (39)
Through multiplication of Equation (39) by e^(−j2πn f0t) and integration of both sides,
the complex coefficients cn are given by:

cn = (1/T) ∫(t0 → t0+T) s(t)·e^(−j2πn f0t) dt   (40)
With these coefficients one is able to plot the complex Fourier spectrum, which nor-
mally takes the form of the magnitude and phase of cn plotted over the range of n
multiplied by the fundamental frequency f0 (i.e. cn vs. n·f0). In this manner a different
representation of the original signal s(t) has been achieved, now with respect to
frequency.
In the case of the interferometric biosensor under development, the signal of interest
takes the form of an interferogram and is a periodic signal. The analysis of the interfero-
gram's signal in the frequency domain is crucial for the determination of the period of
the interferogram, which in turn corresponds to the change in phase velocity of the
optical beam as it propagates through a biological sample. These relationships were
explained in section 2.3.
2.4.2 The Continuous Fourier Transformation
Now that the basis for the Fourier representation of periodic time signals has been
briefly explained, the question arises whether or not it is possible to achieve a Fourier
representation for non-periodic signals. Since the Fourier Transformation is very widely
known and studied (see [1, 7, 11]), the details of its derivation are not covered in this
thesis. Instead, the determination of the Fourier Transform is explained in the following
manner.
Non-periodic signals can be thought of as particular instances of periodic signals
whose period approaches infinity. In inverse relation to the period, the fundamental
frequency then approaches zero, and the separation of the harmonics becomes
smaller. In the limit as f0 approaches zero, the summation of the Fourier series
representation of s(t) becomes an integral.
As mentioned in the previous section, the time signal s(t) could be represented through
the complex Fourier spectrum with reference to a different variable, namely frequency.
For convenience the new representation in frequency is written as S(f). Analogous to
a function, where a set of rules substitutes one number for another, a transform is a set
of rules that substitutes one function for another. In this case the transformation is
written as:

S(f) = ∫(−∞ → ∞) s(t)·e^(−j2πft) dt   (41)
With t as a dummy variable of integration, the transform above assigns to every func-
tion of t a new function of f. The above equation is the Fourier transform. Given the
Fourier transform of a function of time, the original time function can always be
uniquely recovered, meaning that either s(t) or S(f) uniquely characterizes the func-
tion. This recovery is accomplished with the inverse Fourier transform:
s(t) = ∫(−∞ → ∞) S(f)·e^(j2πft) df   (42)
2.4.3 Discrete Signals
Simply defined, discrete signals are signals that are not continuous in time. They are
comprised of values at defined intervals along an axis, for example a time axis (signal
value sampled every second) or a spatial axis (signal value sampled every unit inter-
val). Sampling is the process of converting a continuous axis into a discrete axis by
considering only values at defined sampling intervals. For clarification: continuous sig-
nals have a value for every infinitesimal interval along an axis. With pen and paper,
analytical solutions for continuous signals are feasible, but such analysis often has little
practical use in the development of systems. Signals must often be analyzed and
processed in discrete steps, because the infinitesimal point-by-point analysis of a con-
tinuous signal would quite literally take forever.
Fortunately, due to Nyquist's sampling theorem, not all points are needed. Knowing
enough values at discrete time points makes it possible to fill in the curve between
these points precisely; the limitation lies in having "enough" of these discrete values.
In the case of time, Ts represents the sampling period. The Nyquist theorem states that
Ts must be less than 1/(2·fmax), where fmax is the maximum frequency in the signal [11]:

Ts < 1/(2·fmax) or fs > 2·fmax

In other words, the sampling frequency fs must be greater than twice the maximum
frequency fmax of the signal being sampled; this rate, 2·fmax, is known as the Nyquist
rate. The proof of Nyquist's sampling theorem is not shown here. Relating this to the
interferometric biosensor, the discrete axis is not a time axis but a spatial axis repre-
sented by pixels. Each pixel can be thought of as an individual detector and therefore
has a corresponding intensity value. The sampling requirement is fulfilled when the
period of the interferogram is greater than 2 pixels.
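Aliasing, the failure mode when this condition is violated, is easy to demonstrate: sampled once per pixel, a fringe pattern with a period below 2 pixels is indistinguishable from one with a longer period. A short sketch (Python/NumPy, with hypothetical frequencies in cycles per pixel):

```python
import numpy as np

n = np.arange(100)            # pixel index = integer sampling grid
f_high = 0.6                  # 0.6 cycles/pixel -> period 1.67 px, above the Nyquist limit
f_alias = 1.0 - f_high        # disguises itself as 0.4 cycles/pixel (period 2.5 px)

x_high = np.cos(2 * np.pi * f_high * n)
x_alias = np.cos(2 * np.pi * f_alias * n)

# Sampled at integer positions, the two signals are identical
assert np.allclose(x_high, x_alias)
```

The identity follows from cos(2π·0.6·n) = cos(2π·n − 2π·0.4·n) = cos(2π·0.4·n) for integer n, which is exactly how a too-fine fringe would masquerade as a coarser one on the CCD.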
2.4.4 The Discrete Fourier Transformation
With the determination that signals are processed as discrete signals, it might seem that
the continuous Fourier transform (CFT) is of less use. This is not so; it simply obtains a dis-
crete form. An important mathematical tool for the software implementation of signal
processing and analysis is the discrete Fourier transform (DFT). For a discrete signal
x(nTs) it is possible to form a corresponding periodic signal xp(nTs) with period NTs, as
was done for the Fourier representation of signals:

xp(nTs) = ∑(r=−∞..∞) x(nTs + r·NTs)   (43)
The DFT of xp(nTs) is then defined as:

Xp(j·2πk/(NTs)) = ∑(n=0..N−1) xp(nTs)·e^(−j2πkn/N)   (44)

N ≡ total number of sampled points
Ts ≡ sampling period
n ≡ index of discrete points in the signal
r ≡ number of repeated periods
Notice the strong similarity of the DFT to the CFT. The DFT likewise has an inverse trans-
formation, the IDFT, which is not covered here. In general Xp(j·2πk/(NTs)) is a complex
function and is often written in the simplified notation X(k):

X(k) = A(k)·e^(jφ(k))  ⇒  Xp(j·2πk/(NTs)) = A(2πk/(NTs))·e^(jφ(2πk/(NTs)))

where A(k) = |X(k)| and φ(k) = arg[X(k)]. From this the magnitude and phase of the
signal can be drawn.
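Equation (44) translates directly into code. The following sketch (Python/NumPy for illustration) implements the DFT sum literally and checks it against NumPy's FFT routine:

```python
import numpy as np

def dft(x):
    """Direct evaluation of Eq. (44): X[k] = sum over n of x[n] * exp(-j*2*pi*k*n/N)."""
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.exp(-2j * np.pi * k * n / N)) for k in range(N)])

rng = np.random.default_rng(seed=1)
x = rng.standard_normal(16)

# The direct sum and the FFT yield the same spectrum
assert np.allclose(dft(x), np.fft.fft(x))

# Magnitude A(k) and phase phi(k) of the spectrum
A, phi = np.abs(dft(x)), np.angle(dft(x))
```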
Fast Fourier Transformation
Of more importance here is the fast Fourier transform (FFT). Again, this is not covered
in great detail, but it is necessary to know that the FFT is a set of powerful algorithms
used to efficiently calculate the DFT.

In brief, the DFT involves N complex multiplications and N−1 complex additions for
each value of X(k). Excluding the additions, the number of multiplications over the
entire signal X(k) is N². This alone is quite a large computational load.
The FFT, on the other hand, requires a total of (N/2)·log₂N multiplications over the en-
tire signal. Continuing the example, if a signal has N = 100 values, the direct DFT
requires 10000 multiplications, while the FFT requires merely 350 (with log₂N rounded
up to the next integer), i.e. almost 97% fewer calculations. Note the special condition
for the FFT implied by log₂N: the signal length N must be a power of 2. Nevertheless,
the FFT can be applied to any finite-duration signal by appending an appropriate
number of trailing zeros, extending the signal length to the next power of 2. This is
known as zero-padding and is discussed later in greater detail.
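The operation counts in this comparison can be tabulated for a few signal lengths (a Python sketch; log₂N is rounded up to the next integer, as required for the radix-2 FFT):

```python
import math

for N in (100, 256, 1024):
    mults_dft = N * N                               # direct DFT multiplications
    mults_fft = (N // 2) * math.ceil(math.log2(N))  # (N/2) * log2(N), rounded up
    saving = 100 * (1 - mults_fft / mults_dft)
    print(N, mults_dft, mults_fft, round(saving, 1))

# For N = 100: 10000 vs. 350 multiplications, i.e. 96.5% fewer
```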
The FFT plays a crucial role in the processing and analysis of the interferogram sig-
nal of the biosensor. Since the interferogram is sampled many times per second, the
computer and software are placed under high demands. Therefore, as one step to
maximize algorithmic and computational efficiency, the FFT, not the direct DFT, is
performed, in combination with zero-padding where necessary.
2.4.5 Distortion Effects in Signal Analysis
The following effects are possible sources of error associated with the sampling and
Fourier transformation of the interferogram. Recall that random noise also exists in the
Fourier representation, but it is not discussed here, since it was already introduced in
section 2.3.3. This section gives only a brief introduction to some distortion effects; for
further information please refer to [11].
Aliasing
When the sampling frequency is too low and the Nyquist theorem is not fulfilled, the
result is aliasing. The name derives from the fact that higher frequencies disguise
themselves in the form of lower frequencies. The sampling frequency fs is directly re-
lated to the CCD camera and its pixel size. The CCD camera is discussed in section
3.1, and the fulfillment of Nyquist's theorem for the interferometric biosensor is dealt
with in section 4.2.
Quantization Noise
Quantization is the conversion of an analog signal (e.g. a light interference pattern) to
its digital approximation. The approximation arises from rounding errors due to a finite
encoding resolution, i.e. a finite number of quantization levels. These errors are consid-
ered noise, defined as the difference between the quantized signal and the original
signal, and give rise to inaccurate approximations of the original signal's amplitude.
Quantization is performed by the CCD camera and its electronics. As stated by M.
Kujawinska and J. Wojciak [10], quantization noise is negligible for a resolution greater
than 6 bits. Therefore, the 12-bit resolution of the CCD camera excludes quantization
noise as a major factor.
Oscillations & Leakage
Since only a finite portion of a signal can ever be analyzed, truncation error is always
present. This truncation leads to oscillations or ripples in the frequency domain when
the signal is analyzed by a Fourier transformation. This behaviour is commonly known
as the Gibbs phenomenon, and the ripples are known as Gibbs oscillations [1, 15].

Since signals of finite duration also have finite energy, these ripples represent an un-
wanted spread of a portion of the signal's energy over surrounding elements of a
domain (e.g. the frequency domain); in other words, the energy leaks out to the sur-
roundings. This is commonly referred to as leakage (e.g. DFT leakage). The simplified
explanation is the attempt to transform discontinuities, as occur at "the corners" of a
rectangular window. A solution is then to use a non-rectangular truncation or window
function, which is discussed in section 2.4.6.
2.4.6 Signal Processing
By means of signal processing it is often possible to minimize the effect of many distor-
tions and to convert signals into forms from which the wanted information is more easily
obtained or extracted. The following sections offer a theoretical explanation of how
system-related noise and signal analysis distortion effects may be dealt with to provide
an optimum measurement signal. The actual algorithms and their implementation are
explained in further detail in section 3.2 and in A.2.
Window Functions
One method for the reduction of Gibbs oscillations is the use of non-rectangular win-
dows without discontinuities. Many window functions exist, and they are frequently
used for filtering applications, especially in digital filtering. Two very common windows,
mentioned here together due to their strong similarity, are the von Hann and the
Hamming windows:

wH(nT) = α + (1−α)·cos(2πn/(N−1)) for |n| ≤ (N−1)/2, and 0 otherwise   (45)

von Hann window: α = 0.50
Hamming window: α = 0.54
Figure 8: Rectangular window, von Hann window (α = 0.50) and Hamming window (α = 0.54)
Windowing plays a role in the final determination of the period of the interferogram by
reducing leakage. This reduction of leakage reveals a more precise shape of the fre-
quency peak in the FFT, as Figure 9 attempts to illustrate.

Based on the findings in section 2.3.4, window functions may also play a key role in the
reduction of the phase uncertainty caused by random noise. This is only true, however,
if their implementation increases the SNR significantly enough to overcome the addi-
tional uncertainty they introduce, as implied by Equation (19).
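Equation (45) with its centered index n is equivalent, after shifting n by (N−1)/2, to the window routines found in numerical libraries. The sketch below (Python/NumPy for illustration; N = 257 is an arbitrary odd example length) builds both windows from Equation (45) and confirms the match:

```python
import numpy as np

def window_eq45(N, alpha):
    """Eq. (45) with centered index |n| <= (N-1)/2, shifted to run over 0..N-1."""
    n = np.arange(N) - (N - 1) / 2.0
    return alpha + (1 - alpha) * np.cos(2 * np.pi * n / (N - 1))

N = 257  # odd length keeps the center sample at n = 0
assert np.allclose(window_eq45(N, 0.50), np.hanning(N))  # von Hann
assert np.allclose(window_eq45(N, 0.54), np.hamming(N))  # Hamming
```

The shift uses cos(θ − π) = −cos(θ), which turns the centered cosine of Equation (45) into the α − (1−α)·cos(2πk/(N−1)) form used by the library routines.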
Figure 9: Possible representation of a leakage-distorted peak and its window-corrected peak
in the FFT domain, which have sampled maximums at different frequencies.
Zero Padding
Zero padding is a signal processing method to extend the length of a causal signal
or spectrum by appending zeros to the end. The main goal of such an operation is to
adjust the signal or spectrum length such that the number of samples is a power of 2.
When this is accomplished, the signal can be analyzed using the FFT instead of the less
efficient direct DFT. Given any function f(x), zero-padding can be represented by the
following relationship:

Zeropadding[f(x)] = f(x) for |x| < N/2, and 0 otherwise   (46)
In addition to its usefulness with the FFT, zero padding is also often implemented as a
method for spectral interpolation. In combination with the Fourier theorems, zero
padding, for example in the time domain of periodic functions, yields an ideal band-
limited interpolation in the frequency domain [1]. Zero-padding is, however, not a
method to increase spatial resolution and hence is not an oversampling method; in
the case of the interferometric biosensor, oversampling could only take the form of a
new camera with smaller pixels. Due to the efficiency of the FFT, its use with zero-
padded signals is a widely practiced and practical method for interpolating the
spectra of signals with finite durations.
In general, under the ideal conditions of continuous signals, the shape of the peak
indicating the frequency, or alternatively the period, of the interferogram should be
Gaussian in nature. From Figure 10 it is obvious that the DFT of the unprocessed inter-
ferogram signal is a very poor approximation of this Gaussian. Since the DFT of a
discrete signal in one domain is a discrete signal in the other domain, the information
between the discrete points is not immediately known. This means that the true maxi-
mum and its corresponding spatial frequency can lie between two discrete points,
except in cases of extreme coincidence.
Figure 10: MATLAB simulation of a signal peak of the DFT of an interferogram signal with 230
points, and the theoretical representation of a continuous Fourier transform.
Again, since the interferometric biosensor is designed to be a highly sensitive measur-
ing device, the attempt must be made to retrieve the precise period of the interfero-
gram. From Figure 10 the estimated frequency of the peak is 0.23045, yielding a spatial
period of nearly 4.34 pixels for the interferogram.

After the implementation of zero padding, Figure 11 shows the result of the FFT with
2048 points. In comparison with Figure 10, the Gaussian nature of the peak is more evi-
dent. The estimated frequency of the peak is now 0.2315, yielding a spatial period of
4.32 pixels for the interferogram.
It is important to note that the signal-to-noise ratio (SNR) of a zero-padded signal is
distorted relative to the true signal SNR [1]. Therefore, zero padding involves a com-
promise between:
1. Precise determination of the period of the interferogram.
2. Precise determination of the SNR in the frequency domain.
Figure 11: MATLAB simulation of a signal peak of the DFT of an interferogram signal sampled
with 230 points and zero-padded to 2048 points.
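The effect can be reproduced with a synthetic interferogram. The sketch below (Python/NumPy; a pure cosine with a true period of 4.32 pixels stands in for the measured fringe pattern) samples 230 points, locates the FFT peak with and without zero-padding to 2048 points, and shows the padded estimate landing much closer to the true period:

```python
import numpy as np

true_period = 4.32                       # pixels, the value sought
n = np.arange(230)                       # 230 sampled pixels
signal = np.cos(2 * np.pi * n / true_period)

def peak_frequency(x, nfft):
    """Spatial frequency (cycles/pixel) at the maximum of the FFT magnitude."""
    spectrum = np.abs(np.fft.rfft(x, nfft))
    return np.argmax(spectrum) / nfft

f_raw = peak_frequency(signal, 230)      # coarse grid: 230 frequency bins
f_padded = peak_frequency(signal, 2048)  # fine grid after zero-padding

print(1 / f_raw, 1 / f_padded)  # approx. 4.34 vs. 4.32 pixels, as in Figures 10 and 11

# The zero-padded estimate is closer to the true period
assert abs(1 / f_padded - true_period) < abs(1 / f_raw - true_period)
```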
Some questions arise: Should zero-padding be implemented? If yes, should it be ap-
plied only to comply with the FFT criterion, or should spectral interpolation be pursued
as well? And how much zero-padding should be carried out? Theoretically the inter-
polation could be carried on indefinitely, but the price is an increased processing load
and calculation time. Should 100 zeros be appended, or 1 million? The latter question
has an almost immediate answer: Figure 12 shows the convergence of the period
calculation to an acceptable solution for the example case in this section.
Figure 12: Solution convergence for the period of an interferogram.
Noise Averaging
The standard deviation is a statistic used as a measure of the dispersion or variation
in a data distribution. Quite often in measurements these variations are considered to
corrupt the data distribution (e.g. the signal) with noise; the standard deviation there-
fore increases when a signal becomes noisier. It is the task of filtering in signal process-
ing to reduce the effects of this noise by means of attenuation, elimination and other
common practices, such as averaging. Two methods used in the processing of the
measurement signal of the interferometric biosensor are briefly described below.
Block Averaging
This form of averaging requires the definition of a buffer size n; the values in a full buffer
are summed and divided by the buffer size. In this way the mean value of a small sec-
tion, or block, of a larger sequence is obtained. The calculated mean values then form
a mean value sequence x̄k of the original sequence xi. Figure 13 illustrates this
graphically.
Figure 13: Graphical representation of the averaging of a data series in blocks of size n.
Compared to other averaging methods, block averaging delivers its results slowly:
a mean value can only be calculated after n measurements, the next mean only
after the next n measurements, and so on. However, this method can be useful
when it is implemented to reduce the number of data points in a series, thereby saving
memory if the mean-value sequence is recorded in a data file. Furthermore, it is often
the case that the mean-value sequence x̄_k has a lower standard deviation than the
original sequence x_i. This will be explored in section 4.2.4.
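A minimal sketch of block averaging in plain Python, with an illustrative buffer size and noise level, shows both effects: the mean-value sequence is shorter than the original and has a lower standard deviation:

```python
import random
import statistics

def block_average(x, n):
    """Mean of each full, non-overlapping block of n values:
    the mean-value sequence of the original sequence."""
    return [sum(x[i:i + n]) / n for i in range(0, len(x) - n + 1, n)]

random.seed(1)
# A constant signal corrupted by Gaussian noise (illustrative values)
signal = [5.0 + random.gauss(0.0, 0.2) for _ in range(1000)]
means = block_average(signal, 10)      # 100 block means from 1000 samples

# The block means form a shorter, visibly less noisy sequence
print(len(means), statistics.pstdev(signal), statistics.pstdev(means))
```

For uncorrelated noise the standard deviation of the block means drops roughly by a factor of the square root of the buffer size.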
Moving Average Filter
In addition to, not in replacement of block averaging, would be moving-average filter-
ing. This is one technique for the recursive averaging and smoothing of a measurement
sequence. The method is demonstrated in Figure 14.
Figure 14: Graphical representation of the moving average principle.
This method depends only on the last calculated average and the newest measured
value. Therefore, the data buffer must be filled only once, instead of the continuous
emptying and re-filling of the buffer as in block averaging. Upon derivation, which
is not shown here, the latest averaged value in the moving sequence is:

x̄_k = x̄_{k−1} + (1/n) [x_k − x_{k−n}]   (47)
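The recursion can be sketched directly in code. This is a plain-Python illustration with an arbitrary window size and data; each new mean is the previous mean plus (newest − oldest)/n, exactly as in Equation (47):

```python
from collections import deque

def moving_average(stream, n):
    """Moving-average filter after Equation (47): once the buffer has been
    filled once, each new mean is the previous mean plus (x_k - x_{k-n})/n."""
    buf = deque(maxlen=n)
    avg = 0.0
    out = []
    for x in stream:
        if len(buf) < n:
            buf.append(x)
            if len(buf) == n:             # buffer filled for the first time
                avg = sum(buf) / n
                out.append(avg)
        else:
            oldest = buf[0]
            buf.append(x)                 # deque drops the oldest value
            avg += (x - oldest) / n       # recursive update, Equation (47)
            out.append(avg)
    return out

print(moving_average([1, 2, 3, 4, 5, 6, 7, 8], 4))   # [2.5, 3.5, 4.5, 5.5, 6.5]
```

After the one-time fill, each update costs a single subtraction and division, regardless of the window size n.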
3 Interferometric Sensor System
This section provides more information about the individual hardware components and
software algorithms implemented in the interferometric biosensor. Overviews, sample
calculations and derivations may be provided for certain components or software
algorithms, but this section will not explore the fundamental principles of how these
components (e.g. lenses) or software packages (e.g. MATLAB) operate.
3.1 Detailed System Configuration
In the introduction of this thesis a system overview was provided in Figure 1 as an aid
to the further understanding of the principles of operation and theory involved in the
interferometric biosensor. Figure 15 illustrates the actual system construction used for
the test measurements, which are presented in section 4. Not all labeled components
will be discussed. Instead, several important components (e.g. double slit, optical chip),
which are not visible in Figure 15, are discussed in greater detail.
Figure 15: Actual system construction.
3.1.1 Lasers & Light Sources
Over the development of the interferometric biosensor the main light source has been
a Helium-Neon laser. Measurements are currently being taken to implement a new
light source, a super-luminescent diode. The choice of the light source depends greatly
on the optical characteristics of the optical chip, since the coupling properties are
related to, for example, the wavelength of the light source.
Helium-Neon Laser
The current light source for the interferometric biosensor is a Helium-Neon laser with a
wavelength of 632.8 nm and a minimum power output of 2.0 mW. The laser in use is
not a controlled light source, but Helium-Neon lasers generally have high wavelength
stability. The worst-case assumption from section 2.3.4 was a stability ∆λ/λ of
1×10−6. Furthermore, to avoid fluctuations in the output, power measurements were
performed only after a warm-up time of > 30 minutes. These and other laser properties
are summarized in the following table:
Minimum power 2.33 mW
Wavelength 632.8 nm
Beam diameter 0.9 mm
Total length 272 mm
Table 1: Basic parameters of the Helium-Neon laser used in the development of the interfero-
metric biosensor.
Super-Luminescent Diodes (SLD)
A super-luminescent diode is neither a laser nor a laser diode, and it also differs from a
conventional LED. The specifics of SLDs are beyond the scope of this report, but it is
generally known that SLDs have a shorter coherence length than lasers. SLDs emit
light consisting of amplified spontaneous emission, and their beam divergence is
comparable to that of Fabry-Perot laser diodes. Due to their high temperature sensitivity,
SLDs must also be operated under temperature control; the resulting output stability is a
main advantage of choosing an SLD light source. Others include the compactness of the
light source and the avoidance of patent infringement with similar systems using the
HeNe laser light source.
3.1.2 Lenses & Beam Forming
The beam forming of the interferometric biosensor is quite simple and consists of two
cylindrical lenses with focal lengths of -50 mm and 200 mm, respectively. The first lens
(f1 = -50 mm) widens the HeNe laser beam along the vertical axis. The second lens
(f2 = 200 mm) compensates for this widening and serves to focus the beam at its focal
length. The resulting beam profile is elliptical in nature and closely resembles a line
to the eye.
The main purpose of forming the beam is to better fit the beam profile to the dimensions
of the grating of the optical chips. Without beam forming much of the laser light, with
an original beam diameter of 0.9 mm, would not impinge upon the grating, which
has a smaller width of 0.5 mm. Figure 16 shows an example beam profile after beam
forming. In this case the light source was an SLD with its corresponding beam-forming
optical configuration, but the underlying purpose and results are similar for the HeNe
laser.
Figure 16: Beam profile of a SLD source after beam forming. The intensity distribution is shown to
be nearly Gaussian in both the vertical and horizontal axes.
3.1.3 Flow Cells
It is inside the flow cell that the sample materials first come into contact with the
chip surface and the evanescent sensing field. The main properties of this component
are summarized in the following table:
Material Sylgard 170
Width ca. 1 mm
Length ca. 7 mm
Volume ca. 7 µl
Flow cell separation 1.3 mm
Table 2: Fundamental properties of the flow cells.
Sylgard 170 is a black, silicone-based fluid, which is formed and heat treated into a soft
and flexible material. Of importance is the separation of the flow cells (middle to middle)
of 1.3 mm. This distance d must be exactly matched by the separation of the double slits.
In addition, the velocity of the material and fluid drawn through these flow cells during
a measurement is 1 µl per second. Figure 17 is an illustration of an actual flow cell, as
well as a simulated design of the entire flow cell in its mount.
Figure 17: A photograph of a demonstration flow cell made of transparent silicone and the com-
puter design of the flow cell in its mount.
3.1.4 Optical Chips
The key components of the interferometric biosensor are the optical chips, since almost
the entire design and development of the system is based on their implementation and
characteristics. The chips are supplied by Unaxis Optics. Table 3 provides a summary of
the relevant chip information. In addition, some information about a protective layer is
given. This represents an alteration to the supplied optical chips and the arguments for
this are discussed in the next subsection. Figure 18 illustrates this information and serves
as a reference for the description of the protective layers and their function.
Substrate AF 45 (n=1.52) 16×48×0.7 mm3
Waveguide Ta2O5 (n=2.1) thickness = 150 nm
Protective layer SiO2 (n=1.46) thickness ≈ 510 nm
Grating period 320 nm (depth ≈ 12 nm)
Coupled-wave polarization TE (parallel to grating)
Coupling angle ≈ 3°
Penetration of the evanescent field 27.5 nm
Power in the evanescent field 10.6%
Table 3: Fundamental properties of the optical chips supplied by Unaxis Optics.
Figure 18: An optical chip with protective layers and the placement of the flow cell.
Protective Layers
A protective layer is an additional layer of glass (SiO2) above the region of the optical
gratings. As the name implies, these layers protect the optical gratings, but the main
advantage of these glass layers is to ensure that the evanescent field never comes into
contact with the silicone flow cell and, therefore, does not measure the refractive index
of the silicone. Otherwise, it could not be ensured that the flexible silicone would have
the same shape, thickness and surface distribution in both flow cells (e.g. due to
pressure changes). The advantage of relative measuring would be severely compromised
due to the differing conditions of the individual flow cells.
Instead, the soft silicone wraps and forms to the shape of the glass protective layers. The
evanescent wave then encounters only the glass upon exiting the flow cells, and does
so in both flow cells. Furthermore, the solid glass layer neither shifts nor alters shape due
to internal or external factors. The evanescent field is therefore assumed to undergo
the same change in phase velocity in both cells through its interaction with the glass
protective layer. This better maintains the goal of relative measuring. Figure 18
demonstrates the propagation of the light and evanescent field in reference to the
position of the flow cells and protective layers.
Scattering & Spreading
Since the interferometric biosensor is based on the sensing of material on the surface of
the chips, it is also important to understand the influence of unwanted foreign material,
such as dust and dirt, on this surface. If some foreign matter were present in the beam's
propagation path, the effect could be a scattering along the surface, within the
waveguide, as demonstrated by Figure 19.
The scattered light would then propagate and meet the output coupling at many
non-perpendicular angles. Instead of a beam profile concentrated at a point, the
result would resemble a bent line as the light spreads into a curved form. This could
cause light of weakened intensity to enter the double slit at undesired angles, which
might be detected as interference patterns at neighbouring pixel regions of the CCD
camera. The results of such tests are presented in section 4.1.
Figure 19: Possible scattering effects due to foreign matter on the chip surface.
3.1.5 Double Slit
The double slit is the component which has undergone the most alteration and re-
design in the interferometric biosensor, since its dimensions depend on both the flow
cell and the optical chip. Figure 20 illustrates the version of the double slit used for
the majority of measurements presented in this thesis.
Figure 20: A non-proportional illustration of the double slit film used both for spatial filtering (36
µm) and for inducing diffraction and interference (30 µm).
Remember that the slit separation d of 1.3 mm matches the separation of the flow
cells. This is of critical importance to ensure that only light which has passed through
the flow cells (i.e. the sensing regions), and has therefore undergone a relative change
in phase velocity, is captured. Since the width of the flow cells (ca. 1 mm) is much
larger than the width of the output double slit (30 µm), there is some room remaining
for adjustments.
Also of importance is the determination of the slit width w. Using a relationship of single-
slit diffraction, Equation (49), it is possible to approximate the size of the interference
region of an interferogram. In A.1 the size of the interference region for a double slit is
approximated as the width of the interferogram B minus the slit separation d. Table
4 shows the results of several sample calculations for slit widths of 30, 50 and 80 µm.
Slit Width [µm] Period [µm] Width [mm] Overlap [mm] Contained Periods
w p = λD/d B = 2λD/w B − d (B − d)/p
30 48.68 4.22 2.92 60.0
50 48.68 2.53 1.23 25.3
80 48.68 1.58 0.28 5.8
Table 4: Sample calculations: Number of periods contained in an interferogram detected at a
distance D=100 mm, separation d=1.3 mm and λ= 632.8 nm.
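The entries of Table 4 follow directly from the two relations in its header row. A short script (plain Python, using the parameter values quoted in the caption) reproduces them:

```python
# Reproduce the sample calculations of Table 4
lam = 632.8e-9   # wavelength [m]
D = 100e-3       # double slit to camera distance [m]
d = 1.3e-3       # slit separation [m]

rows = []
for w in (30e-6, 50e-6, 80e-6):
    p = lam * D / d        # fringe period, p = lambda*D/d
    B = 2 * lam * D / w    # interferogram width, B = 2*lambda*D/w
    rows.append((w * 1e6, p * 1e6, B * 1e3, (B - d) * 1e3, (B - d) / p))
    print("w=%3.0f um  p=%.2f um  B=%.2f mm  overlap=%.2f mm  periods=%.1f"
          % rows[-1])
```

The printed values agree with the table: halving the slit width roughly doubles the interferogram width and the number of usable fringe periods.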
It is not surprising that a smaller slit width at the output double slit results in a wider
interferogram. A wider interferogram can then be sampled by more points N. A larger
N also fulfills an earlier requirement for the reduction of random noise components
(section 2.3.4). In this case any slit width between 30 µm and 50 µm would be sufficient
for analysis with the CCD camera at a distance of 10 cm. Furthermore, only the 30 µm
slit offers an acceptable slit width if the distance between the slit and the CCD camera
is decreased. The number of periods is important for the visual approximation of the
interferogram's contrast, but this is not discussed in detail in this section.
Note: An acceptable interferogram width or overlap region is one that can be sampled
with a large number of points N, where N is a power of 2 to fulfill the FFT requirement:
for example, N=256 and a CCD camera pixel width of 14 µm would require the
interferogram to be ≈ 3.58 mm wide.
A double slit serves a dual purpose in the current design of the interferometric biosensor.
The input slit has already been mentioned in section 1.2. Since the sensing region of
interest lies only within the flow cells, there is no need for coupled light in other regions
of the optical chip. Such light gives rise to unwanted scattering effects, which are
discussed in greater detail in section 3.1.4. Therefore, the slit width w′ of the input double
slit should be as small as possible, while remaining larger than that of the output double
slit (i.e. w′ > 30 µm). A width of 36 µm was chosen, since it is the next largest slit width
producible with the production methods of the slit presented in Figure 20.
The early production methods of the double slit involved the development and exposure
of high-quality (e.g. lithographic) film based on vector-graphic image formats.
Such processes eventually resulted in double slits of very good quality (i.e. sharp slit
edges, excellent contrast and few spots in the slit regions), but such slits could not be
reliably produced at the desired dimensions (e.g. 30 µm). The winning solution, which
was quick, free and resulted in good slit quality, was high-quality laser printing onto
transparent film. Now that the slit widths have proved sufficient for testing, future
double slits will be laser cut from metal.
Figure 21 serves the multiple purpose of demonstrating the chip placement, the input
slit's function, and the introduction of the main mount for the chip and the flow cell.
For additional understanding the anticipated beams from the input double slit are
shown, as well as their input coupling, propagation path along the waveguide, output
coupling and eventual arrival at the output double slit.
3.1.6 Optical Chip Mount
The current mount for the optical chip has been designed for the quick replacement or
adjustment of the optical chip between tests. Basically, the double-slit film sits on small
pins and is fixed against the mount surface to prevent it from shifting. The optical chip,
which also sits upon the same pins, is lightly pressed against the double-slit film by the
soft silicone flow cell. Small braces surrounding the flow cell come into contact with the
chip mount to prevent the application of excessive force against the chip.
In reference to Figure 21, the light beam enters from the left into the labeled opening,
within which there is an adjustable mirror. This places the final degree of freedom almost
directly in front of the chip and the optical grating. Upon reflection the beam is filtered
into two parallel beams, which are coupled into the chip waveguide after impinging
upon the optical gratings. The beams propagate over the 9 mm distance separating
the optical gratings. Any diffraction resulting from the input double slit is disregarded,
since this distance is so small. The light then impinges upon the output grating and is
diffracted by the second double slit to form the interferogram, which is detected by a
CCD camera.
Figure 21: View of the chip mount demonstrating the relative double slit and chip placement.
3.1.7 CCD Camera
The device used for the current detection of the interferogram is a Stresing ILX 511 CCD
camera. The output from CCDs, or charge-coupled devices, is a series of analog pulses,
which represent the intensity distribution at a series of discrete locations, or pixels [8].
Greatly simplified, the operation of a CCD is controlled by a clock, which determines
how long a pixel collects the charge resulting from the intensity of an optical signal.
This charge is then transferred and converted into a measurable voltage signal, often
with the intent of further computer analysis. The relevant specifications for the current
CCD camera are listed in the following table.
Active sensor length 28.7 mm
Pixel area ca. 14 × 200 µm2
Max. exposure time ca. 6 seconds
Resolution 12 bit (4096 levels)
Clock speed 2.5 MHz
Table 5: Basic characteristics of the Stresing ILX 511 CCD camera with 2048 pixels.
3.1.8 Pump
The pump is of course responsible for the introduction and flow of all samples and fluids
into the flow cells. In actuality, the pump does not pump the samples at all. Instead the
samples are drawn from their containers directly into the flow cells. Pumping would
first involve the filling of the syringes, which are visible in Figure 22, and this may
give rise to unwanted reactions and mixtures before the samples enter the flow cells.
The pump is programmable and various parameters, most importantly the pumping
or drawing velocity, are adjustable. The chosen velocity is 1 µl per second. Since a
syringe has a volume of 250 µl, one pumping or drawing cycle lasts approximately 4.2
minutes. In the current configuration the intake tubes have a diameter of 0.3 mm and
the output tubes have a diameter of 0.8 mm.
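The quoted cycle time follows directly from the syringe volume and the drawing velocity (taking the syringe volume as 250 µl, which is consistent with a 4.2-minute cycle at 1 µl/s):

```python
# Drawing-cycle duration from syringe volume and draw velocity
volume_ul = 250.0        # syringe volume [ul]
rate_ul_per_s = 1.0      # drawing velocity [ul/s]
cycle_min = volume_ul / rate_ul_per_s / 60.0
print(round(cycle_min, 1))   # -> 4.2 minutes per drawing cycle
```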
As visible in Figure 22, the pump resides outside of the optical system. This helps to
reduce the effects of pump vibrations on the system, and allows the pump to be
operated and samples to be changed without disruption of the optical system.
Figure 22: Photograph of the system pump and several beakers of water.
3.2 System Algorithms
The theory and principle relationships required for determining the measurement signal
(i.e. phase and ∆n_eff) have been discussed in detail in section 2. This section provides
a brief overview of the software realization of the presented theory.
3.2.1 FFT-Based Measurement Algorithm
In reference to Figure 23, the signal x(i) is the intensity array from the CCD camera (i.e.
the interferogram signal) for the current processing step. At this point truncation to N
samples and filtering (e.g. windowing) can be implemented. With zero padding, the
interferogram signal can be analyzed by the FFT algorithm. The interferometric biosensor
software then automatically determines the frequency of the maximum peak, which
upon conversion yields the period of the interferogram (p = N/fmax).
Next, the phase value at the calculated fmax is obtained. This phase value represents
the phase of the interferogram for this processing step. With Equation (17), the value
for ∆n_eff is obtained. Both values (phase and ∆n_eff) are recorded and plotted. This
entire process is repeated for every interferogram detected by the CCD camera,
resulting in phase and ∆n_eff trends.
Figure 23: The FFT-based algorithm for the determination of ∆n_eff from the analysis of the inter-
ferogram x(i).
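The processing chain of Figure 23 can be sketched as follows. This is an illustrative plain-Python version: a naive DFT stands in for the FFT, the synthetic interferogram and its parameters are assumptions, no windowing or zero padding is applied, and the phase sign convention may differ from the LabVIEW implementation:

```python
import cmath
import math

def analyze_interferogram(x):
    """One processing step of the FFT-based algorithm (sketch): locate the
    strongest spectral peak of the interferogram and read off its period
    (p = N / f_max) and its phase."""
    N = len(x)
    mean = sum(x) / N
    centred = [v - mean for v in x]        # remove DC so the fringe peak dominates
    best_k, best_coeff = 1, 0j
    for k in range(1, N // 2):             # positive frequencies only
        coeff = sum(centred[i] * cmath.exp(-2j * math.pi * k * i / N)
                    for i in range(N))
        if abs(coeff) > abs(best_coeff):
            best_k, best_coeff = k, coeff
    return N / best_k, cmath.phase(best_coeff)

# Synthetic interferogram: 256 pixels, fringe period 8 pixels, phase 0.5 rad
N, p_true, phi_true = 256, 8.0, 0.5
x = [1 + math.cos(2 * math.pi * i / p_true + phi_true) for i in range(N)]
period, phase = analyze_interferogram(x)
print(period, phase)    # recovers the period and phase of the fringes
```

Tracking `phase` over successive camera frames, and converting each value with Equation (17), yields the ∆n_eff trend described above.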
3.2.2 Fourier-Coefficient Correlation Algorithm
This algorithm is an extended application of the original algorithm proposed by A.
Brandenburg [2]. Essentially, it is a Fourier transformation broken into several steps.
Overall the algorithm is a form of the complex Fourier series from Equation (39).
Therefore, the actual Fourier analysis is not performed with the FFT, which is a
disadvantage of this algorithm. Referring to Figure 24, this should not be confused with
the one-time FFT of the input signal x(i) in the calibration stage. An advantage of this
algorithm, however, is the added control at each stage of calculation.
Again the input signal is truncated and possibly filtered to obtain x(i). Notice that the
later steps of the algorithm require the period and envelope width of the interferogram.
Therefore, in the calibration stage, x(i) is possibly zero padded and the FFT yields the
period of the interferogram, exactly as in the FFT-based algorithm. If the parameters
of the double slit are known (i.e. slit separation d and slit width w), then the
interferogram width B can be calculated from the period p: B = 2pd/w.
Figure 24: The extended Fourier-based algorithm for the final determination of ∆n_eff.
Also performed only once, in the calibration stage, is the generation of test signals, or
basis functions, for their later correlation with the interferogram signal. The basis
functions are Fourier based (i.e. comprised of cos and sin terms). The "index" parameter
refers to the spatial position, or pixel, i. The 1+cos term is a form of the chosen window
function (e.g. von Hann). Important to note is that the implementation of a specialized
window function at this point is a valid option, but not a requirement.
All subsequent steps are repeatedly performed for every new instance of x(i). First the
sin and cos components are multiplied with x(i) and summed to yield the imaginary and
real components of the transformed interferogram signal. The phase is found from the
negative arctangent of the imaginary over the real component, and ∆n_eff is calculated
with Equation (17), as in the FFT-based algorithm.
Not shown in the above algorithm is the capability to reconstruct the envelope function
of the interferogram, based on the point-to-point multiplication of the shifted basis
functions with the interferogram signal [2]:
S_k = ∑_{i=1}^{N} x_i s_{i−k}        C_k = ∑_{i=1}^{N} x_i c_{i−k}

Therefore, the maximum of the distribution T(k) marks the maximum of the interfero-
gram envelope:

T(k) = S_k² + C_k²
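A sketch of this envelope reconstruction in plain Python: the Gaussian envelope, the basis-function support M, the von Hann window and all parameter values are illustrative assumptions, not the values used in the measurement program:

```python
import math

# Synthetic interferogram: fringes of period p under a Gaussian envelope
N, p = 400, 8.0
mu, sigma = 220.0, 40.0                     # envelope centre and width (assumed)
x = [math.exp(-((i - mu) / sigma) ** 2) * math.cos(2 * math.pi * i / p)
     for i in range(N)]

# Windowed sin/cos basis functions of support M (von Hann window, assumed)
M = 64
win = [0.5 * (1 - math.cos(2 * math.pi * j / (M - 1))) for j in range(M)]
s = [win[j] * math.sin(2 * math.pi * j / p) for j in range(M)]
c = [win[j] * math.cos(2 * math.pi * j / p) for j in range(M)]

# Correlate the shifted basis functions with the interferogram
T = []
for k in range(N - M):
    Sk = sum(x[k + j] * s[j] for j in range(M))
    Ck = sum(x[k + j] * c[j] for j in range(M))
    T.append(Sk * Sk + Ck * Ck)             # T(k) = S_k^2 + C_k^2

k_max = max(range(len(T)), key=T.__getitem__)
centre = k_max + M / 2                      # estimated envelope maximum
print(centre)                               # lies near the true centre mu
```

Because T(k) combines the squared sin and cos correlations, it is largely insensitive to the fringe phase and traces out the envelope alone.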
3.3 Software Development Environments
Based on the methods and concepts introduced in section 2.3, it would be possible
to carry out measurements with simple measurement software. This, however, is not
practical for a number of reasons. Firstly, the sources of distortion discussed in section
2.4.5 often disrupt the measurement signal. Secondly, the interferometric biosensor
under development is designed to be a highly sensitive measuring device, and each
step of the signal analysis should be optimized with the appropriate algorithms.
This has required the use of sophisticated development software. The main software
environments and languages used for the development of the interferometric biosensor
are: LabVIEW, Visual C++ for C-programmed DLLs (Dynamic Link Libraries), and
MATLAB. This section provides a brief overview of each of these environments and their
role in the final software product, the interferometric measurement program.
3.3.1 LabVIEW
Laboratory Virtual Instrument Engineering Workbench (LabVIEW) is developed and
distributed by National Instruments. LabVIEW is said to be programmed in G, as it is a
graphical programming language. It operates on the principle of hierarchies, and
each entity in a hierarchy is referred to as a Virtual Instrument, or VI. In turn these VIs
are composed of graphical elements or other VIs. Due to its graphical nature LabVIEW
uses the concept of nodes to connect each element and VI throughout the program
or hierarchical structure.
Each VI has two associated levels. One is the front panel: in simple terms the GUI
with which the user interacts (i.e. graphs, buttons, controls, etc.). The second level is
the block diagram, which contains all of the elements and connections that control the
visible graphic elements. On this level it is also possible to integrate C and C++ code for
the execution of custom commands or intense computations beyond the capabilities
of LabVIEW's pre-programmed routines. The common practice of linking LabVIEW and
C is through the use of function libraries known as DLL files; however, it is LabVIEW itself
which represents the backbone of the interferometric measurement program.
3.3.2 Visual C++ & Dynamic Link Libraries
Dynamic Link Libraries, or DLLs, are common in several operating systems and allow
the sharing of code modules or function libraries amongst applications. DLLs are
thus forms of compiled code, which are linked to applications only when a running
program invokes a function call to the DLL. This offers the capability to share one DLL
between many applications. Since DLLs are compiled code modules, it is not possible
to debug their operation without third-party software with a sophisticated debugging
environment.
Visual C++ was chosen for its ability to establish this dynamic link between a running
LabVIEW program and an associated DLL in a special Windows debugging environment.
In addition, Visual C++ contains templates for general 32-bit DLL creation, as well
as many expandable features, such as version-control software. With all of these
features, the C-based DLLs are responsible for the repetitive and intense number
crunching required for the signal analysis in the interferometric biosensor.
3.3.3 MATLAB
The Matrix Laboratory environment (MATLAB) is developed and distributed by Math-
Works Inc. At its essence MATLAB is a shell-based, sophisticated matrix calculator, which
can be extended through toolboxes. Toolboxes are themselves small programs or
function libraries, often written in MATLAB or C. Since MATLAB is an open environment,
it has grown as specialized toolboxes have been created, for example for signal
processing, control systems and symbolic mathematics.
MATLAB has become a standard for the simulation of mathematical processes and of
linear and non-linear systems. Therefore, it has been possible to simulate many of the
physical phenomena associated with the interferometric biosensor (i.e. interferometry,
scattering, reflections, etc.) for a deeper understanding of measurement results beyond
the capabilities of LabVIEW. Furthermore, MATLAB represents an almost ideal testing
environment for many signal-analysis and signal-processing algorithms before their
implementation in LabVIEW. Thus, MATLAB has no direct role in the interferometric
measurement program, but has proved critical in researching its performance
characteristics.
3.4 Interferometric Measurement Program
Due to the complexity of demonstrating the LabVIEW graphical code as a complete
element, all of the function-critical algorithms and additional features of the interfer-
ometric measurement program are shown in A.2 with an accompanying description.
Since the LabVIEW algorithms often depend on the DLL files programmed in C, several
small C examples are also shown.
Shown in this section are examples of the program screens, which are visible to and
usable by the program operator, along with a description of the main features and
functionality of the measurement program. Since the graphical interfaces for the
FFT-based and the stepwise-Fourier algorithms are so similar, only the front-panel
views of the FFT-based software are demonstrated. Figure 25 represents the program
view when a measurement is in progress.
Figure 25: The main measurement screen for the FFT-based analysis algorithm.
Labeled in Figure 25 are some highlighted functions of the interferometric measure-
ment program, which are described below in more detail.
• Interferogram - display of the interferogram, with the added ability to record any
instant of the interferogram in a separate data file. This is useful for external simu-
lation and demonstration.
• Simultaneous measurements - an interferogram can be analyzed with varying set-
tings (e.g. signal processing and noise averaging) for comparison. The measurement
data file records the values: time step, minimum of the interferogram, maximum,
middle value, contrast, laser temperature, system temperature, two voltages from
photo diodes, two phase values, two converted ∆n_eff values, and the spatial fre-
quency of the interferogram from the FFT.
• Measurement views - there are 4 measurement views (2 phase measurements
and their corresponding ∆n_eff conversions), which can be viewed at varying
scales.
• Noise averaging - adjustment of the block-averaging and smoothing factors for
the overall noise reduction of the measurement signal.
• Parameter capturing - system parameter inputs (left) are recorded in a separate
data file associated with the recorded measurement data.
The second main screen of the interferometric measurement program is the display
of the magnitude and phase response from the FFT of the interferogram. This is the
source of the program's ability to provide 2 independent phase measurements, since
2 separate and pseudo-parallel FFT algorithms are performed on the chosen regions of
the interferogram.
In principle, the user must supply the program with the first pixel defining the begin-
ning of the interferogram (e.g. from Figure 25 a possible input would be pixel 950). The
user then inputs the length of the interferogram region (e.g. 350), which marks the last
pixel (e.g. 1300). The FFT is then performed in this region (e.g. 950 to 1300). The rest
of the analysis is performed automatically, without additional user input. Figure 26
demonstrates the analysis of the interferogram in Figure 25.
Figure 26: The second main program screen: magnitude and phase response from the FFT of the
interferogram in Figure 25.
4 Measurements & Results
Now that the most relevant theory, techniques and algorithms have been introduced,
it is possible to present their results and to discuss their benefits and successes. All
measurements have been carried out with the Helium-Neon laser light source at a
wavelength of 632.8 nm. This section presents the results from system measurements
(i.e. optical chip characteristics), signal processing of the interferogram (i.e. windowing,
zero padding) and noise analysis. Finally, test measurements with glycerin are presented
at the end of the section as a demonstration of the capabilities of the interferometric
biosensor.
4.1 System & Chip Measurements
During the initial development stages many components and devices underwent thor-
ough testing and measurement. As the key component, the optical chips also under-
went such testing. This section presents the observations and measurements of the
coupling efficiency and scattering properties of the optical chips.
4.1.1 Coupling Efficiency
In the early construction of the interferometric biosensor the addition of a protective
layer (glass, SiO2) over each of the optical gratings was considered. In addition to the
factors presented in section 3.1.4, one of the main deciding factors for their implemen-
tation was the comparison of the coupling efficiency of the chip with and without
these protective layers. Therefore, the corresponding power measurements were
carried out, and Figure 27 provides an illustration of the measurement locations
and quantities.
The results from Table 6 are quite easy to interpret. Since the protective layer has a
higher refractive index than air, the sum of the direct reflections (R1 + R2 + ...Rn) is
greater. This also explains the smaller transmission power T, since some portion of the
newly reflected light is coupled into the excited mode, seen as the output powers
A1 & A2. A1 is the output power of interest, since it is this light that will eventually be de-
tected by the CCD camera. Despite amounting to only 3% of the total power, even this
must often be filtered down by a factor of 10-1000 so that the camera does not saturate.
The conclusion is that the coupling efficiency of the chips is sufficient, even with the
additional protective layers.
Figure 27: Input and output coupling of the laser power at various locations for chips with and
without an additional protective layer (glass, SiO2).
                     Without Protective Layer   With Protective Layer
                     (% of laser power P0)      (% of laser power P0)
  P0                 2.33 mW                    2.33 mW
  R1 + R2 + ... + Rn 60.1%                      62.0%
  T                  31.2%                      30.1%
  A1                 2.6%                       3.0%
  A2                 2.9%                       3.5%
Table 6: Percentage of total laser power at the input and output coupling locations of Figure 27,
measured with a Helium-Neon laser, wavelength 632.8 nm, polarization TE, input-coupling angle ≈ 3.4°.
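The power budget behind Table 6 can be checked with a short sketch. This is an illustrative calculation only: it assumes the percentages refer to the measured input power P0, and the dictionary keys and the neutral-density (OD) interpretation of the 10-1000 attenuation factor are additions not taken from the thesis.

```python
import math

# Power budget for the chip WITH protective layer, values from Table 6.
P0_mW = 2.33
with_layer_pct = {"R_sum": 62.0, "T": 30.1, "A1": 3.0, "A2": 3.5}

# Absolute power at each coupling location
powers_mW = {k: P0_mW * pct / 100.0 for k, pct in with_layer_pct.items()}
print(round(powers_mW["A1"], 4))   # ≈ 0.0699 mW headed towards the camera

# Power unaccounted for by the measured channels (scattering, absorption)
residual_pct = 100.0 - sum(with_layer_pct.values())
print(round(residual_pct, 1))      # ≈ 1.4 %

# A 10x-1000x attenuation corresponds to neutral-density filters of OD 1-3
for factor in (10, 1000):
    print(f"factor {factor} -> OD {math.log10(factor):.0f}")
```

So even the "power of interest" A1 amounts to only about 70 µW, which still has to be reduced by one to three orders of magnitude before the CCD camera.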
4.1.2 Scattering & Spreading Measurements
In response to the concerns expressed in section 3.1.4, steps were taken to observe
and record the scattering and spreading of the coupled beam. If the scattering were in fact
due to the interaction of the coupled beam with foreign material on the chip surface,
then the following should hold:
• An optical chip with good coupling characteristics and without curved
spreading of the output beam could be forced to show signs of spreading if for-
eign material were intentionally placed on the chip surface.
• If the scattering of light on the chip surface is a localized effect, then there should
be propagation paths along which no or less foreign material is encountered. The result
should be the elimination or reduction of the output spreading.
54
Development of an Interferometric Biosensor
Development of an Interferometric Biosensor
Development of an Interferometric Biosensor
Development of an Interferometric Biosensor
Development of an Interferometric Biosensor
Development of an Interferometric Biosensor
Development of an Interferometric Biosensor
Development of an Interferometric Biosensor
Development of an Interferometric Biosensor
Development of an Interferometric Biosensor
Development of an Interferometric Biosensor
Development of an Interferometric Biosensor
Development of an Interferometric Biosensor
Development of an Interferometric Biosensor
Development of an Interferometric Biosensor
Development of an Interferometric Biosensor
Development of an Interferometric Biosensor
Development of an Interferometric Biosensor
Development of an Interferometric Biosensor
Development of an Interferometric Biosensor
Development of an Interferometric Biosensor
Development of an Interferometric Biosensor
Development of an Interferometric Biosensor
Development of an Interferometric Biosensor
Development of an Interferometric Biosensor
Development of an Interferometric Biosensor
Development of an Interferometric Biosensor
Development of an Interferometric Biosensor
Development of an Interferometric Biosensor
Development of an Interferometric Biosensor
Development of an Interferometric Biosensor
Development of an Interferometric Biosensor
Development of an Interferometric Biosensor
Development of an Interferometric Biosensor
Development of an Interferometric Biosensor
Development of an Interferometric Biosensor
Development of an Interferometric Biosensor
Development of an Interferometric Biosensor
Development of an Interferometric Biosensor
Development of an Interferometric Biosensor
Development of an Interferometric Biosensor
Development of an Interferometric Biosensor
Development of an Interferometric Biosensor
Development of an Interferometric Biosensor
Development of an Interferometric Biosensor
Development of an Interferometric Biosensor
Development of an Interferometric Biosensor
Development of an Interferometric Biosensor
Development of an Interferometric Biosensor

More Related Content

What's hot

Implementation of a Localization System for Sensor Networks-berkley
Implementation of a Localization System for Sensor Networks-berkleyImplementation of a Localization System for Sensor Networks-berkley
Implementation of a Localization System for Sensor Networks-berkleyFarhad Gholami
 
Maxime Javaux - Automated spike analysis
Maxime Javaux - Automated spike analysisMaxime Javaux - Automated spike analysis
Maxime Javaux - Automated spike analysisMaxime Javaux
 
MACHINE LEARNING METHODS FOR THE
MACHINE LEARNING METHODS FOR THEMACHINE LEARNING METHODS FOR THE
MACHINE LEARNING METHODS FOR THEbutest
 
Dissertation_of_Pieter_van_Zyl_2_March_2010
Dissertation_of_Pieter_van_Zyl_2_March_2010Dissertation_of_Pieter_van_Zyl_2_March_2010
Dissertation_of_Pieter_van_Zyl_2_March_2010Pieter Van Zyl
 
Olanrewaju_Ayokunle_Fall+2011
Olanrewaju_Ayokunle_Fall+2011Olanrewaju_Ayokunle_Fall+2011
Olanrewaju_Ayokunle_Fall+2011Ayo Olanrewaju
 
Seismic Tomograhy for Concrete Investigation
Seismic Tomograhy for Concrete InvestigationSeismic Tomograhy for Concrete Investigation
Seismic Tomograhy for Concrete InvestigationAli Osman Öncel
 
Real-time and high-speed vibrissae monitoring with dynamic vision sensors and...
Real-time and high-speed vibrissae monitoring with dynamic vision sensors and...Real-time and high-speed vibrissae monitoring with dynamic vision sensors and...
Real-time and high-speed vibrissae monitoring with dynamic vision sensors and...Aryan Esfandiari
 
Computer security using machine learning
Computer security using machine learningComputer security using machine learning
Computer security using machine learningSandeep Sabnani
 

What's hot (17)

Implementation of a Localization System for Sensor Networks-berkley
Implementation of a Localization System for Sensor Networks-berkleyImplementation of a Localization System for Sensor Networks-berkley
Implementation of a Localization System for Sensor Networks-berkley
 
feilner0201
feilner0201feilner0201
feilner0201
 
Maxime Javaux - Automated spike analysis
Maxime Javaux - Automated spike analysisMaxime Javaux - Automated spike analysis
Maxime Javaux - Automated spike analysis
 
thesis
thesisthesis
thesis
 
MACHINE LEARNING METHODS FOR THE
MACHINE LEARNING METHODS FOR THEMACHINE LEARNING METHODS FOR THE
MACHINE LEARNING METHODS FOR THE
 
M2 - Graphene on-chip THz
M2 - Graphene on-chip THzM2 - Graphene on-chip THz
M2 - Graphene on-chip THz
 
exjobb Telia
exjobb Teliaexjobb Telia
exjobb Telia
 
Dissertation_of_Pieter_van_Zyl_2_March_2010
Dissertation_of_Pieter_van_Zyl_2_March_2010Dissertation_of_Pieter_van_Zyl_2_March_2010
Dissertation_of_Pieter_van_Zyl_2_March_2010
 
論文
論文論文
論文
 
phd-thesis
phd-thesisphd-thesis
phd-thesis
 
Ee380 labmanual
Ee380 labmanualEe380 labmanual
Ee380 labmanual
 
Olanrewaju_Ayokunle_Fall+2011
Olanrewaju_Ayokunle_Fall+2011Olanrewaju_Ayokunle_Fall+2011
Olanrewaju_Ayokunle_Fall+2011
 
Seismic Tomograhy for Concrete Investigation
Seismic Tomograhy for Concrete InvestigationSeismic Tomograhy for Concrete Investigation
Seismic Tomograhy for Concrete Investigation
 
Real-time and high-speed vibrissae monitoring with dynamic vision sensors and...
Real-time and high-speed vibrissae monitoring with dynamic vision sensors and...Real-time and high-speed vibrissae monitoring with dynamic vision sensors and...
Real-time and high-speed vibrissae monitoring with dynamic vision sensors and...
 
MS_Thesis
MS_ThesisMS_Thesis
MS_Thesis
 
Sona project
Sona projectSona project
Sona project
 
Computer security using machine learning
Computer security using machine learningComputer security using machine learning
Computer security using machine learning
 

Similar to Development of an Interferometric Biosensor

Masters Thesis - Ankit_Kukreja
Masters Thesis - Ankit_KukrejaMasters Thesis - Ankit_Kukreja
Masters Thesis - Ankit_KukrejaANKIT KUKREJA
 
Project report on Eye tracking interpretation system
Project report on Eye tracking interpretation systemProject report on Eye tracking interpretation system
Project report on Eye tracking interpretation systemkurkute1994
 
Integrating IoT Sensory Inputs For Cloud Manufacturing Based Paradigm
Integrating IoT Sensory Inputs For Cloud Manufacturing Based ParadigmIntegrating IoT Sensory Inputs For Cloud Manufacturing Based Paradigm
Integrating IoT Sensory Inputs For Cloud Manufacturing Based ParadigmKavita Pillai
 
Masters' Thesis - Reza Pourramezan - 2017
Masters' Thesis - Reza Pourramezan - 2017Masters' Thesis - Reza Pourramezan - 2017
Masters' Thesis - Reza Pourramezan - 2017Reza Pourramezan
 
Analysis and Classification of ECG Signal using Neural Network
Analysis and Classification of ECG Signal using Neural NetworkAnalysis and Classification of ECG Signal using Neural Network
Analysis and Classification of ECG Signal using Neural NetworkZHENG YAN LAM
 
Master Thesis Overview
Master Thesis OverviewMaster Thesis Overview
Master Thesis OverviewMirjad Keka
 
High Performance Traffic Sign Detection
High Performance Traffic Sign DetectionHigh Performance Traffic Sign Detection
High Performance Traffic Sign DetectionCraig Ferguson
 
Aspect_Category_Detection_Using_SVM
Aspect_Category_Detection_Using_SVMAspect_Category_Detection_Using_SVM
Aspect_Category_Detection_Using_SVMAndrew Hagens
 
MACHINE LEARNING METHODS FOR THE
MACHINE LEARNING METHODS FOR THEMACHINE LEARNING METHODS FOR THE
MACHINE LEARNING METHODS FOR THEbutest
 
Au anthea-ws-201011-ma sc-thesis
Au anthea-ws-201011-ma sc-thesisAu anthea-ws-201011-ma sc-thesis
Au anthea-ws-201011-ma sc-thesisevegod
 
Low Power Context Aware Hierarchical System Design
Low Power Context Aware Hierarchical System DesignLow Power Context Aware Hierarchical System Design
Low Power Context Aware Hierarchical System DesignHoopeer Hoopeer
 
Master_Thesis_Jiaqi_Liu
Master_Thesis_Jiaqi_LiuMaster_Thesis_Jiaqi_Liu
Master_Thesis_Jiaqi_LiuJiaqi Liu
 
aniketpingley_dissertation_aug11
aniketpingley_dissertation_aug11aniketpingley_dissertation_aug11
aniketpingley_dissertation_aug11Aniket Pingley
 
Neural Networks on Steroids
Neural Networks on SteroidsNeural Networks on Steroids
Neural Networks on SteroidsAdam Blevins
 

Similar to Development of an Interferometric Biosensor (20)

Masters Thesis - Ankit_Kukreja
Masters Thesis - Ankit_KukrejaMasters Thesis - Ankit_Kukreja
Masters Thesis - Ankit_Kukreja
 
Project report on Eye tracking interpretation system
Project report on Eye tracking interpretation systemProject report on Eye tracking interpretation system
Project report on Eye tracking interpretation system
 
Integrating IoT Sensory Inputs For Cloud Manufacturing Based Paradigm
Integrating IoT Sensory Inputs For Cloud Manufacturing Based ParadigmIntegrating IoT Sensory Inputs For Cloud Manufacturing Based Paradigm
Integrating IoT Sensory Inputs For Cloud Manufacturing Based Paradigm
 
thesis_report
thesis_reportthesis_report
thesis_report
 
Masters' Thesis - Reza Pourramezan - 2017
Masters' Thesis - Reza Pourramezan - 2017Masters' Thesis - Reza Pourramezan - 2017
Masters' Thesis - Reza Pourramezan - 2017
 
wronski_ugthesis[1]
wronski_ugthesis[1]wronski_ugthesis[1]
wronski_ugthesis[1]
 
Analysis and Classification of ECG Signal using Neural Network
Analysis and Classification of ECG Signal using Neural NetworkAnalysis and Classification of ECG Signal using Neural Network
Analysis and Classification of ECG Signal using Neural Network
 
Master Thesis Overview
Master Thesis OverviewMaster Thesis Overview
Master Thesis Overview
 
High Performance Traffic Sign Detection
High Performance Traffic Sign DetectionHigh Performance Traffic Sign Detection
High Performance Traffic Sign Detection
 
Aspect_Category_Detection_Using_SVM
Aspect_Category_Detection_Using_SVMAspect_Category_Detection_Using_SVM
Aspect_Category_Detection_Using_SVM
 
MACHINE LEARNING METHODS FOR THE
MACHINE LEARNING METHODS FOR THEMACHINE LEARNING METHODS FOR THE
MACHINE LEARNING METHODS FOR THE
 
Au anthea-ws-201011-ma sc-thesis
Au anthea-ws-201011-ma sc-thesisAu anthea-ws-201011-ma sc-thesis
Au anthea-ws-201011-ma sc-thesis
 
Thesis small
Thesis smallThesis small
Thesis small
 
Low Power Context Aware Hierarchical System Design
Low Power Context Aware Hierarchical System DesignLow Power Context Aware Hierarchical System Design
Low Power Context Aware Hierarchical System Design
 
Master_Thesis_Jiaqi_Liu
Master_Thesis_Jiaqi_LiuMaster_Thesis_Jiaqi_Liu
Master_Thesis_Jiaqi_Liu
 
main
mainmain
main
 
aniketpingley_dissertation_aug11
aniketpingley_dissertation_aug11aniketpingley_dissertation_aug11
aniketpingley_dissertation_aug11
 
Neural Networks on Steroids
Neural Networks on SteroidsNeural Networks on Steroids
Neural Networks on Steroids
 
AAPM-2005-TG18.pdf
AAPM-2005-TG18.pdfAAPM-2005-TG18.pdf
AAPM-2005-TG18.pdf
 
Fulltext02
Fulltext02Fulltext02
Fulltext02
 

Development of an Interferometric Biosensor

  • 1. Development of an Interferometric Biosensor Master Thesis Robert MacKenzie Submitted to Institute for High-Frequency and Quantum Electronics (IHQ) Universität Karlsruhe (TH), Germany Carried out at Fraunhofer Institute for Physical Measurement Technology (IPM) Freiburg, Germany October 31, 2003
  • 2. Declaration With this statement I ensure that the submitted thesis is a product of my individual work, except for those aids, materials and assistances known to my supervisor. Furthermore, I have acknowledged the use of all information, results and work of others with exact and complete references. Karlsruhe, 31. October, 2003 Robert MacKenzie
  • 3. Acknowlegements Firstly, I would like to acknowledge Prof. Dr. Wolfgang Freude and express my thanks to him for the permission to perform my research external to the University of Karlsruhe. The communication and coordination of my work and presentations with him has been seamless. Secondly, I owe a great deal of thanks and gratitude to Dr. Bernd Schirmer, my project supervisor and mentor for the duration of this project. Many intelligent suggestions, in- teresting experiments, and volumes of patience and encouragement have helped me to thoroughly enjoy every moment of our working time together. His professional- ism, criticism, approachability and good taste in music have also been paramount to the successful completion of this thesis.
  • 4. Contents 1 Introduction 1 1.1 Biosensors & Label-Free Detection . . . . . . . . . . . . . . . . . . . . . . . . . 2 1.2 System Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 2 Fundamental Theory 5 2.1 Evanescent-Field Sensing Technology . . . . . . . . . . . . . . . . . . . . . . 5 2.2 Interferometry & Diffraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 2.2.1 Young’s Interferometer . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 2.2.2 Single-Slit Diffraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 2.2.3 Double-Slit Diffraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 2.3 System Principles & Relationships . . . . . . . . . . . . . . . . . . . . . . . . . 11 2.3.1 Determination of Phase Difference . . . . . . . . . . . . . . . . . . . . 12 2.3.2 Determination of Refractive Index Difference . . . . . . . . . . . . . . 13 2.3.3 System Noise & Sources of Error . . . . . . . . . . . . . . . . . . . . . . 13 2.3.4 Phase Error & Uncertainty . . . . . . . . . . . . . . . . . . . . . . . . . . 15 2.4 Fourier Analysis & Signal Processing . . . . . . . . . . . . . . . . . . . . . . . . 21 2.4.1 Fourier Representations of Functions . . . . . . . . . . . . . . . . . . . 21 2.4.2 The Continuous Fourier Transformation . . . . . . . . . . . . . . . . . . 23 2.4.3 Discrete Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24 2.4.4 The Discrete Fourier Transformation . . . . . . . . . . . . . . . . . . . . 25 2.4.5 Distortion Effects in Signal Analysis . . . . . . . . . . . . . . . . . . . . . 26 2.4.6 Signal Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27 3 Interferometric Sensor System 34 3.1 Detailed System Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . 34 3.1.1 Lasers & Light Sources . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35 3.1.2 Lenses & Beam Forming . . . . . . . . . . . . . . . . . . 
. . . . . . . . . 36 3.1.3 Flow Cells . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37 3.1.4 Optical Chips . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38 3.1.5 Double Slit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40 3.1.6 Optical Chip Mount . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42 3.1.7 CCD Camera . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43 3.1.8 Pump . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44 3.2 System Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45 3.2.1 FFT-Based Measurement Algorithm . . . . . . . . . . . . . . . . . . . . 45 3.2.2 Fourier-Coefficient Correlation Algorithm . . . . . . . . . . . . . . . . 46 i
  • 5. 3.3 Software Development Environments . . . . . . . . . . . . . . . . . . . . . . . 48 3.3.1 LabVIEW . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48 3.3.2 Visual C++ & Dynamic Linking Libraries . . . . . . . . . . . . . . . . . . 49 3.3.3 MATLAB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49 3.4 Interferometric Measurement Program . . . . . . . . . . . . . . . . . . . . . . 50 4 Measurements & Results 53 4.1 System & Chip Measurements . . . . . . . . . . . . . . . . . . . . . . . . . . . 53 4.1.1 Coupling Effeciency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53 4.1.2 Scattering & Spreading Measurements . . . . . . . . . . . . . . . . . 54 4.1.3 Scattering Improvements . . . . . . . . . . . . . . . . . . . . . . . . . . 56 4.2 Signal Analysis & Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57 4.2.1 The Interferogram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57 4.2.2 Fourier Transformation of the Interferogram . . . . . . . . . . . . . . . 58 4.2.3 Signal Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60 4.2.4 Signal Noise & Uncertainty . . . . . . . . . . . . . . . . . . . . . . . . . 67 4.3 Test Measurements with Glycerin . . . . . . . . . . . . . . . . . . . . . . . . . 76 4.3.1 Detection Limit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82 4.3.2 Measurement Drift . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83 4.4 Comparison of System Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . 85 5 Summary 87 5.1 Future Prospects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89 References 91 A Appendix 92 A.1 Single-Slit Geometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92 A.2 LabVIEW & C-Coded Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . 94 A.3 Refractive Index Tables & Plots . . . . . . . . . . . . . . . . . . . . . 
. . . . . . 102 ii
  • 6. 1 INTRODUCTION 1 Introduction In the medical and pharmaceutical industries there exists a demand for stable mea- surement systems, which can deliver low-cost and highly sensitive biomolecular de- tection. The interferometric biosensor is a single-sensor, real-time optical measurement system in the field of label-free analysis & detection to meet these requirements. As proposed by A. Brandenburg [2, 3], the interferometric biosensor (i.e. the system under development) is based on the principle of a Young interferometer and is potentially much more sensitive than the conventional systems in the field of label-free detection and measurement. While the theory surrounding Young interferometry is at the core of the design, the key component of the optical system is, however, the transducer chip with a mono-mode waveguide film. These chips offer a reusable and an inexpensive method for surface sensing. Through the application of evanescent-field sensing technology, section 2.1, it is possible to detect changes in adsorbate layer thickness or mass coverage on a bio- chemically active surface[5]. This, in combination with additional optical hardware and software-powered signal analysis & processing algorithms, enables the final realization of the Young interferometer as an effective biosensing device. The focus of this thesis is to demonstrate the development of the interferometric biosen- sor through the discussion of the following tasks and the presentation of their results: • Improvement in the optical construction • Analysis and determination of the coupling properties of the optical chips • Development of the system measurement software • Maintenance and improvement of the detection and data collection system • Implementation and improvement of the system signal analysis algorithms • Adaptation and application of signal processing algorithms • Determination and reduction of noise effects and disturbances (e.g. 
signal drift) in the system and their influence on the final measurement • Execution of test measurements • Determination of measurement parameters (e.g. detection limit, measurement time constant) 1
  • 7. 1 INTRODUCTION 1.1 Biosensors & Label-Free Detection A sensor, in reference to the field of engineering, is an electronic device for measuring physical quantities by converting the information into an electronic signal. A biosen- sor is simply a specific type of sensor for retrieving this converted information from a biological or physiological process. Due to its function as a real-time detection system, the interferometric biosensor is able to perform kinetic analysis for the examination of biochemical & biomolecular reac- tions. This further allows the investigation of affinity and binding reactions, specific ana- lyte detection, analysis of protein interactions, concentration determination, and more. Therefore, the target applications of the interferometric biosensor are for the areas of pharmaceutical research, medical diagnostics and substance screening. Label-free detection and measurement is essentially the direct sensing of samples with- out the prerequisite of an elaborate and often expensive sample preparation. For this reason label-free detection is extremely attractive for specific application in protein re- search. Since measurements involving proteins are so important, for example in drug and medicine research, there exists a great need for the capability to directly mea- sure uninfluenced protein reactions and bindings. Label-free measurement techniques have this capability, which stems largely from the application of evanescent-field sens- ing technology, section 2.1. A disadvantage of label-free detection is the generally lower sensitivity compared to very effective label-based measurement systems. Labeling, however, requires an elab- orate preparation of the sample material, which often involves some type of chemical marking (e.g. fluorescence) or the creation of reporter molecules in order to retrieve the wanted information from a measurement or reaction. 
However, due to the geom- etry and nature of proteins, these markers or labels often interfere severely with the measurements and the reactions under analysis. This is a major disadvantage of label- based detection. Therefore, the interferometric biosensor represents an alternative, label-free applica- tion in the optical measurement of specific biomolecular and biochemical reactions, which are beyond the current limitations of many labeling technologies, specifically in protein research. Thus, the interferometric biosensor is not a competing technol- ogy with label-based detection. It is important to understand that the interferometric biosensor is, instead, a competing technology for other label-free detection technolo- 2
  • 8. 1 INTRODUCTION gies targeting protein research. Such competing technologies would be the grating coupler or the SPR (Surface Plasmon Resonance) system, which is currently the most widely used label-free detection system [5]. 1.2 System Overview This section provides a general overview of the interferometric biosensor, which is useful for the better understanding of the coming theory. A more detailed system description is given in section 3.1. Figure 1: Simple system diagram of the interferometric biosensor. Figure 1 illustrates the general system layout and design principle of the interferometric biosensor under development. Depending on the light source (e.g. Helium-Neon laser, with a wavelength λ) and the optical chip properties (e.g. grating couplers), an optical configuration first forms the laser beam. The result of the beam forming is an elliptical, almost line-shaped, beam profile in order to better adjust the beam to the dimensions of the optical grating. Next is the most essential system element: the optical chip. It may also be the case that a spatial filter, in the form of a double slit, is placed at the input coupling of the chip for the purpose of reducing coupled light, which is not related to the regions of interest (i.e. the sensing path and the reference path). The result is two parallel light 3
  • 9. 1 INTRODUCTION beams originating from the the two slits of the initial double slit. Coupled light outside the regions of interest could lead to internal scattering and this would interfere with the desired optical sensing output. For further information on this internal scattering, also known as M-lines, please refer to section 3.1.4. The slit dimensions of the initial double slit (slit width w ) are almost uncritical since the optical path length is too small to allow for unwanted diffraction and interference of the propagating light beams before their interaction at the output double slit (slit width w < w ). As mentioned earlier, the sensing and reference regions (length L) are parallel along the path of two coupled light beams. Through the application of evanescent-field sens- ing these parallel light beams interact with the material, which forms the superstrate. These light beams are coupled out of the chip after propagating through the mea- surement regions and are forced to interfere through diffraction as a result of their inter- action with an output double slit. This forms the interferogram, which is detected and encoded by a CCD camera at a considerable distance (D), which is large enough to fulfill the small-angle criteria, where CCD distance (D) >> slit width (w). It is then the difference of the change of the optical properties in the sensing and refer- ence regions (i.e. measurement regions) that eventually alters the interference pattern at the output double slit and finally the form of the interferogram. For the interferomet- ric biosensor, the phenomena to be detected is the individual change in phase ve- locity resulting from the propagation through the measurement regions. This is covered in greater detail in section 2.1. Depending on this difference relationship the interfer- ogram undergoes a relative phase shift along a spatial axis. The direction of this shift depends on the choice of the sensing path. 
For example, if path 1 is chosen, then the difference between the paths could be positive (e.g. a shift in the positive direction). Alternatively, if path 2 would be chosen for the same measurement, the difference and the shift of the interferogram would also reverse. This difference relationship inherent to the interferogram is extremely beneficial, since the optical properties in the regions of interest are compared relative to each other. It allows for the almost entire cancellation of localized disruptions (e.g. temperature, pressure), since all conditions in the measuring regions should be identical, with the only disparity being the fundamental difference of the effective refractive index. 4
  • 10. 2 FUNDAMENTAL THEORY 2 Fundamental Theory Throughout the development of this project different theories from a variety of areas have been applied. This section describes the fundamentals of the major theoretical principles, in such areas as optics, physics, biology, signal processing and computing, and their connection to the overall project. First, evanescent-field sensing and its use with interferometry are explored in the first two sections of this chapter. The third section discusses the specific system principles and relationships of the interferometric biosensor, resulting in the derivation of the needed measured parameters for the determination of the final measurement signal (i.e. phase changes and changes in the effective refractive index). The sections following this de- scribe the various noise effects and influences present in the system, as well as exploring the nature of phase error and fluctuations present in the final measurement signal. The final major section gives an in-depth explanation of the Fourier transform, which forms the basis of the various signal analysis algorithms. These algorithms are the math- ematical mechanism for the determination and calculation of the system variables and final measurement signal. Associated with these algorithms are errors and noise sources inherent to the Fourier methods. As a final topic of the chapter signal process- ing theory and methods are discussed for the overall compensation and reduction of error and noise for the physical system and the analysis algorithms. 2.1 Evanescent-Field Sensing Technology With the application of thin-film waveguides it is possible to excite a guided mode of coupled light, which possesses an evanescent field distribution that decays exponen- tially into both the substrate (e.g. optical glass) and the superstrate (e.g. an interface, such as air, fluid, biological sample). 
With this configuration the change in the optical properties of the superstrate is probed by the evanescent tail of the mode propagating along the waveguide structure. An interaction with the superstrate will both alter the tail’s propagation speed, caused by a change in refractive index, and alter its attenua- tion, effected by a change in the absorption coefficient. These alterations would then also be detectable as a corresponding change in the phase velocity of the guided wave. Figure 2 illustrates this. 5
  • 11. 2 FUNDAMENTAL THEORY Figure 2: Overview of evanescent-field sensing technology with an optical chip. The phase velocity vp can be used in combination with the speed of light c to introduce a new quantity, the effective refractive index ne f f . Effective Refractive Index = Speed of Light Phase Velocity or ne f f = c vp (1) Image 1 in Figure 2 shows the basic structure of an optical chip. Image 2 then repre- sents this chip with an interface or superstrate, written as "Fluid". Intuitively this fluid has a refractive index and its corresponding change in phase velocity vre f can be either cal- culated in relation to this refractive index or with an effective refractive index. In Image 3 the necessity of an effective refractive index is clearer, since it is not intuitively correct to speak of the refractive indexes of molecules. However, since it is obvious that these molecules will interact with the evanescent field, it is easily understood that the result will be a phase velocity v1 smaller than the speed of light. Image 4 is a continuation from Image 3. Here, the introduction of new molecules (e.g. representing a binding reaction of antibodies and antigens) further alters the optical properties of the super- strate. This causes a change in the phase velocity v2, which can be compared to v1 to obtain a change in the effective refractive index ∆ne f f . This ∆ne f f is then the measured result of e.g. introducing antigens to a biochemically prepared chip surface. Through interaction with the evanescent wave, the binding of target bioagents to the receptors on the sensing surface produces a phase change in the guided light beam relative to a parallel reference light beam. These two parallel beams propagate along separate regions of the optical chip surface and form the basis of an interferometer, or an optical sensing circuit. 
In the specific case of the interferometric biosensor, these two parallel beams are coupled out of the optical chip via an optical grating. This output then impinges upon a double slit to induce diffraction and interference. This is visible in Figure 1. Interferometry is discussed in greater detail in the following section.
2.2 Interferometry & Diffraction

Interference is the term from which both "interferometry" and "interferogram" are derived. Within the context of this project, interference is a phenomenon involving waves of any kind when they interact at the same time and place. Interference can be visualized as the superposition (i.e. addition & subtraction) of two or more waves. The result of this is either constructive interference, destructive interference, or no resulting interference (e.g. if the waves are orthogonal to each other). An interferometer is then a device for producing interference between two or more waves. Thus, an interferogram is quite literally the resulting diagram of the interference, which is recognizable by a number of light fringes. The fringes are bright where the waves interfere constructively. Alternatively, the fringes are dark where the waves interfere destructively. Figure 3 illustrates the visible interference pattern of a single slit and a double slit from a Helium-Neon laser (λ = 632.8 nm).

Figure 3: Fringes from a single slit and a double slit. The pattern displays 4 modes (i.e. one maximum and 3 neighbouring maximums on each side). With all other parameters remaining equal (e.g. slit width, wavelength, slit separation, etc.) the single-slit pattern represents the envelope of all further multiple-slit patterns.

Each visible spot maximum, or maximum of constructive interference, is also known as a mode. In an interferogram there are (2 × Modes) − 1 maximums. Since there are seven maximums in the single-slit pattern of Figure 3, there are four modes overall. The main maximum in the center is mode zero (m=0). The maximums immediately neighbouring the main maximum on the left and the right are mode one (m=1), then mode two, and so on. For the single-slit pattern, the difference in intensity between the bright and dark fringes, or spots, is quite easy to determine.
There is also a difference in intensity between the visible spots (i.e. not all maximums have the same intensity). Thus, another way of viewing an interferogram is to measure and plot its intensity distribution over space. Figure 4 depicts the intensity distribution over space of an ideal interferogram. For a double slit, the far-field interferogram consists of a cos²(α1) interference pattern, which is modulated by the envelope (sin(α2)/α2)². Both the envelope and the interference pattern are depicted.

Figure 4: Envelope and interference pattern from a double slit, which shows 4 modes: m=0,1,2,3

The theory of single and double slits is explained in further detail in combination with Fraunhofer diffraction.

2.2.1 Young's Interferometer

The interferometric system of this project is based on the Young interferometer. This type of interferometer is essentially a wavefront-splitting interferometer, where monochromatic plane waves impinge upon a surface with two small holes or slits. When these slits, separated by d and with a width w, can be considered a line of point sources, a fringe pattern, as discussed earlier, can be seen on a screen or by a camera at a sufficiently large distance D, as best illustrated in Figure 1 and Figure 6. The principles and operation of the Young interferometer are described systematically through the examination of single-slit diffraction and double-slit diffraction, the latter also often referred to as Young's Double-Slit Experiment.
2.2.2 Single-Slit Diffraction

Light in the form of plane waves, which impinges upon a narrow slit, produces a distributed pattern of light caused by diffraction. Diffraction is the phenomenon by which wavefronts of propagating waves bend upon interaction with obstacles.

Figure 5: Single-slit diffraction phenomenon, where r is the radius of the aperture and D is the distance from the aperture.

In simple cases diffraction can often be further identified as either Fresnel (near-field) or Fraunhofer (far-field) diffraction. The region of interest can be determined by the Fresnel number:

N_F = r²_aperture / (λD) (2)

These relationships are discussed by Born and Wolf in [9]. Referring to Figure 5 and Equation (2), if N_F > 1, then the region is the near field, and if N_F < 1 it corresponds to the far field. The interferometric biosensor operates in the far field, as based on the Young interferometer, and therefore the diffraction of interest is Fraunhofer diffraction. It is the single-slit diffraction which determines the fundamental shape of the interferogram, or rather the interferogram envelope, as described by the following relationship:

W = 2λD / w (3)

In words, the envelope width W increases with an increase in the wavelength λ and/or a decrease in the slit width w. As mentioned in section 2.2, the shape of the envelope is determined by the (sin(α2)/α2)² function.
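The Fresnel-number criterion of Equation (2) and the envelope width of Equation (3) can be sketched numerically as follows; the aperture radius, slit width and screen distance are assumptions chosen only for illustration, not system specifications.

```python
# Hedged numeric sketch of Eqs. (2) and (3); geometry values are assumed.
LAMBDA = 632.8e-9  # HeNe wavelength [m]

def fresnel_number(r_aperture, distance):
    """N_F = r^2 / (lambda * D); N_F > 1: near field, N_F < 1: far field (Eq. 2)."""
    return r_aperture ** 2 / (LAMBDA * distance)

def envelope_width(slit_width, distance):
    """Single-slit envelope width W = 2 * lambda * D / w (Eq. 3)."""
    return 2 * LAMBDA * distance / slit_width

D = 0.5  # screen distance [m], assumed
print(fresnel_number(50e-6, D))   # << 1, so a 50 um aperture is in the far field
print(envelope_width(20e-6, D))   # halving the slit width would double this
```

The second print illustrates the inverse dependence on w stated in the text: a narrower slit spreads the envelope wider.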
2.2.3 Double-Slit Diffraction

The intensity distribution resulting from the addition of a slit follows the cosine function, and precisely the cos²(α1) function in the case of the double slit. Despite a new intensity distribution, the envelope of this interferogram is still governed in general by the properties of the single slit for the case w << d. Important for the understanding of interferometry as a tool for measurement is to understand how any change in the interferogram can be detected and represented by a meaningful quantity or parameter. A starting point is the direct analysis of the Young Double-Slit Experiment, shown in Figure 6.

Figure 6: Young's experiment. Two slits of width w, separated by d, with a distance D to a screen in the far field. Δs represents the difference in path length of light beams from the various slits and x is the spatial offset of the interferogram.

Direct trigonometry yields the following relationships from the double-slit geometry:

tan θ = Δx / D (4)

sin θ = Δs / d (5)

Further, with the assumption that D >> w, the application of the small-angle approximation yields:

tan(θ) ≈ sin(θ) = Δx / D

and

Δs = dΔx / D (6)

Equation (6) is an extremely important relationship. It is able to connect the spatial offset of the interferogram to the change in path length Δs. Also of great importance is the phase difference of the two interfering beams Δφ, which is given below, together with the result of the substitution of Equation (6):

Δφ(x) = 2πΔs / λ = 2πdΔx / (Dλ) (7)

One further relationship, which can be calculated with the introduced parameters and variables, is the period of the interferogram:

p = Dλ / d (8)

These relationships and how they depend on one another regarding the interferometric biosensor are derived and discussed in section 2.3.

2.3 System Principles & Relationships

Now that a certain theoretical foundation has been explained, it is possible to expand upon the interferometric relationships which have been introduced thus far. Recall that the approximate intensity distribution resulting from Young's Double-Slit Experiment is:

I(x) ∝ cos²(φ(x) − δ) (9)

The parameter φ(x) from Equation (9) can also be written as follows:

φ(x) = 2π f x (10)

Equation (10) is the radian representation of φ(x) for the cos² function. The variable x has already been defined as an offset along a spatial axis, Figure 6. The variable f is the frequency of the cos² function. Recall Equation (8). It is also well known in interferometry that the spatial period p of an interferogram is p = Dλ/d.
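A short sketch ties Equations (6)-(8) together numerically: a fringe shift of one full period p corresponds to a phase difference of exactly 2π. The slit separation and screen distance are assumed example values.

```python
import math

# Small-angle double-slit relations; d and D are assumed example values.
LAMBDA = 632.8e-9  # m
d = 50e-6          # slit separation [m], assumed
D = 0.5            # screen distance [m], assumed

def path_difference(dx):
    """Eq. (6): delta_s = d * delta_x / D."""
    return d * dx / D

def phase_difference(dx):
    """Eq. (7): delta_phi = 2*pi*delta_s / lambda."""
    return 2 * math.pi * path_difference(dx) / LAMBDA

period = D * LAMBDA / d  # Eq. (8): p = D * lambda / d
print(period)                    # fringe period on the screen [m]
print(phase_difference(period))  # one period of offset corresponds to 2*pi
```

For these assumed values the fringe period comes out at about 6.3 mm, and shifting the pattern by exactly one period reproduces a 2π phase difference, as the equations require.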
2.3.1 Determination of Phase Difference

Knowing that the spatial period is simply the inverse of the spatial frequency, Equation (10) can be re-written as the following and expanded with Equation (8):

φ(x) = 2πx / p or Δφ(x) = 2πΔx / p = 2πdΔx / (Dλ) (11)

For a second confirmation of this result recall Equation (7), which was derived directly from the Young double-slit geometry:

Δφ(x) = 2πΔs / λ = 2πdΔx / (Dλ) (12)

The other variable of Equation (9) is the phase shift δ of the intensity distribution I(x). As stated by A. Brandenburg [2], "the phase shift denotes the influence of the media in the flow cells." This phase shift is dependent on the difference in effective refractive index Δn_eff in the flow cells and the length of the flow cells L, which have been introduced in section 1.2:

δ = 2πLΔn_eff / λ (13)

Comparing Equation (12) and Equation (13), the result is:

Δx = DLΔn_eff / d (14)

When the result from Equation (14) is re-substituted into Equation (12), the following is obtained:

Δφ = 2πLΔn_eff / λ (15)

or

Δφ = 2πDLΔn_eff / (pd) (16)

This is an important result, since there is now a relationship between the phase difference Δφ and the difference in effective refractive index Δn_eff, shown in Equation (15). Furthermore, in order to determine Δφ, all variables of Equation (16) other than the period p are known constants during a measurement. In other words, if one can determine the period of the interferogram, then it is quite easy to determine the phase difference and therefore the difference in effective refractive index. The determination of the period p can be accomplished with the Fourier transform, which is discussed in great detail in section 2.4. In the specific case of the interferometric biosensor, this can be accomplished with the FFT, section 2.4.4.

2.3.2 Determination of Refractive Index Difference

Alternatively it is possible to re-arrange the above equations, e.g. Equation (15), for an expression of Δn_eff:

Δn_eff = λΔφ / (2πL) (17)

Again, the variables λ and L are both known constants, which shows that the conversion between Δn_eff and Δφ is the simple multiplication of a single constant represented by:

K = λ / (2πL)

Referring to the interferometric biosensor's system parameters (i.e. λ = 632.8 nm and L = 5 mm), K is calculated to be ≈ 20.14×10−6, meaning that the change in effective refractive index is measured on a scale of 10−6 and can be influenced by disturbances even on the order of 10−8.

2.3.3 System Noise & Sources of Error

The measurement signal (i.e. phase difference Δφ or effective refractive index difference Δn_eff) always contains a certain combination of distortion effects or noise, which are introduced by the environment, the analysis methods and the system in which the signal resides or originates. Noise can be defined as "any unwanted signals, random or deterministic, which interfere with the faithful reproduction of the desired signal in a system" [18]. Due to the often random nature of noise it is normally best described through statistical properties (e.g. mean, RMS, standard deviation, etc.). Additional representations include the use of the auto-correlation function (time domain) and its Fourier transformation (i.e. power spectral density).
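Returning to section 2.3.2 for a moment, the conversion constant K = λ/(2πL) quoted there can be reproduced directly from the stated system parameters; this minimal check uses no values beyond the text's λ and L.

```python
import math

# Reproduce K = lambda / (2*pi*L) from the stated system parameters.
LAMBDA = 632.8e-9  # m
L = 5e-3           # flow-cell length [m]

K = LAMBDA / (2 * math.pi * L)

def delta_n_eff(delta_phi):
    """Eq. (17): convert a measured phase difference into an index difference."""
    return K * delta_phi

print(f"K = {K:.4e}")  # ~2.014e-05, i.e. the 20.14e-6 quoted in section 2.3.2
```

A full 2π phase difference then corresponds to Δn_eff = λ/L, and any smaller phase change scales linearly through K.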
The following are several common and possible sources of noise and signal fluctuation in the system of the interferometric biosensor.

Laser Stability

Fox et al. [12] define the stability of a HeNe laser through several uncertainties. These include the wavefront curvature ΔλWC, beam misalignment ΔλAlign, HeNe wavelength ΔλHeNe and the counting resolution of the fringe pattern ΔλCount. The most relevant for the interferometric biosensor would be the uncertainty of the HeNe wavelength, which is estimated in [12] to be ΔλDoppler/λ ≈ 3.2×10−6. This is based on the Doppler width of the Ne emission line, but these details are outside of the scope of this thesis.

Electronic Noise Sources

In electrical systems there exist many possible types of noise from electronic sources, several of which are described below [11, 16, 18].

• Thermal noise - the random thermal motion of electrons in a conducting medium.
• Absorption noise - based on the theory of black-body radiation, whereby the same energy absorbed by a body is radiated from that body as noise.
• Shot noise (quantum noise) - arises from the actually discrete nature of current in electronic equipment and is caused by the diffusion of minority carriers and the random generation and recombination of electron-hole pairs in semiconductor-based devices.
• Flicker noise (1/f noise) - arises from the surface imperfections in electronic devices resulting from the fabrication process. It is most important at low frequencies, where the relevant limit depends on the specific system (< 100 Hz, 1 kHz, 10 kHz, etc.).
• Photocurrent noise - W. Freude [16] describes a noise present in photoreceivers (e.g. the pixels of the CCD camera). This arises due to fluctuations in the photocurrent around its mean value. The fluctuations are the result of shot noise, thermal noise and a type of classical noise known as relative intensity noise (RIN).
Mechanical Noise Sources

This is a term to encompass the physical disturbances in the system. Examples could be the misalignment of system components, component construction defects, etc. These could result in an external path difference of the propagation path, Δl. Additionally, physical vibrations in the system could continuously alter Δl and appear as fluctuations in the measurement signal. Furthermore, under certain assumptions the physical shifting of the CCD camera could mimic relatively large phase shifts of the interferogram.
Of particular interest are those noise sources contributing to random noise (e.g. flicker noise, shot noise, etc.). A further example of random noise is Gaussian white noise. This type of noise is independent of the spectral variable (e.g. frequency), has an average value of zero, and its standard deviation is described through a Gaussian probability density function. The autocorrelation thereof is shown in Figure 7. An important characteristic of the autocorrelation function of random noise is the existence of a central maximum (peak), with the surrounding values having small or zero amplitudes. Since the result of shifting a series over its own length N is 2N, the center is often either located at N or at zero, if the axis is symmetric from -N to N. The former is true in this case.

Figure 7: Gaussian white noise of 200 samples randomly generated from MATLAB and its autocorrelation. A center peak indicates that the samples are strongly uncorrelated, or random.

In the cases of laser stability and random noise, their specific theoretical influences on the actual measurement signal (i.e. phase difference Δφ) are discussed in more detail in the following section.

2.3.4 Phase Error & Uncertainty

It is reasonable to assume that a minimum phase error and phase uncertainty are required for the determination of the optimum detection limit of the interferometric biosensor. The improvement of the phase error through various techniques and signal processing methods is discussed in several papers and literary sources; however, the terms "phase error" and "phase uncertainty" have many varying definitions in these works. This section defines and explores the sources of error and uncertainty in the measured phase and in the final measurement of the effective refractive index.
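Returning briefly to the white-noise discussion above: the autocorrelation experiment of Figure 7, generated there with MATLAB, can be reproduced in a few lines of Python. This is a sketch of the same idea, not the original script; it generates Gaussian white noise and confirms that the zero-lag autocorrelation value dominates all other lags, which is exactly the central peak visible in the figure.

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible
N = 200
noise = [random.gauss(0.0, 1.0) for _ in range(N)]  # Gaussian white noise

def autocorr(x):
    """Unnormalised autocorrelation over lags -(N-1)..(N-1); centre at index N-1."""
    n = len(x)
    return [sum(x[i] * x[i + lag] for i in range(n - abs(lag)))
            for lag in range(-(n - 1), n)]

r = autocorr(noise)
centre = r[N - 1]                                    # lag 0: the sum of squares
largest_side = max(abs(v) for v in r[:N - 1] + r[N:])
print(centre > largest_side)  # True: by Cauchy-Schwarz the lag-0 value dominates
```

The dominance of the centre value is guaranteed mathematically (|r(lag)| ≤ r(0) by the Cauchy-Schwarz inequality); for uncorrelated samples the side lobes are additionally much smaller than the peak, which is the property Figure 7 illustrates.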
Phase Error

Here, a definition of the phase error from S. Nakadate [14] is used: it is the absolute difference between the true phase φ and the measured phase φ′. This is written as δφ to avoid confusion with Δφ in section 2.3.

δφ = |φ − φ′| (18)

The true phase would result from the exact period determination of the interferogram under ideal measurement and analysis conditions. The measured phase is then the phase evaluated at the approximated period of the interferogram, when noise, sampling and FFT-associated error sources are considered. To minimize the phase error it is necessary to determine the precise period of the interferogram. The phase error is thus a systematic error induced by the distortion effects of signal analysis (section 2.4.5), which may be improved through signal processing. For this, methods such as windowing and zero padding will be discussed in the following sections.

Phase Uncertainty from Random Noise

The other important term is the standard deviation of the measured phase, the "phase uncertainty," which S. Nakadate [14] determines from the measured phase variance, defined as:

σ²φ = (1 / SNR²) · (Σ_{k=1}^{N} W²_k) / (Σ_{k=1}^{N} W_k)² (19)

Here, W_k is the chosen window function and SNR is the signal-to-noise ratio. The occasion is taken to introduce the expression for the signal-to-noise ratio: a background influence or signal offset is subtracted from the maximum signal strength and then divided by the total noise RMS.

SNR = (S_max − S_background) / Noise_RMS (20)

Equation (19) is calculated under the assumption that the noise present in the interferogram is white, stationary and Gaussian noise of zero mean. For example, a rectangular window function (section 2.4.6) is defined by:

W_k = 1 for |k| ≤ (N−1)/2, 0 otherwise (21)
The standard deviation of the calculated phase, σφ, would then be:

σφ = 1 / (SNR √N) (22)

Notice, however, the result if a von Hann window is implemented [14]:

σφ,Hann = (1 / SNR) · √(3 / (2(N−1))) (23)

Therefore, it may be possible to reduce the phase uncertainty of the calculated phase due to random noise by increasing the SNR of the sampled signal and by increasing the number of sampled points, N. Furthermore, the use of a special window function, such as the von Hann, Hamming, Gaussian, etc., is then only useful if its application increases the SNR enough to compensate for the additional uncertainty (e.g. the von Hann window increases σφ by a factor of ≈ 1.22). Of course, the SNR and the number of sampled points can't be increased indefinitely, so a compromise for the ideal solution must be accepted. In the case of the interferometric biosensor, only a portion of the entire signal from the CCD camera is considered and sampled (e.g. from 2000 pixels, the interferogram is comprised of only e.g. N=256 pixels). An increase in N can be achieved by increasing the sampling window, i.e. by sampling more of the original signal. This will, however, also alter the SNR, and there is no guarantee of an increase in the SNR, especially if more noise is sampled. N can also be increased through oversampling. As already mentioned, this is only possible with a new camera with smaller pixel dimensions (e.g. the width of the interferogram is unaltered, but if the pixels are half the size, then the interferogram could be comprised of twice the number of pixels).

Example: In a best case, where both the SNR and N are perhaps unrealistically doubled and the chosen window function is rectangular, the resulting change in the calculated phase uncertainty due to random noise would be:

σφ,optimized = (1 / (2·SNR)) · (1 / √(2N)) ≈ (1/3) σφ (24)

With twice the amplitude SNR in the frequency domain (2·100) and twice the number of samples (N = 2·256), σφ,optimized would be 2.08×10−4.
This is a relatively small error in the phase, and it is equivalent to a resolution in Δn_eff of ≈ 4.2×10−9.
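The window formulas above can be checked numerically. The sketch below evaluates Equation (19) for a rectangular and a von Hann window with the illustrative values N = 256 and SNR = 100 (assumptions for this sketch, not system specifications), recovering both the 1/(SNR√N) result of Equation (22) and the ≈ 1.22 von Hann penalty factor.

```python
import math

def sigma_phi(window, snr):
    """Eq. (19): sigma_phi = sqrt(sum(W_k^2)) / (SNR * sum(W_k))."""
    return math.sqrt(sum(w * w for w in window)) / (snr * sum(window))

N, SNR = 256, 100  # illustrative values
rect = [1.0] * N
hann = [0.5 * (1 - math.cos(2 * math.pi * k / (N - 1))) for k in range(N)]

s_rect = sigma_phi(rect, SNR)
s_hann = sigma_phi(hann, SNR)
print(s_rect)            # 1/(SNR*sqrt(N)) = 6.25e-4, as in Eq. (22)
print(s_hann / s_rect)   # ~1.22, the von Hann penalty noted in the text
```

This makes the trade-off concrete: a tapered window is only worthwhile if it buys back at least the ~22 % it costs in raw phase uncertainty.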
In summary, based on the above findings, if the phase uncertainty is mainly a product of random noise, then the main goals of signal processing should be the following, such that the optimum detection limit of the interferometric biosensor is reached:

1. Calculation of the true phase to reduce the phase error
2. Increase of the SNR and, where possible, of the number of sampled points N for a reduction in the phase uncertainty

Further Sources of Phase Error

Further sources of error can be derived using Equation (15) in a re-written form:

Δφ = 2πΔL·n_eff / λ (25)

Here, it is assumed that there is no relative medium to be measured. In this case, instead of a difference in the effective refractive index, a measured phase change could be the result of varying path lengths, both internal and external to the flow cell. First the internal case shall be examined. An internal path difference would be represented as ΔL = L2 − L1, where L is the propagation path length in the flow cell. Also, if there were no physical change in phase velocity, v_p = c, making the absolute effective refractive index n_eff = 1. The expression for the measured phase change would become:

Δφ = (2π/λ)(n_eff · L2 − n_eff · L1) = 2πΔL / λ (26)

Ideally the system construction should ensure that ΔL is zero, since ΔL is limited to the distance between the optical gratings. In an extreme case where construction defects are considered, an assumed path difference could be 320 nm, which represents a full grating period (e.g. one half of a grating period at both the input and output gratings). However, since the fluctuations in the phase change are to be considered, it is possible to observe possible errors from other sources in combination with the path difference. One such source could be an instability in the wavelength δλ, which could mimic a change in the measured phase. This can be obtained through the derivative of Equation (26).
|d(Δφ)/dλ| = 2πΔL / λ² (27)

Under certain assumptions Equation (27) can be re-written as:

δ(Δφ) = (2πΔL / λ²) · δλ (28)

Upon rearranging this becomes:

δ(Δφ) ≈ 2π (ΔL/λ) (δλ/λ) (29)

The result of this derivation, Equation (29), can also be found in [14] in a similar form.

Example: Assume a path difference of ΔL = 1×10−6 m, which is much larger than the discussed 320 nm. Furthermore, assume a wavelength stability of the laser light Δλ/λ = 3.2×10−6 for a Helium-Neon laser (λ = 632.8 nm) [12]. Then Equation (29) predicts a measured phase change of ≈ 3.0×10−5. Upon conversion this is a change in the effective refractive index of ≈ 6.0×10−10. As calculated in section 2.3.2, a change in the effective refractive index has the order of 10−6. It is clear that the effects of laser instability and internal path differences are too small to interfere with the uncertainty of the measurements of the interferometric biosensor. Note: In [14] a laser instability Δλ/λ on the order of 10−7 is assumed. The above example assumes an instability on the order of 10 times worse.

Now the external path difference is considered and derived. The re-written form of (15) would now be:

Δφ = (2π/λ)(n_air · l2 − n_air · l1) = 2π n_air Δl / λ (30)

Here l represents the external path length (i.e. between the first beam-forming optic and the CCD camera). The refractive index of interest is now external to the flow cell and is that of air, n_air. Following the same derivation as in the internal case:

|d(Δφ)/dλ| = 2π n_air Δl / λ² (31)
Again, this implies:

δ(Δφ) ≈ 2π n_air Δl δλ / λ² (32)

The occasion is now taken to substitute this into Equation (17), which results in a direct expression for δ(Δn_eff):

δ(Δn_eff) = (δλ/λ) (Δl/L) n_air (33)

Example: With the same values as before, δλ/λ = 3.2×10−6, an external path difference Δl = 100×10−6 m, a propagation path L = 5 mm and n_air ≈ 1, the resulting δ(Δn_eff) ≈ 6.4×10−8. This would represent a relatively large disturbance, since this value would be on the same order as the standard deviation of the measured phase φ′. However, an external path difference of even 100×10−6 m could be an overestimate. Therefore, it is important to further investigate those causes in the system which could lead to an external path difference. From the mere fact that fluctuations exist in the measurement signal, it is reasonable to assume that any external path difference would not remain constant (e.g. at 100×10−6 m), but would also fluctuate. Here, a hypothesis could be mechanical vibrations of the system, which could contribute to the continual fluctuation of the external path difference itself. In summary, it may be possible to consider the combination of laser instability and external propagation path differences as legitimate sources of fluctuations in the phase and in the final measurement signal of effective refractive index. Furthermore, it may be possible to link the changes in external propagation path difference to the physical effect of vibrations, which could slightly alter the relative position of components in the biosensor.
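The two worked examples above (internal and external path differences) can be collected into one small error-budget sketch; all numbers are the ones assumed in the text.

```python
import math

LAMBDA = 632.8e-9       # HeNe wavelength [m]
L = 5e-3                # flow-cell propagation length [m]
STABILITY = 3.2e-6      # delta_lambda / lambda, from Fox et al. [12]
K = LAMBDA / (2 * math.pi * L)   # conversion constant of section 2.3.2

# Internal case, Eq. (29): assumed path difference dL = 1e-6 m
dL = 1e-6
dphi_internal = 2 * math.pi * (dL / LAMBDA) * STABILITY
print(dphi_internal, K * dphi_internal)   # ~3e-5 rad, ~6e-10 in n_eff

# External case, Eq. (33): assumed dl = 100e-6 m in air (n_air ~ 1)
dl, n_air = 100e-6, 1.0
dn_external = STABILITY * (dl / L) * n_air
print(dn_external)                        # ~6.4e-8, as in the text
```

The two printed index disturbances differ by about two orders of magnitude, which is why the external path difference is singled out in the text as the relevant one.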
2.4 Fourier Analysis & Signal Processing

As mentioned in the previous chapter, the extraction & detection of information from an environment or a system can be performed with sensors. The topic of sensors has already been discussed in some detail, focusing mainly on the area of "biosensors". Biosensors have been described as electrical and/or optical systems. Therefore, the output of the sensors takes the form of electrical and optical signals. However, once a signal has been detected, can it immediately be identified as information? More specifically, does the detected signal contain any useful information? Signal analysis is a mathematical science for the transformation of signals in order to determine if, what, and how much information a signal might contain. The transform techniques, probability theory and many other mathematical procedures of signal analysis form the fundamental structure of all communication theory. The main method of signal analysis applied in the development of the interferometric biosensor is the Fourier transformation. This is described in detail in the following sections, as well as possible sources of error and signal processing techniques for their compensation.

2.4.1 Fourier Representations of Functions

It is stated that a function can be represented approximately over a given interval by a linear combination of members of an orthogonal set of functions, e.g. g_n(x). This term cannot always be made an equality, hence the word "approximately" above.

f(x) ≈ Σ_{n=−∞}^{∞} c_n g_n(x) (34)

Analogous to the dot product, where the result of two orthogonal vectors is zero, orthogonal functions are those with a property which states: over a given interval, a particular operation performed between two distinct members of the set yields zero.
If we consider two functions g1(x) and g2(x), then they are orthogonal over the interval t0 to t0 + 1/f0 if:

⟨g1(x)|g2(x)⟩ = ∫_{t0}^{t0+1/f0} g1(x) g2(x) dx = 0 (35)

It is stated here without proof that the following set of harmonic time functions is a complete orthogonal set of functions over the interval t0 to t0 + 1/f0:

cos(2πn f0 t), sin(2πn f0 t), where 0 ≤ n < ∞
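The orthogonality relation of Equation (35) can be verified numerically for members of this harmonic set with a simple Riemann sum; the step count M and the chosen members (n = 2 and n = 3) are arbitrary choices of this sketch.

```python
import math

f0 = 1.0
T = 1.0 / f0
M = 10_000  # Riemann-sum steps (an assumption of this sketch)

def inner(g1, g2, t0=0.0):
    """Approximate Eq. (35): integral of g1*g2 over one period starting at t0."""
    dt = T / M
    return sum(g1(t0 + k * dt) * g2(t0 + k * dt) for k in range(M)) * dt

cos2 = lambda t: math.cos(2 * math.pi * 2 * f0 * t)   # n = 2 member
sin3 = lambda t: math.sin(2 * math.pi * 3 * f0 * t)   # n = 3 member

print(inner(cos2, sin3))   # ~0: distinct members are orthogonal
print(inner(cos2, cos2))   # ~T/2: a member paired with itself is not
```

The second print is the usual normalisation detail: orthogonality holds between distinct members only, while each member has a nonzero "length" of T/2.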
For convenience, when working with time functions, the opportunity is taken to define the function period T:

T = 1/f0 (36)

Here the previously mentioned term "signal" is reintroduced as s(t), a function representing a time signal. Following this line of logic and applying Equation (34) and the orthogonal harmonic time functions, any time function, or signal, s(t) can be represented by:

s(t) = a0 cos(0) + Σ_{n=1}^{∞} [a_n cos(2πn f0 t) + b_n sin(2πn f0 t)] for t0 < t < t0 + T (37)

The above expansion is known as the Fourier series. If the index of the series n is now any positive or negative integer, it is possible to consider a complete set of complex harmonic exponentials. Over one period, with the same time interval of t0 to t0 + T, and in the form of Euler's identity, it can be written:

e^{j2πn f0 t} = cos(2πn f0 t) + j sin(2πn f0 t) (38)

As stated, the series expansion in Equation (37) applies for the time interval of t0 to t0 + T, and the signal s(t) can therefore be expressed as a linear combination of complex exponentials with complex coefficients over this interval. With this considered, Equation (37) can be stated in the following complex form:

s(t) = Σ_{n=−∞}^{∞} c_n e^{j2πn f0 t} (39)

Through the multiplication of Equation (39) by e^{−j2πn f0 t} and the integration of both sides, the complex coefficients c_n are given by:

c_n = (1/T) ∫_{t0}^{t0+T} s(t) e^{−j2πn f0 t} dt (40)

With these coefficients one is able to plot the complex Fourier spectrum, which normally takes the form of the magnitude and phase of c_n plotted over the range of n multiplied by the fundamental frequency f0 (i.e. c_n vs. n f0). In this manner a different representation of our original signal s(t) has been achieved for consideration with respect to frequency. In the case of the interferometric biosensor under development, the signal of interest is in the form of an interferogram and is a periodic signal. The analysis of the interferogram's signal in the frequency domain is extremely crucial in the determination of the period of the interferogram, which in turn corresponds to the change in phase velocity of the optical beam as it propagates through a biological sample. These relationships have been explained in section 2.3.

2.4.2 The Continuous Fourier Transformation

Now that the basis for the Fourier representation of periodic time signals has been briefly explained, the question arises whether or not it is possible to achieve a Fourier representation for non-periodic signals. Since the Fourier transformation is very widely known and studied (see [1, 7, 11]), the details of its derivation are not covered in this thesis. Instead, the determination of the Fourier transform is explained in the following manner. Non-periodic signals can be thought of as particular instances of periodic signals whose period approaches infinity. In inverse relationship to the period, the fundamental frequency therefore approaches zero. In effect the separation of the harmonics becomes smaller. Continuing with the limit as f0 approaches zero, the summation of the Fourier series representation of s(t) becomes an integral. As mentioned in the previous section, our time signal s(t) was able to be represented with the complex Fourier spectrum in reference to a different variable, namely frequency (f0). For convenience the new representation in frequency is written as S(f). Analogous to a function, where a set of rules substitutes one number for another, transforms are sets of rules that substitute one function for another.
In this case the transformation is written as:

S(f) = ∫_{−∞}^{∞} s(t) e^{−j2πft} dt (41)

With t as a dummy variable for integration, the transform above defines how, to every function of t, a new function of f is assigned. The above equation is the Fourier transform and states that, given the Fourier transform of a function of time, the original time function can always be uniquely recovered, meaning that either s(t) or S(f) can uniquely characterize a function. This uniqueness and recovery is accomplished with the further aid of the inverse Fourier transform:

s(t) = ∫_{−∞}^{∞} S(f) e^{j2πft} df (42)

2.4.3 Discrete Signals

Simply defined, discrete signals are signals which are not continuous in time. They are signals comprised of values at a defined interval along an axis, for example a transient axis (signal value sampled every second) or a spatial axis (signal value sampled every unit interval). Sampling is the process of converting a continuous axis into a discrete axis by only considering values at defined sampling intervals. For clarification, continuous signals have a value for every infinitesimal interval along an axis. With pen and paper, analytical solutions of continuous signals are feasible, but such analysis often has little practical use in the development of systems. Signals must often be analyzed and processed in discrete steps, because the infinitesimal point-by-point analysis of a continuous signal would quite literally take forever. Fortunately, due to Nyquist's sampling theorem, not all points are needed. Simply knowing enough values at discrete time points makes it possible to fill in the curve between these points precisely. The limitation is having "enough" of these discrete values. In the case of time, Ts represents our sampling period. The Nyquist theorem states that Ts must be less than 1/(2 fmax), where fmax is the maximum frequency in the signal [11]:

Ts < 1/(2 fmax) or fs > 2 fmax

In other words, the sampling frequency fs must be greater than twice the maximum frequency fmax of the signal being sampled. Twice the maximum frequency, 2 fmax, is known as the Nyquist rate. The proof of Nyquist's sampling theorem will not be shown. Relating this to the interferometric biosensor, the discrete axis is not a time axis, but rather a spatial axis represented by pixels.
Each pixel can be thought of as an individual detector and therefore has a corresponding intensity value. The sampling requirement is fulfilled when the period of the interferogram is greater than 2 pixels.
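As a small sketch (assuming one sample per pixel; the helper name is mine), the pixel-sampling condition above can be expressed directly as the Nyquist criterion fs > 2 fmax:

```python
# Sketch: with the CCD pixel pitch as the sampling interval, the sampling
# frequency is 1 sample/pixel, so a fringe period must span more than 2 pixels
# to satisfy fs > 2*fmax. The helper name is my own.

def nyquist_satisfied(period_px):
    """True if a fringe period (in pixels) can be sampled without aliasing."""
    f_signal = 1.0 / period_px   # spatial frequency in cycles/pixel
    f_sampling = 1.0             # one sample per pixel
    return f_sampling > 2.0 * f_signal

assert nyquist_satisfied(4.3)        # a ~4-pixel fringe period is fine
assert not nyquist_satisfied(1.8)    # a sub-2-pixel period would alias
```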
2.4.4 The Discrete Fourier Transformation

Since signals are processed in discrete form, it may seem that the continuous Fourier transform (CFT) is of less use. This is not so; it simply takes on a discrete form. An important mathematical tool for the software implementation of signal processing and analysis is the discrete Fourier transform (DFT). For a discrete signal x(nTs) it is possible to form a corresponding periodic signal xp(nTs) with period NTs, as has been done with the Fourier representation of signals:

xp(nTs) = ∑_{r=−∞}^{∞} x(nTs + rNTs)    (43)

The DFT of xp(nTs) is then defined as:

Xp(j 2πk/(NTs)) = ∑_{n=0}^{N−1} xp(nTs) e^{−j2πkn/N}    (44)

N ≡ total number of sampled points
Ts ≡ sampling period
n ≡ index of discrete points in the signal
r ≡ number of repeated periods

Notice the strong similarity of the DFT to the CFT. The DFT likewise has an inverse transformation, the IDFT, but this is not covered. In general Xp(j 2πk/(NTs)) is a complex function and is often written in the simplified notation X(k):

X(k) = A(k) e^{jφ(k)}  ⇔  Xp(j 2πk/(NTs)) = A(2πk/(NTs)) e^{jφ(2πk/(NTs))}

where A(k) = |X(k)| and φ(k) = arg[X(k)]. The magnitude and phase of the signal can be drawn directly from this.

Fast Fourier Transformation

Of more importance to mention is the fast Fourier transformation (FFT). Again, this will not be covered in great detail, but it is necessary to know that the FFT is a set of powerful algorithms used to efficiently calculate the DFT. In brief, the DFT involves N complex multiplications and N−1 complex additions for each value of X(k). Excluding the additions, the number of multiplications over the entire signal X(k) is N². This alone is quite a large computational load.
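The DFT sum of Equation (44) can be implemented directly, which makes its cost visible (a sketch; names are mine):

```python
import numpy as np

# Direct implementation of the DFT sum in Equation (44) (a sketch; names are
# my own). Each of the N output values costs N complex multiplications,
# giving the O(N^2) load that motivates the FFT.

def dft(x):
    x = np.asarray(x, dtype=complex)
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.exp(-2j * np.pi * k * n / N))
                     for k in range(N)])

x = np.cos(2 * np.pi * 0.23 * np.arange(64))    # fringe-like test signal
assert np.allclose(dft(x), np.fft.fft(x))       # agrees with the library FFT
```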
The FFT, on the other hand, requires a total of (N/2)·log₂N multiplications over the entire signal. Continuing the multiplication example, if a signal has N = 100 values, the direct DFT requires 10000 multiplications, while the FFT requires merely 350 (with log₂N rounded up to the next integer); in other words, almost 97% fewer calculations. Notice the special condition for the FFT implied by log₂N: the signal length N must be a power of 2. Nevertheless the FFT can be applied to any finite-duration signal by appending an appropriate number of trailing zeros to fulfill this condition, extending the signal length to the next power of 2. This is known as zero-padding and is discussed later in greater detail.

The FFT plays a crucial role in the processing and analysis of the interferogram signal of the biosensor. Since the interferogram is sampled many times per second, the computer and software are placed under high demands. Therefore, as one step to maximize the algorithmic and computational efficiency, the FFT, not the direct DFT, is performed in combination with zero-padding where necessary.

2.4.5 Distortion Effects in Signal Analysis

The following effects are possible sources of error (e.g. noise) associated with the sampling and Fourier transformation of the interferogram. Please recall that random noise also exists in the Fourier representation, but it will not be discussed here, since it was already introduced in section 2.3.3. This section gives only a brief introduction to some distortion effects. For further information please refer to [11].

Aliasing

When the sampling frequency is too low and the Nyquist theorem is not fulfilled, the result is aliasing. The name is derived from the fact that higher frequencies disguise themselves in the form of lower frequencies. The sampling frequency fs is directly related to the CCD camera and its pixel size.
The CCD camera is discussed in section 3.1 and the fulfillment of Nyquist's theorem for the interferometric biosensor is dealt with in section 4.2.
Quantization Noise

Quantization is the conversion of an analog signal (e.g. a light interference pattern) to its digital approximation. The approximation arises from rounding errors due to a finite encoding resolution, i.e. a finite number of quantization levels. These errors are considered noise, defined as the difference between the quantized signal and the original signal, which gives rise to inaccurate approximations of the original signal's amplitude. Quantization is performed by the CCD camera and its electronics. As stated by M. Kujawinska and J. Wojciak [10], quantization noise is negligible for a resolution greater than 6 bits. Therefore, the 12-bit resolution of the CCD camera excludes quantization noise as a major factor in the interferometric biosensor.

Oscillations & Leakage

Only a finite portion of a signal can ever be analyzed, and therefore truncation error is always present. This truncation leads to oscillations or ripples in the frequency domain when the signal is analyzed by a Fourier transformation. This behaviour is commonly known as the Gibbs phenomenon and the ripples as Gibbs oscillations [1, 15]. Since signals of finite duration also have finite energy, these ripples represent an unwanted distribution of a partial amount of the signal's energy over surrounding elements of a domain (e.g. the frequency domain); in other words, the energy leaks out to the surroundings. This is commonly referred to as leakage (e.g. DFT leakage). A simplified explanation is the attempt to transform discontinuities, as occur at "the corners" of a rectangular window. A solution is then to use a non-rectangular truncation or window function, which is discussed in section 2.4.6.
2.4.6 Signal Processing

By means of signal processing it is often possible to minimize the effect of many distortions and to convert signals into desirable forms, from which the wanted information is more easily obtained or extracted. The following sections offer a theoretical explanation of how system-related noise and signal-analysis distortion effects may be dealt with to provide an optimum measurement signal. The actual algorithms and their implementation are explained in further detail in section 3.2 and in A.2.
Window Functions

One method for the reduction of Gibbs oscillations is the use of non-rectangular windows without discontinuities. Many window functions exist and they are frequently used for filtering applications, especially in digital filtering. Merely as examples, two very common windows, mentioned here together due to their strong similarity, are the von Hann and the Hamming windows:

w_H(nT) = α + (1 − α) cos(2πn/(N − 1))   for |n| ≤ (N − 1)/2;   0 otherwise    (45)

von Hann window: α = 0.50;  Hamming window: α = 0.54

Figure 8: Rectangular window, von Hann window (α = 0.50) and Hamming window (α = 0.54)

Windowing plays a role in the final determination of the period of the interferogram by reducing leakage. This reduction of leakage reveals a more precise shape of the frequency peak in the FFT, as Figure 9 attempts to illustrate. Based on the findings in section 2.3.4, window functions may also play a key role in the reduction of random noise, referred to as phase uncertainty. This is only true, however, if their implementation increases the SNR enough to overcome the uncertainty they themselves introduce, as implied by Equation (19).
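Equation (45) can be checked numerically: shifting its symmetric index range to n = 0 … N−1 turns it into the generalized-Hamming form that NumPy provides directly (a sketch; the test length is mine):

```python
import numpy as np

# Equation (45) evaluated over its symmetric index range |n| <= (N-1)/2
# (a sketch; names and test length are my own). Shifting the index to
# n = 0..N-1 gives the familiar form alpha - (1-alpha)*cos(2*pi*n/(N-1)),
# which numpy's hanning/hamming implement.

def window(N, alpha):
    m = np.arange(N) - (N - 1) / 2.0          # symmetric index, |m| <= (N-1)/2
    return alpha + (1 - alpha) * np.cos(2 * np.pi * m / (N - 1))

N = 64
assert np.allclose(window(N, 0.50), np.hanning(N))   # von Hann window
assert np.allclose(window(N, 0.54), np.hamming(N))   # Hamming window
```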
Figure 9: Possible representation of a leakage-distorted peak and its window-corrected peak in the FFT domain, which have sampled maxima at different frequencies.

Zero Padding

Zero padding is a signal-processing method to extend the length of a causal signal or spectrum by appending zeros to the end. The main goal of this operation is to adjust the signal or spectrum length such that the number of samples is a power of 2. When this is accomplished, the signal can be analyzed using the FFT instead of the less efficient direct DFT. For any function f(x), zero padding can be represented by the following relationship:

Zeropadding[f(x)] = f(x)  for |x| < N/2;   0 otherwise    (46)

In addition to its usefulness with the FFT, zero padding is also often implemented as a method for spectral interpolation. In combination with the Fourier theorems, zero padding, for example in the time domain of periodic functions, yields an ideal band-limited interpolation in the frequency domain [1]. Zero padding is, however, not a method to increase spatial resolution and hence is not an oversampling method; in the case of the interferometric biosensor, oversampling could only take the form of a new camera with smaller pixels. Due to the efficiency of the FFT, its use with zero-padded signals is a practical and widely practiced method for interpolating the spectra of signals of finite duration.

In general, under ideal conditions of continuous signals, the shape of the peak indicating the frequency, or alternatively the period, of the interferogram should be Gaussian in nature. From Figure 10 it is obvious that the DFT of the unprocessed interferogram signal is a very poor approximation of this Gaussian. Since the DFT of a discrete signal in one domain is a discrete signal in the other domain, the information between the discrete points is not immediately known. This means that, except in cases of extreme coincidence, the true maximum and its corresponding spatial frequency lie between two discrete points.

Figure 10: MATLAB simulation of a signal peak of the DFT from an interferogram signal with 230 points & the theoretical representation of a continuous Fourier method.

Again, since the interferometric biosensor system is designed to be a highly sensitive measuring device, the attempt must be made to retrieve the precise period of the interferogram. From Figure 10 the estimated frequency of the peak is 0.23045, yielding a spatial period of nearly 4.34 pixels for the interferogram. After the implementation of zero padding, Figure 11 shows the result of the FFT with 2048 points. In comparison with Figure 10, the Gaussian nature of the peak is more evident. Now the estimated frequency of the peak is 0.2315, yielding a spatial period of 4.32 pixels for the interferogram.

Important to note is that the signal-to-noise ratio (SNR) of a zero-padded signal is distorted from the true signal SNR [1]. Therefore, zero padding involves a compromise between:

1. Precise determination of the period of the interferogram.
2. Precise determination of the SNR in the frequency domain.
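The peak refinement described above can be reproduced qualitatively with a short sketch (the fringe period and transform lengths are illustrative, chosen near the values in the text):

```python
import numpy as np

# Sketch of the refinement effect (values illustrative, near those in the
# text): a ~4.33-pixel fringe sampled with 230 points is transformed once at
# length 256 and once zero-padded to 2048 points; the padded spectrum samples
# the peak on a finer frequency grid and lands closer to the true period.

n = np.arange(230)
true_period = 4.33
x = 1 + np.cos(2 * np.pi * n / true_period)

def peak_period(x, nfft):
    X = np.abs(np.fft.rfft(x - x.mean(), nfft))   # remove DC before the FFT
    k = int(np.argmax(X))
    return nfft / k                               # period in pixels (p = N/f_max)

coarse = peak_period(x, 256)
fine = peak_period(x, 2048)
assert abs(fine - true_period) <= abs(coarse - true_period)
```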
Figure 11: MATLAB simulation of a signal peak of the DFT from an interferogram signal sampled with 230 points and zero-padded to 2048 points.

Some questions arise: Should zero padding be implemented at all? If yes, should it be applied only to comply with the FFT criterion, or should spectral interpolation be applied as well? Additionally, how much zero padding should be carried out? Theoretically the interpolation could be carried on indefinitely, but the price is an increased processing load and calculation time. Should 100 zeros be appended, or 1 million? The latter question has an almost immediate answer: Figure 12 shows the convergence of the period calculation to an acceptable solution for the example case in this section.

Figure 12: Solution convergence for the period of an interferogram.
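Padding to the next power of 2, as required by the FFT criterion, together with the multiplication counts quoted in section 2.4.4, can be checked with a short sketch (helper names are mine):

```python
import math
import numpy as np

# Sketch (helper names my own): pad a signal with trailing zeros so its length
# fulfills the FFT power-of-2 condition, and verify the multiplication counts
# quoted earlier for N = 100 (direct DFT: N^2; FFT: (N/2)*log2(N), with
# log2(N) rounded up to the next integer).

def next_pow2(n):
    p = 1
    while p < n:
        p *= 2
    return p

def zero_pad_for_fft(x):
    x = np.asarray(x, dtype=float)
    return np.concatenate([x, np.zeros(next_pow2(len(x)) - len(x))])

assert next_pow2(230) == 256                     # the 230-point example
assert len(zero_pad_for_fft(np.ones(230))) == 256

N = 100
assert N ** 2 == 10000                           # direct DFT multiplications
assert (N // 2) * math.ceil(math.log2(N)) == 350 # FFT multiplications
```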
Noise Averaging

The standard deviation is a statistic used as a measure of the dispersion or variation in a data distribution. Quite often in measurements these variations are considered to corrupt the data distribution (e.g. a signal) with noise; therefore, the standard deviation increases when a signal becomes noisier. It is the task of filtering in signal processing to reduce the effects of this noise by means of attenuation, elimination and other common practices, such as averaging. Two methods used in the processing of the measurement signal in the interferometric biosensor are briefly described below.

Block Averaging

This form of averaging requires the definition of a buffer size n; the sum of the values in a full buffer is divided by the buffer size. Through this, the mean value of a small section or block of a larger sequence is obtained. The calculated mean values can then form a mean-value sequence x̄k of the original sequence xi. Figure 13 illustrates this graphically.

Figure 13: Graphical representation of the averaging of a data series in blocks of size n.

Compared to other averaging methods, block averaging is more calculation intensive: a mean value can only be calculated after n measurements, and the next mean only after the next n measurements, and so on. However, this method can be useful when it is implemented to reduce the number of data points in a series, thereby saving memory if the mean-value sequence is recorded in a data file. Furthermore, it is often the case that the mean-value sequence x̄k has a lower standard deviation than the original sequence xi. This will be explored in section 4.2.4.
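The block averaging of Figure 13 can be sketched as follows (a sketch; names and the noise statistics are mine):

```python
import numpy as np

# Block averaging as in Figure 13 (a sketch; names my own): the sequence is
# cut into blocks of size n and each block is replaced by its mean, shortening
# the series and, for uncorrelated noise, lowering its standard deviation.

def block_average(x, n):
    x = np.asarray(x, dtype=float)
    m = len(x) // n                     # drop an incomplete trailing block
    return x[:m * n].reshape(m, n).mean(axis=1)

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 10000)         # simulated noisy measurement sequence
xk = block_average(x, 100)
assert len(xk) == 100
assert xk.std() < x.std()               # mean-value sequence is less noisy
```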
Moving Average Filter

In addition to, not as a replacement of, block averaging, a moving-average filter may be used. This is one technique for the recursive averaging and smoothing of a measurement sequence. The method is demonstrated in Figure 14.

Figure 14: Graphical representation of the moving average principle.

This method depends only on the last calculated average and the newest measured value. Therefore, the data buffer must be filled only once, instead of being continuously emptied and re-filled as in block averaging. Upon derivation, which is not shown here, the latest averaged value in the moving sequence is:

x̄k = x̄k−1 + (1/n)·[xk − xk−n]    (47)

where xk is the newest measured value and xk−n the value leaving the buffer.
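The recursion of Equation (47) can be sketched as follows (names are mine); each update needs only one subtraction, one division and one addition:

```python
import numpy as np
from collections import deque

# The recursive moving average of Equation (47) (a sketch; names my own):
# each new average is the previous one plus (newest sample minus the sample
# leaving the buffer) divided by n. The buffer is filled only once.

def moving_average(x, n):
    buf = deque(x[:n], maxlen=n)
    avg = sum(buf) / n                  # fill the buffer once
    out = [avg]
    for value in x[n:]:
        oldest = buf[0]                 # value about to leave the buffer
        buf.append(value)
        avg = avg + (value - oldest) / n
        out.append(avg)
    return np.array(out)

x = np.arange(10, dtype=float)
# matches the direct mean computed over each sliding window of length 4
expected = [np.mean(x[i:i + 4]) for i in range(len(x) - 3)]
assert np.allclose(moving_average(x, 4), expected)
```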
3 Interferometric Sensor System

This section provides more information about the individual hardware components and software algorithms implemented in the interferometric biosensor. Overviews, sample calculations and derivations may be provided for certain components or software algorithms, but this section will not explore the fundamental principles of how these components (e.g. lenses) or software packages (e.g. MATLAB) operate.

3.1 Detailed System Configuration

In the introduction of this thesis a system overview was provided in Figure 1 as an aid in the further understanding of the principles of operation and theory involved in the interferometric biosensor. Figure 15 illustrates the actual system construction used for the test measurements, which are presented in section 4. Not all labeled components will be discussed. Instead, several important components (e.g. double slit, optical chip), which are not visible in Figure 15, are discussed in greater detail.

Figure 15: Actual system construction.
3.1.1 Lasers & Light Sources

Over the development of the interferometric biosensor the main light source has been a helium-neon laser. Current measurements are being taken to implement a new light source, a super-luminescent diode. The choice of light source depends greatly on the optical characteristics of the optical chip, since the coupling properties are related to, e.g., the wavelength of the light source.

Helium-Neon Laser

The current light source for the interferometric biosensor is a helium-neon laser with a wavelength of 632.8 nm and a minimum power output of 2.0 mW. The laser in use is not a controlled light source, but helium-neon lasers generally have a high wavelength stability; the worst-case assumption from section 2.3.4 was a stability ∆λ/λ of 1×10⁻⁶. Furthermore, to avoid fluctuations in the output, power measurements were performed only after a warm-up time of > 30 minutes. These and other laser properties are summarized in the following table:

Minimum power: 2.33 mW
Wavelength: 632.8 nm
Beam diameter: 0.9 mm
Total length: 272 mm

Table 1: Basic parameters of the helium-neon laser used in the development of the interferometric biosensor.

Super-Luminescent Diodes (SLD)

A super-luminescent diode is neither a laser nor a laser diode, and it also differs from a conventional LED. The specifics of an SLD are beyond the scope of this report, but SLDs are generally known for their shorter coherence length compared to lasers. SLDs emit light consisting of amplified spontaneous emission, and their beam divergence is comparable to that of Fabry-Perot laser diodes. Due to their high temperature sensitivity, their operation must be temperature controlled. The shorter coherence length is a main advantage for choosing an SLD light source; others include the compactness of the light source and the avoidance of patent infringements with similar systems using a HeNe laser light source.
3.1.2 Lenses & Beam Forming

The beam forming of the interferometric biosensor is quite simple and consists of two cylindrical lenses with focal lengths of −50 mm and 200 mm respectively. The first lens (f1 = −50 mm) widens the HeNe laser beam along the vertical axis, leaving the beam comparatively narrow in the horizontal axis. The second lens (f2 = 200 mm) compensates for this widening and serves to focus the beam at its focal length. The resulting beam profile is elliptical in nature and closely resembles a line to the eye.

The main purpose of forming the beam is to better fit the beam profile to the dimensions of the grating of the optical chips. Without beam forming much of the laser light, with an original beam diameter of 0.9 mm, would not impinge upon the grating, which has a smaller width of 0.5 mm. Figure 16 shows an example beam profile after beam forming. In this case the light source was an SLD with its corresponding beam-forming optical configuration, but the underlying purpose and results would be similar for the HeNe laser.

Figure 16: Beam profile of an SLD source after beam forming. The intensity distribution is shown to be nearly Gaussian in both the vertical and horizontal axes.
3.1.3 Flow Cells

It is inside the flow cell where the sample materials first come into contact with the chip surface and the evanescent sensing field. The main properties of this component are summarized in the following table:

Material: Sylgard 170
Width: ca. 1 mm
Length: ca. 7 mm
Volume: ca. 7 µl
Flow cell separation: 1.3 mm

Table 2: Fundamental properties of the flow cells.

Sylgard 170 is a black, silicone-based fluid, which is formed and heat treated into a soft and flexible material. Of importance is the separation of the flow cells (middle to middle) of 1.3 mm; this distance d must be exactly matched by the separation of the double slits. In addition, the velocity of the material and fluid drawn through these flow cells during a measurement is 1 µl per second. Figure 17 is an illustration of an actual flow cell, as well as a simulated design of the entire flow cell in its mount.

Figure 17: A photograph of a demonstration flow cell made of transparent silicone and the computer design of the flow cell in its mount.
3.1.4 Optical Chips

The key components of the interferometric biosensor are the optical chips, since almost the entire design and development of the system is based on their implementation and characteristics. The chips are supplied by Unaxis Optics. Table 3 provides a summary of the relevant chip information. In addition, some information about a protective layer is given; this represents an alteration to the supplied optical chips, and the arguments for it are discussed in the next subsection. Figure 18 illustrates this information and serves as a reference for the description of the protective layers and their function.

Substrate: AF 45 (n = 1.52), 16 × 48 × 0.7 mm³
Waveguide: Ta2O5 (n = 2.1), thickness = 150 nm
Protective layer: SiO2 (n = 1.46), thickness ≈ 510 nm
Grating period: 320 nm (depth ≈ 12 nm)
Coupled-wave polarization: TE (parallel to grating)
Coupling angle: ≈ 3°
Penetration of the evanescent field: 27.5 nm
Power in the evanescent field: 10.6%

Table 3: Fundamental properties of the optical chips supplied by Unaxis Optics.

Figure 18: An optical chip with protective layers and the placement of the flow cell.
Protective Layers

A protective layer is an additional layer of glass (SiO2) above the region of the optical gratings. As the name implies, these layers protect the optical gratings, but the main advantage of having these glass layers is to ensure that the evanescent field never comes into contact with the silicone flow cell and, therefore, does not measure the refractive index of the silicone. Otherwise it would not be possible to ensure that the flexible silicone has the same shape, thickness and surface distribution in both flow cells (e.g. due to pressure changes); the advantage of relative measuring would be severely compromised by differing conditions in the individual flow cells. Instead, the soft silicone wraps and forms itself to the shape of the glass protective layers. The evanescent wave then encounters only the glass upon exiting the flow cells, and does so in both flow cells. Furthermore, the solid glass layer neither shifts nor alters its shape due to internal or external factors. The evanescent field is thus assumed to undergo the same change in phase velocity in both cells with regard to its interaction with the glass protective layer, which better maintains the goal of relative measuring. Figure 18 attempts to demonstrate the propagation of the light and evanescent field in reference to the position of the flow cells and protective layers.

Scattering & Spreading

Since the interferometric biosensor is based on the sensing of material on the surface of the chips, it is also important to understand the resulting influence of unwanted foreign material, such as dust and dirt, on this surface. If foreign matter were present in the beam's propagation path, the effect could be scattering along the surface, within the waveguide, as demonstrated by Figure 19. The scattered light would then propagate and meet the output coupling at many non-perpendicular angles.
Instead of a beam profile concentrated in a point, the result would resemble a bent line as the light spreads into a curved form. This could cause light of weakened intensity to enter the double slit at undesired angles, which might be detected as interference patterns at neighbouring pixel regions of the CCD camera. The results of such tests are presented in section 4.1.
Figure 19: Possible scattering effects due to foreign matter on the chip surface.

3.1.5 Double Slit

The double slit is the component which has undergone the most alteration and redesign in the interferometric biosensor, since its dimensions depend on both the flow cell and the optical chip. Figure 20 illustrates the version of the double slit used for the majority of measurements presented in this thesis.

Figure 20: A non-proportional illustration of the double slit film used both for spatial filtering (36 µm) and for inducing diffraction and interference (30 µm).

Remember that the slit separation d of 1.3 mm matches the separation of the flow cells. This is of critical importance to ensure that only light is captured which has passed through the flow cells (i.e. the sensing regions) and has therefore undergone a relative change in phase velocity. Since the width of the flow cells (ca. 1 mm) is much larger than the width of the output double slit (30 µm), some room remains for adjustments.
Also of importance is the determination of the slit width w. Using a relationship of single-slit interference, Equation (49), it is possible to approximate the size of the interference region of an interferogram. In A.1 the size of the interference region for a double slit is approximated as the width of the interferogram B minus the slit separation d. Table 4 shows the results of several sample calculations for slit widths of 30, 50 and 80 µm.

Slit width w [µm] | Period p = λD/d [µm] | Width B = 2λD/w [mm] | Overlap B − d [mm] | Contained periods (B − d)/p
30 | 48.68 | 4.22 | 2.92 | 60.0
50 | 48.68 | 2.53 | 1.23 | 25.3
80 | 48.68 | 1.58 | 0.28 | 5.8

Table 4: Sample calculations: number of periods contained in an interferogram detected at a distance D = 100 mm, separation d = 1.3 mm and λ = 632.8 nm.

It is not surprising that a smaller slit width at the output double slit results in a wider interferogram. A wider interferogram can then be sampled by more points N, and a larger N also fulfills an earlier requirement for the reduction of random noise components (section 2.3.4). In this case any slit width between 30 µm and 50 µm would be sufficient for analysis with the CCD camera at a distance of 10 cm. Furthermore, only the 30 µm slit offers an acceptable width if the distance between the slit and the CCD camera is decreased. The number of periods is important for the visual approximation of the interferogram's contrast, but this is not discussed in detail in this section.

Note: An acceptable interferogram width or overlap region is one that offers the possibility to sample with a large number of points which is also a power of 2, to fulfill the FFT requirement. For example, N = 256 with a CCD camera pixel width of 14 µm would require the interferogram to be ≈ 3.58 mm wide.

A double slit serves a dual purpose in the current design of the interferometric biosensor. The input slit has already been mentioned in section 1.2.
Since the sensing region of interest lies only within the flow cells, there is no need for coupled light in other regions of the optical chip. Such light gives rise to unwanted scattering effects, which are discussed in greater detail in section 3.1.4. Therefore, the slit width w' of the input double slit should be as small as possible, while remaining larger than the width of the output double slit (i.e. w' > 30 µm). A width of 36 µm was chosen, since it is the next largest slit width possible with the production methods of the slit shown in Figure 20.
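The sample calculations of Table 4 can be reproduced directly from the stated relationships (values from the text):

```python
# Reproduces the Table 4 sample calculations from the stated relationships:
# period p = lambda*D/d, width B = 2*lambda*D/w, overlap B - d,
# contained periods (B - d)/p. All values are taken from the text.

lam = 632.8e-9   # HeNe wavelength [m]
D = 100e-3       # slit-to-camera distance [m]
d = 1.3e-3       # slit separation [m]

p = lam * D / d                      # fringe period, the same for all widths
assert abs(p - 48.68e-6) < 0.01e-6   # 48.68 um

for w, periods_expected in [(30e-6, 60.0), (50e-6, 25.3), (80e-6, 5.8)]:
    B = 2 * lam * D / w              # interferogram width
    periods = (B - d) / p            # periods in the overlap region
    assert abs(periods - periods_expected) < 0.1
```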
The early production methods of the double slit involved the development and exposure of high-quality film (e.g. lithographic) based on vector-graphic image formats. Such processes eventually resulted in double slits of very good quality (i.e. sharp slit edges, excellent contrast and few spots in the slit regions), but such slits could not be reliably produced at the wanted dimensions (e.g. 30 µm). Therefore, the winning solution, which was quick, free and resulted in good slit quality, was high-quality laser printing onto transparent film. Now that the slit widths have proved to be sufficient for testing, future double slits will be laser cut from metal.

Figure 21 serves the multiple purpose of demonstrating the chip placement, the input slit's function, and the introduction of the main mount for the chip and the flow cell. For additional understanding the anticipated beams from the input double slit are shown, as well as their input coupling, propagation path along the waveguide, output coupling and eventual arrival at the output double slit.

3.1.6 Optical Chip Mount

The current mount for the optical chip has been designed for the quick replacement or adjustment of the optical chip between tests. Basically, the double slit film sits on small pins and is fixed against the mount surface to prevent it from shifting. The optical chip, which also sits upon the same pins, is lightly pressed against the double slit film by the soft silicone flow cell. Small braces surrounding the flow cell come into contact with the chip mount to prevent the application of excessive force against the chip.

In reference to Figure 21, the light beam enters from the left into the labeled opening, within which there is an adjustable mirror. This places the final degree of freedom almost directly in front of the chip and the optical grating.
Upon reflection the beam is filtered into two parallel beams, which are coupled into the chip waveguide after impinging upon the optical gratings. The beams propagate over the 9 mm distance which separates the optical gratings. Any diffraction resulting from the input double slit is disregarded, since this distance is so small. The light then impinges upon the output grating and is diffracted by the second double slit to form the interferogram, which is detected by a CCD camera.
Figure 21: View of the chip mount demonstrating the relative double slit and chip placement.

3.1.7 CCD Camera

The device currently used for the detection of the interferogram is a Stresing ILX 511 CCD camera. The output from CCDs, or charge-coupled devices, is a series of analog pulses which represent the intensity distribution at a series of discrete locations, or pixels [8]. Greatly simplified, the operation of a CCD is controlled by a clock, which determines how long a pixel collects the charge resulting from the intensity of an optical signal. This charge is then transferred and converted into a measurable voltage signal, often with the intent of further computer analysis. The relevant specifications for the current CCD camera are listed in the following table:

Active sensor length: 28.7 mm
Pixel area: ca. 14 × 200 µm²
Max. exposure time: ca. 6 seconds
Resolution: 12 bit (4096 levels)
Clock speed: 2.5 MHz

Table 5: Basic characteristics of the Stresing ILX 511 CCD camera with 2048 pixels.
3.1.8 Pump

The pump is of course responsible for the introduction and flow of all samples and fluids into the flow cells. In actuality, the pump does not pump the samples at all; instead the samples are drawn from their containers directly into the flow cells. Pumping would first involve the filling of the syringes, which are visible in Figure 22, and this could give rise to unwanted reactions and mixtures before the samples enter the flow cells.

The pump is programmable and various parameters, most importantly the pumping or drawing velocity, are adjustable. The chosen velocity is 1 µl per second. Since a syringe has a volume of 250 µl, one pumping or drawing cycle lasts approximately 4.2 minutes. In the current configuration the intake tubes have a diameter of 0.3 mm and the output tubes have a diameter of 0.8 mm.

As visible in Figure 22, the pump resides outside of the optical system. This helps to reduce the effects of pump vibrations on the system, and allows the pump to be operated and samples to be changed without disrupting the optical system.

Figure 22: Photograph of the system pump and several beakers of water.
3.2 System Algorithms

The theory and principal relationships required for determining the measurement signal (i.e. phase and ∆neff) have been discussed in detail in section 2. This section provides a brief overview of the software realization of the presented theory.

3.2.1 FFT-Based Measurement Algorithm

In reference to Figure 23, the signal x(i) is the intensity array from the CCD camera (i.e. the interferogram signal) for the current processing step. At this point truncation to N samples and filtering (e.g. windowing) can be implemented. With zero padding, the interferogram signal can be analyzed by the FFT algorithm. The interferometric biosensor software then automatically determines the frequency of the maximum peak, which upon conversion yields the period of the interferogram (p = N/fmax). Next, the phase value at the calculated fmax is obtained; this represents the phase of the interferogram for this processing step. With Equation (17), the value of ∆neff is obtained. Both values (phase and ∆neff) are recorded and plotted. This entire process is then repeated for every interferogram detected by the CCD camera, resulting in phase and ∆neff trends.

Figure 23: The FFT-based algorithm for the determination of ∆neff from the analysis of the interferogram x(i).
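One processing step of this algorithm can be sketched as follows (names and the synthetic test fringe are mine):

```python
import numpy as np

# One step of the FFT-based algorithm (a sketch; names and the synthetic
# fringe are my own): find the dominant spatial-frequency bin of the
# interferogram, convert it to a period via p = N/f_max, and read the phase
# of the FFT at that bin.

def analyze_interferogram(x):
    x = np.asarray(x, dtype=float)
    N = len(x)
    X = np.fft.rfft(x - x.mean())      # remove DC so argmax finds the fringe
    k_max = int(np.argmax(np.abs(X)))
    period = N / k_max                 # p = N / f_max, in pixels
    phase = float(np.angle(X[k_max]))  # interferogram phase at the peak
    return period, phase

n = np.arange(256)
x = 1 + np.cos(2 * np.pi * n / 4.0 + 0.5)   # 4-pixel period, 0.5 rad phase
period, phase = analyze_interferogram(x)
assert abs(period - 4.0) < 1e-9
assert abs(phase - 0.5) < 1e-9
```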
3.2.2 Fourier-Coefficient Correlation Algorithm

This algorithm is an extended application of the original algorithm proposed by A. Brandenburg [2]. Essentially, it is a Fourier transformation broken into several steps; overall, the algorithm is a form of the complex Fourier series from Equation (39). The actual Fourier analysis is therefore not performed with the FFT, which is a disadvantage of this algorithm. Referring to Figure 24, this should not be confused with the one-time FFT of the input signal x(i) in the calibration stage. An advantage of this algorithm, however, is the added control at each stage of the calculation.

Again the input signal is truncated and possibly filtered to obtain x(i). Note that the later steps of the algorithm require the period and envelope width of the interferogram. Therefore, in the calibration stage, x(i) is possibly zero padded and the FFT yields the period of the interferogram, exactly as in the FFT-based algorithm. If the parameters of the double slit are known (i.e. slit separation d and slit width w), then the interferogram width B can be calculated from the period p: B = 2pd/w.

Figure 24: The extended Fourier-based algorithm for the final determination of ∆n_eff.
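The relation between fringe period and envelope width is a one-line computation. In the sketch below, the slit dimensions are illustrative placeholders, not the parameters of the actual chip:

```python
# Interferogram envelope width B from the fringe period p, using
# B = 2*p*d/w (slit separation d, slit width w).  The numbers are
# illustrative placeholders, not the dimensions of the real double slit.
def envelope_width(p, d, w):
    return 2.0 * p * d / w

p = 20.0    # fringe period in pixels (from the FFT peak)
d = 200.0   # slit separation in µm (assumed)
w = 50.0    # slit width in µm (assumed)
print(envelope_width(p, d, w))  # → 160.0
```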
Also performed only once, in the calibration stage, is the generation of test signals, or basis functions, for their later correlation with the interferogram signal. The basis functions are Fourier-based (i.e. composed of cos and sin terms). The "index" parameter refers to the spatial position or pixel, i. The 1+cos term is a form of the chosen window function (e.g. von Hann). It is important to note that the implementation of a specialized window function at this point is a valid option, but not a requirement.

All subsequent steps are performed repeatedly for every new instance of x(i). First the sin and cos components are multiplied with x(i) and summed to yield the imaginary and real components of the transformed interferogram signal. The phase is found as the negative arctangent of the imaginary over the real component, and ∆n_eff is calculated with Equation (17), as in the FFT-based algorithm.

Not shown in the above algorithm is the capability to reconstruct the envelope function of the interferogram, based on the point-by-point multiplication of the shifted basis functions with the interferogram signal [2]:

S_k = ∑_{i=1}^{N} x_i s_{i−k}        C_k = ∑_{i=1}^{N} x_i c_{i−k}

The maximum of the distribution

T(k) = S_k² + C_k²

marks the maximum of the interferogram envelope.
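These correlation sums can be sketched as follows. The example builds windowed sin/cos basis functions at a known fringe period, recovers the phase from the unshifted correlation, and computes T(k) for the envelope. The record-length (1 + cos) window and the synthetic test signal are assumptions for illustration; the real basis functions are generated in the calibration stage of the measurement program.

```python
import math

def correlation_analysis(x, p):
    """Sketch of the stepwise Fourier-coefficient correlation.

    s_i and c_i are sin/cos basis functions at the known fringe period p,
    weighted by a (1 + cos) window.  The phase follows from the unshifted
    correlation as phi = -arctan(S/C); the shifted correlations give the
    envelope estimate T(k) = S_k^2 + C_k^2.
    """
    N = len(x)

    def win(i):
        # centred (1 + cos) window over the full record (an assumption here)
        return 1.0 + math.cos(2.0 * math.pi * (i - N / 2.0) / N)

    def s(i):
        return win(i) * math.sin(2.0 * math.pi * i / p)

    def c(i):
        return win(i) * math.cos(2.0 * math.pi * i / p)

    # unshifted correlation -> interferogram phase
    S0 = sum(x[i] * s(i) for i in range(N))
    C0 = sum(x[i] * c(i) for i in range(N))
    phase = -math.atan2(S0, C0)

    # shifted correlations -> envelope distribution T(k)
    T = []
    for k in range(N):
        Sk = sum(x[i] * s(i - k) for i in range(N))
        Ck = sum(x[i] * c(i - k) for i in range(N))
        T.append(Sk * Sk + Ck * Ck)
    return phase, T

# synthetic fringe pattern: period 8 pixels, phase 0.5 rad
N, p, phi0 = 64, 8.0, 0.5
x = [1.0 + math.cos(2.0 * math.pi * i / p + phi0) for i in range(N)]
phase, T = correlation_analysis(x, p)
print(round(phase, 3))  # → 0.5
```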
3.3 Software Development Environments

Based on the methods and concepts introduced in section 2.3, it would be possible to carry out measurements with simple measurement software. This, however, is not practical for a number of reasons. Firstly, the sources of distortion discussed in section 2.4.5 often disrupt the measurement signal. Secondly, the interferometric biosensor under development is designed to be a highly sensitive measuring device, and each step of the signal analysis should be optimized with the appropriate algorithms. This has required the use of sophisticated development software. The main software environments and languages used for the development of the interferometric biosensor are: LabVIEW, Visual C++ for C-programmed DLLs (Dynamic Link Libraries), and MATLAB. This section provides a brief overview of each of these environments and their role in the final software product, the interferometric measurement program.

3.3.1 LabVIEW

Laboratory Virtual Instrument Engineering Workbench (LabVIEW) is developed and distributed by National Instruments. LabVIEW is said to be programmed in G, as it is a graphical programming language. It operates on the principle of hierarchies, and each entity in a hierarchy is referred to as a Virtual Instrument, or VI. In turn, these VIs are composed of graphical elements or other VIs. Due to its graphical nature, LabVIEW uses the concept of nodes to connect each element and VI throughout the program's hierarchical structure. Each VI has two associated levels. One is the front panel, which is in simple terms the GUI with which the user interacts (i.e. graphs, buttons, controls, etc.). The second level is the block diagram, which contains all of the elements and connections that control the visible graphic elements.
On this level it is also possible to integrate C and C++ code for the execution of custom commands or intense computations beyond the capabilities of LabVIEW's pre-programmed routines. The common practice for linking LabVIEW and C is through the use of function libraries known as DLL files; however, it is LabVIEW itself that represents the backbone of the interferometric measurement program.
3.3.2 Visual C++ & Dynamic Link Libraries

Dynamic Link Libraries, or DLLs, are common to several operating systems and allow the sharing of code modules or function libraries amongst applications. DLLs are thus forms of compiled code, which are linked to an application only when the running program invokes a function call to the DLL. This offers the capability to share one DLL between many applications. Since DLLs are compiled code modules, it is not possible to debug their operation without third-party software providing a sophisticated debugging environment.

Visual C++ was chosen for its ability to establish this dynamic link between a running LabVIEW program and an associated DLL in a special Windows debugging environment. In addition, Visual C++ contains templates for general 32-bit DLL creation, as well as many expandable features, such as version control software. With all of these features, the C-based DLLs are responsible for the repetitive and intense number crunching required for the signal analysis in the interferometric biosensor.

3.3.3 MATLAB

The Matrix Laboratory environment (MATLAB) is developed and distributed by MathWorks Inc. At its essence, MATLAB is a shell-based, sophisticated matrix calculator, which can be extended through toolboxes. Toolboxes are themselves small programs or function libraries, often written in MATLAB or C. Since MATLAB is an open environment, it has grown as specialized toolboxes have been created, for example for signal processing, control systems and symbolic mathematics.

MATLAB has become a standard for the simulation of mathematical processes as well as linear and non-linear systems. Therefore, it has been possible to simulate many of the physical phenomena associated with the interferometric biosensor (i.e. interferometry, scattering, reflections, etc.) for a deeper understanding of measurement results beyond the capabilities of LabVIEW.
Furthermore, MATLAB represents an almost ideal testing environment for many signal analysis and signal processing algorithms before their implementation in LabVIEW. Thus, MATLAB has no direct role in the interferometric measurement program, but it has proved critical in researching the performance characteristics thereof.
3.4 Interferometric Measurement Program

Due to the complexity of demonstrating the LabVIEW graphical code as a complete element, all of the function-critical algorithms and additional features of the interferometric measurement program are shown in A.2 with an accompanying description. Since the LabVIEW algorithms often depend on the DLL files programmed in C, several small C examples are also shown. This section presents examples of the program screens, which are visible to and usable by the program operator, along with a description of the main features and functionality of the measurement program. Since the graphical interfaces for the FFT-based and the stepwise-Fourier algorithms are very similar, only the front-panel views of the FFT-based software are shown. Figure 25 represents the program view while a measurement is in progress.

Figure 25: The main measurement screen for the FFT-based analysis algorithm.
Labeled in Figure 25 are some highlighted functions of the interferometric measurement program, which are described below in more detail.

• Interferogram - display of the interferogram, with the added ability to record any instant of the interferogram in a separate data file. This is useful for external simulation and demonstration.

• Simultaneous measurements - an interferogram can be analyzed with varying settings (e.g. signal processing and noise averaging) for comparison. The measurement data file records the values: time step, minimum of the interferogram, maximum, middle value, contrast, laser temperature, system temperature, two voltages from photo diodes, two phase values, two converted ∆n_eff values, and the spatial frequency of the interferogram from the FFT.

• Measurement views - there are 4 measurement views (2 phase measurements and their corresponding ∆n_eff conversions), which can be viewed at varying scales.

• Noise averaging - adjustable block-averaging and smoothing factors for the overall noise reduction of the measurement signal.

• Parameter capturing - system parameter inputs (left) are recorded in a separate data file associated with the recorded measurement data.

The second main screen of the interferometric measurement program is the display of the magnitude and phase response from the FFT of the interferogram. This is the source of the program's ability to provide 2 independent phase measurements, since 2 separate and pseudo-parallel FFT algorithms are performed on the chosen regions of the interferogram.

In principle, the user must supply the program with the first pixel defining the beginning of the interferogram (e.g. from Figure 25 a possible input would be pixel 950). The user then inputs the length of the interferogram region (e.g. 350), which places the last pixel at 1300. The FFT would therefore be performed on this region (950 to 1300).
The rest of the analysis is performed automatically, without additional user input. Figure 26 demonstrates the analysis of the interferogram in Figure 25.
Figure 26: The magnitude and phase response from the FFT of the interferogram in Figure 25.
4 Measurements & Results

Now that the most relevant theory, techniques and algorithms have been introduced, it is possible to present the results and to discuss their benefits and successes. All measurements have been carried out with the Helium-Neon laser light source at the wavelength 632.8 nm. This section presents the results from system measurements (i.e. optical chip characteristics), signal processing of the interferogram (i.e. windowing, zero padding), and noise analysis. Finally, test measurements with glycerin are presented at the end of the section as a demonstration of the capabilities of the interferometric biosensor.

4.1 System & Chip Measurements

During the initial development stages many components and devices underwent thorough testing and measurement. As the key component, the optical chips also underwent such testing. This section presents the observations and measurements of the coupling efficiency and scattering properties of the optical chips.

4.1.1 Coupling Efficiency

In the early construction of the interferometric biosensor, the addition of a protective layer (glass, SiO2) over each of the optical gratings was considered. In addition to the factors presented in section 3.1.4, one of the main deciding factors for their implementation was the comparison of the coupling efficiency of the chip with and without these protective layers. Therefore, the corresponding power measurements were carried out; Figure 27 illustrates the measurement locations and quantities.

The results from Table 6 are quite easy to interpret. Since the protective layer has a higher refractive index than air, the sum of the direct reflections (R1 + R2 + ... + Rn) is greater. This also explains the smaller transmitted power T, since some portion of the newly reflected light is coupled into the excited mode, appearing as the output powers A1 and A2.
A1 is the output power of interest, since it is this light that will eventually be detected by the CCD camera. Despite amounting to only 3% of the total power, even this must often be attenuated by a factor of 10-1000, so that the camera does not saturate. The conclusion is that the coupling efficiency of the chips is sufficient, even with the additional protective layers.
Figure 27: Input and output coupling of the laser power at various locations for chips with and without an additional protective layer (glass, SiO2).

                      Without Protective Layer    With Protective Layer
                      % Laser Power P0            % Laser Power P0
P0                    2.33 mW                     2.33 mW
R1 + R2 + ... + Rn    60.1%                       62.0%
T                     31.2%                       30.1%
A1                    2.6%                        3.0%
A2                    2.9%                        3.5%

Table 6: Percent of total laser power at various input- and output-coupling locations from a Helium-Neon laser, wavelength 632.8 nm, polarization TE, input-coupling angle ≈ 3.4°.

4.1.2 Scattering & Spreading Measurements

In response to the concerns expressed in section 3.1.4, steps were taken to observe and record the scattering and spreading of the coupled beam. If scattering is in fact due to the interaction of the coupled beam with foreign material on the chip surface, then the following should hold:

• An optical chip with good coupling characteristics and without curvature spreading of the output beam could be forced to show signs of spreading if foreign material were intentionally placed on the chip surface.

• If the scattering of light on the chip surface is a localized effect, then there should be propagation paths where no or less foreign material is encountered. The result should be the elimination or reduction of the output spreading.