Review and development of techniques for studying cellular biophysics with high frequency ultrasound
Michael Alastair Butler
University of the West of Scotland
Thesis for the degree of PhD in Physics
“I am enough of an artist to draw freely upon my imagination. Imagination is more important than knowledge. For knowledge is limited, whereas imagination embraces the entire world, stimulating progress, giving birth to evolution.”
--Albert Einstein
“The air was full of music. So full it seemed there was room for nothing else. And each particle of air seemed to have its own music, so that as Richard moved his head he heard a new and different music, though the new and different music fitted quite perfectly with the music that lay beside it in the air...”
--Douglas Adams, Dirk Gently’s Holistic Detective Agency
“To aim towards truth, to do science, one must reach beyond concepts, coloured by our perception; one must reach towards existence and reality.”
--D’Arcy Wentworth Thompson, On Growth and Form
Preface
“If the doors of perception were cleansed, every thing would appear to man as it is, infinite. For man has closed himself up, till he sees all things thro’ narrow chinks of his cavern.”
--William Blake, The Marriage of Heaven and Hell
“...in reasoning of all other things, he that takes up conclusions on the trust of Authors, and doth not fetch them from the first Items in every Reckoning, (which are the significations of names settled by definitions), loses his labour; and does not know any thing; but onely beleeveth.
... they that have no science, are in better, and nobler condition... than men, that by mis-reasoning, or by trusting them that reason wrong, fall upon false and absurd generall rules.”
--Thomas Hobbes, Leviathan
Although this work does indeed rest on the shoulders of many authors and will inevitably
involve mis-reasoning, I hope the years of labour involved in its production have been
worthwhile. Indeed, perhaps through my relative ignorance of many of the subjects covered, I may have gained a new perspective that escapes the self-aggregation of science into narrow chinks. I hope this work will be stimulating for any reader, whether from
a physical, biological or non-scientific background.
Even if, as Hobbes states, “imagination and memory, are but one thing”, a type of “decaying sense”, imagination has clearly benefitted many authors. Thus, I cannot overstate the influence of music and a wider knowledge of the natural and creative world on this work; something I count as a strength even if it appears asinine to many within science.
Indeed, one obvious criticism of this work is that it is too eclectic; however, its purpose is not to provide strict rules for study in bioacoustics at the cellular level, but merely to firmly guide future work towards a wider perspective.
It would not have been possible without the support of family, friends and other researchers; its completion will perhaps reduce their burdens more than my own. However, greater value will lie in its utility to those who, like myself, are seeking answers to some of the most difficult and poignant questions facing natural philosophers (or scientists, if you will) and indeed artists: What is life? What is energy? What is matter? We do not, and may never, fully know, and few answers will be found here. However, this work will have fulfilled its potential if it is at all helpful to those seeking knowledge on paths less trodden.
Abstract
Biological cells have the ability to sense a wide variety of stimuli. Research suggests cells
are able to sense light, touch and even, to a certain extent, smell. This work investigates
whether they can sense sound.
Recent research has found that acoustic-frequency interactions and oscillations can be
present in cellular systems. However, relative to normal cellular activity, these oscillations
are slow and involve complex, large scale cellular and multi-cellular structures. Cellular
responses and development are controlled by biochemical interactions, involving much
faster dynamics. This thesis provides a basis for exploring the existence and potential
significance of fast, or high frequency, acoustic waves in cellular biology. It describes theory
and methods available for the study of molecular bio-acoustics, using cytoskeletal protein
dynamics as a model system.
Chapter 1 reviews the scientific foundation for understanding acoustic interactions with
cellular systems, particularly at high frequencies. The molecular basis of modern biology
and classical physics of acoustic waves are introduced, before detailing more advanced
theories of bulk material and chemical vibration. Finally, a framework for understanding
bioacoustics at the cellular level is presented and gaps in knowledge highlighted.
Chapter 2 reviews materials and methods that may be used for generating and sensing
high frequency acoustic waves. This includes transducer design, high frequency electrical
system design as well as techniques for characterising acoustic performance.
Chapter 3 describes experimental work investigating the use of thin film AlN and thin layers
of LiNbO3 crystal for producing broadband, high frequency transducers.
Chapter 4 presents acoustic testing of the transducers up to 140 MHz using high frequency
generation and acquisition equipment. This chapter also introduces preliminary work
designing a broadband acoustic spectrometry system for studying the effect of high
frequency sound on in vitro actin polymerisation.
Finally, Chapter 5 discusses the merits of the theoretical and experimental work presented and suggests experimental work that may be beneficial for further advancing the subject.
Table of Contents
Part 1: Background and theory
Chapter 1. Biophysical theory
1.1 “Life’s grandeur”: Molecular cell biology
1.1.1 Cellular overview
1.1.2 The cytoplasm - Water, ions, gases and small molecules
1.1.2.1 Water
1.1.2.2 Ions
1.1.2.3 Gases & ATP
1.1.2.4 Lipids
1.1.2.5 Cytoplasm
1.1.3 Macromolecules
1.1.3.1 DNA/RNA
1.1.3.2 Proteins
1.1.4 Macromolecular structures
1.1.4.1 Extra-Cellular Matrix (ECM)
1.1.4.2 Cytoskeleton and motor proteins
1.1.4.3 Membrane proteins
1.1.5 Organelles & Cellular systems
1.1.6 Development, Physiology & Systems
1.1.7 Conclusions
1.2 A sound perspective
1.2.1 Introduction to classical Acoustics
1.2.1.1 Frequency, wavelength and speed of sound
1.2.1.2 Types of acoustic wave
1.2.1.3 Multiple waves: Interference, phase and superposition
1.2.1.4 Confinement of waves
1.2.2 Acoustic Signalling in time
1.2.2.1 Signals and Noise
1.2.2.2 Signals: Acoustic and digital quantification
1.2.2.3 Signals: Fourier transform
1.2.2.4 Signals: Waveforms
1.2.3 Acoustic Signalling in Space
1.2.3.1 Near- and far-field effects
1.2.3.2 Material properties and transmission of sound
1.2.3.3 Reflection & refraction
1.2.3.4 Acoustic Impedance
1.2.3.5 Attenuation and absorption
1.2.4 High frequency ultrasound: Phonons
1.2.4.1 Properties of phonons
1.2.4.2 Phonons in complex systems
1.2.5 High frequency ultrasound: Molecular vibration
1.2.5.1 Molecular translation, rotation and vibration
1.2.5.2 Atomic wavefunctions: electronic vibrations
1.2.5.3 Vibration in single molecules (covalent bonds)
1.2.5.4 Single molecule acoustic and EM spectroscopy
1.2.5.5 Intermolecular interactions and phase state (non-covalent bonds)
1.2.5.6 Chemical reactions (breaking covalent bonds)
1.3 Molecular bioacoustics
1.3.1 An overview of bioacoustics
1.3.2 Mechanosensory Cell biology
1.3.3 Quantised processes in biology
1.4 What is missing?
1.4.1 Molecular bioacoustics
1.4.1.1 Acoustic amplitude and intensity
1.4.1.2 Acoustic frequency and phase
1.4.1.3 Temporal and spatial biology
1.4.1.4 Acoustic absorption and emission
1.4.2 Biophysics
1.4.2.1 Entropy
1.4.2.2 Microfluidics
1.4.2.3 Infrasound and gravitation
1.4.2.4 Bio-electromagnetism
1.4.2.5 Systems biology
1.4.3 Next steps: experimental work in this project
Chapter 2. Techniques and experimental design
2.1 Acoustic signal generation & sensing
2.1.1 Electro-, magneto- and thermo-mechanical devices
2.2 Materials and techniques for the production of ultrasound
2.2.1 Piezoelectric crystals, ceramics and thin films
2.2.2 Other techniques
2.2.3 Laser Ultrasound
2.2.4 Phononic devices
2.2.5 Techniques used in this work
2.3 Transducer design
2.3.1 Transducer properties
2.3.2 Transducer layering
2.3.3 Transducer patterning: Microtechnology, nanotechnology and Integrated Circuits
2.3.4 High frequency design for bulk transducers
2.3.4.1 Frequency
2.3.4.2 Substrate use
2.4 Experimental design
2.4.1 Design for an ultrasonic transducer for biophysical applications
2.4.2 Techniques for manufacture and characterisation of devices
2.4.3 Characterisation and testing
2.5 High frequency signal generation
2.5.1 Noise from non-acoustic sources
2.6 Conclusions
Part 2: Experimental work
Chapter 3. High frequency device development
3.1 Introduction
3.2 AlN device manufacture
3.2.1 Development of NaCl and Al substrates
3.2.1.1 NaCl: cleaving
3.2.1.2 NaCl: evaporation
3.2.1.3 Al: evaporation
3.2.1.4 AlN deposition onto NaCl/Al
3.2.2 Development of Al foil substrates
3.2.2.1 Characterisation of Al foil substrates
3.2.2.2 Al foil device re-development
3.2.3 Carbon Steel substrate
3.2.3.1 Characterisation of carbon steel substrates
3.2.4 Development of silicon monoxide patterning
3.2.5 Conclusion of AlN development
3.3 Lithium Niobate device development
3.3.1 Transducer manufacture
3.3.2 Characterisation of Lithium Niobate transducers
3.4 Conclusions
Chapter 4. High frequency signal generation and acquisition
4.1 Signal generation and acquisition
4.1.1 Low frequency generation and acquisition
4.1.2 Options for high frequency generation and acquisition
4.2 High frequency System development
4.2.1 Continuous wave set-up
4.2.2 Pulse-echo set-up (unsynchronised)
4.2.3 Pulse-echo set-up (synchronised)
4.3 AlN device HF results
4.3.1 High Frequency: pulse-echo
4.3.2 Mid to High frequency (70 MHz): LiNbO3 Generation and AlN acquisition
4.3.3 Mid to High frequency (75 MHz): AlN Generation and LiNbO3 acquisition
4.4 LiNbO3 device HF results
4.4.1 High Frequency: pulse-echo
4.4.2 Mid to High frequency (80 MHz): L121 to L147
4.4.3 High frequency (140 MHz): L121 to L147
4.5 Additional work
4.5.1 Design for a high frequency ultrasound spectrometer
4.5.2 Design for actin polymerisation experiments
4.5.3 Design for electrolysis catalyst experiments
4.5.4 Design for other…
4.6 Discussion
Part 3: Discussion, Further work and conclusion
Chapter 5. Discussion and conclusions
5.1 Discussion of results
5.2 Future work
5.2.1 High frequency transducer design
5.2.2 Chemical applications
5.2.3 Biological applications
5.2.4 Medical applications
5.2.5 Physical applications
5.3 Conclusions
Part 1: Background and theory
Introduction
Following the Greek tradition, the 6th Century Roman philosopher Boethius split reality into three types of music: musica mundana, the music of bodies within astronomy; musica instrumentalis, the music of inanimate objects such as musical instruments; and musica humana, the music that drives all living things1. Indeed, music has historically been understood in a sense much broader than the purely cultural and social, one that encompassed all natural philosophy, known now simply as science.
The primary motivation for this research is to find out whether living things truly do contain a
kind of βmusicβ or, more precisely, complex interactions of acoustic signals. Recent
research in biophysics has provided the groundwork for a mechanical understanding of cell
biology, but there is still a need to research spatial and temporal aspects of such interactions;
in other words, where mechanical interactions occur and whether they involve rhythm or
frequency.
The paragraph above is perhaps an extreme example of how widely differing perspectives
(in this case philosophy and biophysics) often have a similar motivation or conceptual basis,
but are rarely discussed together within specialist literature. This Part of the thesis has
attempted to distill a number of disciplines into a single viewpoint, which could be pithily
described as molecular bioacoustics; in other words the role of acoustics in molecular
biology. This has been achieved primarily through the elucidation of technical language
(sections 1.1 and 1.2), subsequent re-integration of divergent perspectives of the same
phenomena (section 1.3) and description of practical methods to investigate the
phenomena (chapter 2).
Much interdisciplinary work attempts to integrate disciplines by working from a core
1 Book 1, Chapter 1 of (Boethius 520).
discipline, for example physics, and extending a project’s reach to another discipline. This method can avoid the criticism of “shallow knowledge”2. However, it will miss integral connections between disciplines, create duplication of research and, more worryingly, entirely ignore metaphorical “elephants in the room”, i.e. fundamental gaps in knowledge between disciplines. Molecular acoustics is one of the areas that has been largely ignored within modern physics and chemistry, whilst non-chemical physical processes have been largely ignored by molecular biology, with notable exceptions in both disciplines3,4,5,6.
This work tries to avoid these issues by providing a comprehensive theoretical overview (in the words of Hobbes7, to “fetch… from the first items of every Reckoning”) from biology to
physical chemistry, with particular focus upon molecular aspects of acoustics. Fundamental
gaps in knowledge are then addressed in the final section of Chapter 1 (section 1.4).
This is followed by a broad look in Chapter 2 at the applied techniques currently available to
engineer acoustic devices for the time and length scales of cellular biology. It details
practical methods to manufacture a device able to produce and receive high frequency
acoustic waves, including potential methods to control and measure high frequency signals
with electronic generation and acquisition equipment.
2 P66 of (Winkel, Ketsopoulou, and Churchouse 2015)
3 (Matheson 1971)
4 (Leighton 2007)
5 (Franze et al. 2011)
6 (Pelling and Horton 2008)
7 (Hobbes 1651)
Chapter 1. Biophysical theory
“There is grandeur in this view of life, with its several powers, having been originally breathed into a few forms or into one; and that, whilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being, evolved.”
--Charles Darwin, On the Origin of Species
This chapter is divided into four sections:
Section 1.1 introduces molecular cell biology by way of structure and size in biochemistry.
Section 1.2 introduces classical and quantum concepts of acoustics, with particular
reference to spatial and temporal interaction of sound waves and chemical vibration.
Section 1.3 introduces bioacoustics…
Section 1.4 introduces gaps in knowledge…
1.1 “Life’s grandeur”8: Molecular cell biology
This section introduces some basic concepts of molecular cell biology. The key ideas to be taken from this section are the chemical nature of biological processes and the innate evolved intelligence present at the cellular level. In other words, “life” is evident from chemistry up, particularly in the functional organisation of cellular systems. This brief overview cannot encompass the diversity and dynamics of cellular processes (see 9,10,11 for more information), but should provide context for the more advanced concepts introduced later.
8 This paraphrase of Darwin’s quote is taken from the title of (Gould 1996).
9 (Alberts, Johnson, and Lewis 2002)
10 (Wolpert 2001)
In this section, important cellular structures and processes are introduced based on size.
We will see in sections 1.2 and 1.3 how both spatial and temporal understanding of these
processes is important for comprehension of biophysical interactions.
Figure 1: Scale diagram of biological structures against frequency of an equivalent acoustic wavelength in a generalised liquid (c = 1000 m s-1). Note that these acoustic frequencies are above the range of human hearing. Adapted from (Alberts, Johnson, and Lewis 2002).
11 (Kandel, Schwartz, and Jessell 2000)
Figure 2: Generalised diagram of a cell, focussing upon structural elements (not to scale), reproduced from (H. Herrmann et al. 2007). Acronyms are Microfilaments (MF), Microtubules (MT), Intermediate filaments (IF), Nuclear Pore Complex (NPC), Inner/Outer Nuclear Membrane (I/ONM), Endoplasmic Reticulum (ER) and Microtubule-organising centre (MTOC).
1.1.1 Cellular overview
A biological cell is a tiny, very complex, self-reproducing life form (Figure 1). Indeed, single
cells are generally seen as the smallest living organisms, able to sense and influence their
environments with adaptive behaviours. The cell is the fundamental building block of the
biological sciences; living matter is almost universally based upon the reproduction,
development, patterning and programmed death of cells. For example, the fertilised egg - a
single cell - contains the instructions and resources to create an entirely new organism,
which can itself reproduce. In the words of the novelist Samuel Butler12
: βA hen is only an
eggβs way of making another eggβ.
12 From the introduction to developmental biology in (Wolpert 2001)
The cell is composed of a surrounding lipid membrane within which a number of specialised
organelles interact through an aqueous solution of molecules such as proteins, RNA and
DNA (Figure 2). Assemblies of these molecules provide a means for sensing on the cellular
surface and dynamic pathways within the cell, including systems for controlling energy,
movement, memory and the production of new structures.
These adaptive behaviours have arisen as cells have been diversifying, through internal and environmental pressures, at the molecular level for billions of years13. They are inherently complex, non-linear and work far from equilibrium; unsurprisingly, we still have only a partial understanding of their evolution and even fewer clues about their origin14.
Cell biologists have engaged with this problem of complexity by studying the molecular
basis for cellular behaviour. Therefore an understanding of the characteristics and
interactions of key molecules provides the framework for modern cellular biology.
1.1.2 The cytoplasm - Water, ions, gases and small molecules
13 (Alberts, Johnson, and Lewis 2002)
14(Grill and JΓΌlicher 2009)
Figure 3: Water (top), ATP (middle) and DPPC, a type of lipid (bottom). Hydrogen (white), nitrogen (blue), carbon (cyan), oxygen (red) and phosphorus (gold) atoms are labelled; squares in the grid are 10 Å (1 nm).
1.1.2.1 Water
Water is perhaps the most important molecule for life, providing the environment for many cellular interactions, as well as “lubrication” and a physical “tool” for larger molecules15. It is also surprisingly complicated at the molecular level, with various unusual properties. For this reason, our understanding of aqueous interactions at the cellular level is still far from complete16.
As a dipolar molecule, water can respond to charged and dipolar structures. For example, moving a charged object, such as an electrostatically charged rod, towards a thin stream of water causes the stream to be deflected. For this reason, ions or molecules with charged side groups are described as hydrophilic, or “water-liking”; conversely, apolar molecules, such as lipids, are hydrophobic. Indeed, in osmoregulation these static charges cause water to flow from areas of low to high ionic concentration.
Water also creates a dynamic network of hydrogen bonds, with a hydrogen atom sitting between the oxygen atoms of two H2O molecules, i.e. H2O···H-OH. Hydrogen bonds are strong compared to charge interactions, producing effects such as surface tension and the clustering of water molecules17. There are a number of comprehensive reviews of water which can provide more detailed information18.
15 (Hunt et al. 2007)
16 See review by (Ball 2008)
.
1.1.2.2 Ions
Figure 4: Example of the Hodgkin-Huxley description of nerve pulses, showing ionic flow across the nerve membrane. Reproduced from (Lodish, Berk, and Zipursky 2000).
Ions are small charged molecules or single atoms, such as Na+, K+, OH- and Cl-, which provide the electrochemical basis for cellular interactions. Dynamic interrelations of
17 (M. Chaplin 2006)
18 See http://www1.lsbu.ac.uk/water/water_structure_science.html for extensive references
these charges are important to a number of biological processes, from protein folding and binding at the cellular level to neural firing. For example, if we take the Hodgkin-Huxley model of nerve pulses (Figure 4), mass movement of charged ions across a lipid membrane creates a difference in voltage, which can be modelled as an electrical circuit19. The energy for this reaction is stored by selectively pumping ions to create a large electrochemical difference across the membrane.
It should be noted that ionic processes such as neural firing involve polar molecules rather than free electrons; in other words, they are electro-mechanical. Indeed, few well-characterised chemical processes are purely electrical or purely mechanical; most involve a combination of the two20.
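The circuit analogy above can be made concrete with a minimal sketch: a passive membrane patch treated as a resistor and capacitor in parallel, a much-simplified relative of the full Hodgkin-Huxley model (no voltage-gated channels). All parameter values here are illustrative assumptions, not measured data:

```python
# Passive membrane patch as an RC circuit:
#   C dV/dt = -(V - E_rest)/R + I_inj
# Parameters are illustrative assumptions for a small membrane patch.

C = 1e-9         # membrane capacitance, farads (~1 nF)
R = 1e7          # membrane resistance, ohms
E_REST = -0.070  # resting potential, volts
I_INJ = 2e-9     # constant injected current, amps

def simulate(t_end=0.1, dt=1e-5):
    """Forward-Euler integration of the membrane voltage."""
    v = E_REST
    trace = []
    for _ in range(int(t_end / dt)):
        dv = (-(v - E_REST) / R + I_INJ) / C
        v += dv * dt
        trace.append(v)
    return trace

trace = simulate()
# Voltage relaxes with time constant RC = 10 ms towards
# E_rest + I*R = -0.070 V + 0.020 V = -0.050 V.
print(f"final V = {trace[-1] * 1000:.1f} mV")
```

The exponential relaxation towards a new steady-state voltage is the simplest electrical behaviour of a membrane; the full Hodgkin-Huxley model adds voltage-dependent conductances on top of this to produce the pulse shown in Figure 4.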
1.1.2.3 Gases & ATP
Gases and small molecules, such as NO, O2, CO2 and CH4, also provide important
metabolic and signalling functions, including oxidation and reduction to ions. In particular,
ATP (adenosine tri-phosphate) and other metabolites are involved in the chemical storage
of energy for use in reactions. Finally, reactions are modified within different cellular
compartments through the active regulation of acid and base concentration.
1.1.2.4 Lipids
19 This model is powerful, but potentially incorrect (Andersen, Jackson, and Heimburg 2009)
20 (Giraud 2003)
Figure 5: Simulations of the critical melting point of lipid bilayers from gel (red) to liquid (green) with different concentrations of anaesthetic present (top) at different temperatures, experimentally validated by Heimburg (2007). Reprinted from Wodzinska et al. (2009).
Lipids provide the basic elements of cellular membranes (lipid bilayer and nuclear bilayer),
vesicles (small bubbles of membrane) as well as metabolic and signalling functions. There
are a diverse range of lipid structures, which are either hydrophobic or amphiphilic (a
combination of hydrophobic and hydrophilic structures). Although generally seen as a
container for active processes within the cell, work has shown that lipids and bilayers are
dynamically involved in many important processes. For example, the lipid composition of
lipid bilayers can affect protein activity including signalling, catalysis and spatial
positioning21.
In the cell membrane, clusters of proteins within and attached to the membrane, known as “lipid rafts”, can have important cellular functions. Blicher et al. have proposed that bilayers create ion channels, with similar properties to protein channels22; this mechanism involves
the physical critical point between the solid and liquid states of the lipid membrane. The
control of critical points with biochemical signals (see Figure 5) could be important for a
number of biological processes. This is particularly true for mechanical and acoustic
signals, where transduction is greatly affected by the state of the carrying material.
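The gel-liquid melting behaviour underlying Figure 5 can be roughly captured with a two-state (van ’t Hoff) model. The enthalpy and melting temperature below are assumed, DPPC-like values for illustration only; real bilayer transitions are much sharper than this single-molecule picture because melting is cooperative:

```python
# Two-state (van 't Hoff) sketch of a lipid gel-fluid transition.
# dH and Tm are rough, assumed values for a DPPC-like lipid.

import math

R_GAS = 8.314  # gas constant, J/(mol K)
DH = 35000.0   # transition enthalpy, J/mol (assumed)
TM = 314.0     # melting temperature, K (~41 C, assumed)

def fluid_fraction(temp_k: float) -> float:
    """Fraction of lipids in the fluid state at a given temperature."""
    k_eq = math.exp(-(DH / R_GAS) * (1.0 / temp_k - 1.0 / TM))
    return k_eq / (1.0 + k_eq)

for t in (304.0, 314.0, 324.0):
    print(f"T = {t:.0f} K: fluid fraction = {fluid_fraction(t):.3f}")
```

Anaesthetics (as in Figure 5) shift the effective Tm, moving the membrane closer to or further from its critical point at physiological temperature; in this sketch that corresponds simply to changing TM.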
1.1.2.5 Cytoplasm
In the traditional biochemical perspective, the cell cytoplasm is an aqueous buffer (enclosed
by a lipid membrane) of water, small molecules and ions through which larger molecules
diffuse to reaction sites, or in some circumstances are carried by cytoskeletal motors. More
recent viewpoints suggest that active cell elements can be spatially aggregated together
with specific architectures; for example in focal adhesions on the inner surface of the
membrane, lipid “rafts” within the membrane or internal positioning of microtubule-organising centres (MTOCs). Water and small molecules have been found to play a key role within larger cellular structures, not just as a buffer but also as integral reactants, i.e. a “lubricant of life”23. In other words, we are finding greater structure within the cytoplasm and a more
interactive role for small molecules.
One of the questions of modern biophysics is to understand the extent of this role. For example, some authors suggest that the natural clustering of water, its adaptability to surfaces and structuring within the cytoplasm could combine to create long-distance, self-organising interactions24. Although unverified25, this would, if correct, have important implications for mechanical and chemical interactions within the cell. On the other hand,
diffusion in water continues to provide a strong general theory for biochemistry.
21(Scott and Pawson 2009)
22(Blicher et al. 2009)
23(Ball 2008)
24(Shepherd 2006)
25(Persson and Halle 2008)
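The strength of the diffusion picture is easy to illustrate with an order-of-magnitude estimate from the Stokes-Einstein relation, D = kT/(6πηr). The protein radius below is an assumed, typical value, and the viscosity is that of pure water (cytoplasm is effectively more viscous):

```python
# Order-of-magnitude check on diffusion as a transport mechanism:
# Stokes-Einstein diffusion coefficient, then the mean 3-D diffusion
# time over a distance x, t ~ x^2 / (6 D).

import math

K_B = 1.380649e-23  # Boltzmann constant, J/K
T = 310.0           # body temperature, K
ETA = 1e-3          # viscosity of water, Pa s (cytoplasm is higher)
R_PROTEIN = 3e-9    # hydrodynamic radius of a small protein, m (assumed)

def diffusion_coefficient(radius_m: float) -> float:
    """Stokes-Einstein: D = kT / (6 pi eta r)."""
    return K_B * T / (6.0 * math.pi * ETA * radius_m)

def diffusion_time(distance_m: float, d: float) -> float:
    """Mean time to diffuse a given distance in 3-D."""
    return distance_m ** 2 / (6.0 * d)

d = diffusion_coefficient(R_PROTEIN)
t_cell = diffusion_time(10e-6, d)  # across a ~10 um cell
print(f"D = {d:.2e} m^2/s, time across cell ~ {t_cell:.2f} s")
```

A small protein crosses a 10 µm cell by diffusion alone in a fraction of a second, which is why diffusion remains a strong default model; the quadratic scaling with distance, however, is also why larger cells need active, cytoskeletal transport.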
It is clear that the dynamics of molecules will be influenced to some extent by cytoplasmic organisation. Indeed, although the in vitro (i.e. in water) experiments described in this thesis, which solvate molecules derived from cells in pure water solutions, provide useful models, they may not predict valid in vivo (i.e. in life) dynamics26.
1.1.3 Macromolecules
Although the cytoplasm and lipid membrane will provide the environment for acoustic
transmission, the molecules likely to be involved in mechanical sensing or to have clear
acoustic properties are macromolecules, due to their structural and communicative
functions. These “large” molecules include RNA, DNA and proteins, all of which have
complex, polymeric structures. They provide the structure and bulk apparatus for complex
reactions within the cell, including genetic, metabolic and signalling processes.
1.1.3.1 DNA/RNA
26 (Spiller et al. 2010)
Figure 6: Structural examples of DNA (human, PDB:1BNA) and RNA (synthetic, PDB:2N0R)
DNA and RNA provide the physical instructions for producing all the complex structures of living matter. Both can be used to replicate new copies of their structure, for instance to provide genetic material for cell division, as well as physically coding for specific amino acids, which are combined to produce proteins. Both molecules are polymeric
structures based upon four nucleic acid monomers, known as bases. Scaled examples of
the structures are shown in Figure 6.
Figure 7: Basic overview of DNA, RNA and protein interactions, with examples of cytoskeletal and cell membrane protein functions. Sections collated from (Alberts, Johnson, and Lewis 2002), (Scita and Di Fiore 2010) and (Guck 2010).
Generally, in eukaryotic cells, DNA is confined within the cell nucleus. During transcription,
the double helical strands of DNA are separated at a particular section and messenger RNA
(mRNA) is constructed along the base sequence of the DNA section. The mRNA is then
modified, through cutting and splicing, into a structure with a specific function, typically
providing the blueprint for a protein. The mRNA is then transported from the nucleus to
another area of the cell for translation of the sequence into a protein structure; specific
sequences are included in the mRNA to direct transport as necessary. Translation can occur in the designated area of the cell (e.g. spines of neurons) or the resultant proteins can be transported elsewhere for use (Figure 7).
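The transcription and translation steps described above can be sketched computationally. This is a deliberately simplified picture (no splicing, no ribosome machinery); the codon table is only a small fragment of the standard genetic code, and the sequence is an arbitrary example:

```python
# DNA template strand -> mRNA (complementary bases, T replaced by U),
# then mRNA codons -> amino acids, stopping at a stop codon.

CODON_TABLE = {  # small fragment of the standard genetic code
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly",
    "AAA": "Lys", "UAA": "STOP",
}

def transcribe(dna_template: str) -> str:
    """Build mRNA complementary to a DNA template strand."""
    pairs = {"A": "U", "T": "A", "G": "C", "C": "G"}
    return "".join(pairs[base] for base in dna_template)

def translate(mrna: str) -> list:
    """Read codons in triplets until a stop codon is reached."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE[mrna[i:i + 3]]
        if residue == "STOP":
            break
        protein.append(residue)
    return protein

mrna = transcribe("TACAAACCGTTTATT")
print(mrna, "->", translate(mrna))
```

The one-way flow shown here, sequence to sequence to structure, is the “central dogma” summarised in Figure 7; the mechanical properties of the resulting proteins are what the rest of this chapter builds towards.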
DNA is found within the cell nucleus. This is surrounded by a dual lipid bilayer, with a lumen
between. Within the nucleus are various compartments, separated by protein structures
and chromatin proteins, around which wind strands of DNA.
As we shall see in section 1.3, mechanical forces can be transmitted throughout the cell
through cytoplasmic- and membrane-spanning proteins, whilst the regular structure of
molecules, such as the DNA helix, can give rise to specific mechanical vibrations. The
transmission of mechanical vibrations is likely to also be important for interactions with RNA and DNA, as well as for the protein interactions focused upon in this thesis.
1.1.3.2 Proteins
Proteins are constructed from amino acids, the order of which is coded by the order of bases in strands of mRNA, as described above; this order is the primary structure of a protein. Proteins have a number of motifs created by specific combinations of amino acids, including α-helices and β-sheets. These secondary structures link together through hydrogen bonds and stronger disulfide bonds to create a complex, three-dimensional tertiary structure. The chemical and physical, or conformational, properties of these tertiary
structures, influence the dynamic interactions of proteins. Finally many proteins combine
together to create larger quaternary structures, which are described in section 1.1.4. It is
the mechanical properties of the tertiary and quaternary structures that are the focus of this
project and will be described in more detail in section 1.3.
1.1.4 Macromolecular structures
Proteins and other macromolecules, such as DNA and RNA, are often combined within the
cell to create larger quaternary structures. These structures can provide a function (e.g.
ribosomes) or a pathway for other interactions (e.g. signalling), including cascades,
feedback loops and changes in functional properties. To add to the complexity, many
molecules have multiple modes of interaction, non-specialised functions and are structurally
diverse27. Studying the effect of these biochemical modifications is a major branch of
modern biology, as it underlies all cellular and physiological processes.
The chemical interactions themselves depend upon macromolecular structure and bonding
as well as environmental conditions (e.g. cytoplasm and small molecules). This can involve
both mechanical and electrical energy transfer. The interest of this work is to examine the
extent of mechanical influence on certain protein structures.
27 (Kung 2005)
Some structures are likely to be of particular importance for the transfer of mechanical
signals, such as the Extra-Cellular Matrix and the (internal) cytoskeleton of the cell. An
overview of a number of different protein structures, taken from the Protein Data Bank, can
be seen in Appendix 1.
1.1.4.1 Extra-Cellular Matrix (ECM)
As seen in Figure 2 the ECM is an external structure to which a cell can bind through
adhesion proteins such as integrins and hemidesmosomes. ECM can provide a means for
cellular support and communication as well as compartmentalisation of tissue28. It is
composed of polymeric proteins and other molecules. The ECM is an important component
in mechanosensory processes (section 1.3.2), since it can provide mechanical information
about tissue state.
28 (Hynes 2009)
Figure 8: Size, characteristics and in vivo examples of cytoskeletal proteins. Reproduced from (Alberts, Johnson, and
Lewis 2002).
1.1.4.2 Cytoskeleton and motor proteins
Most mechanical interactions are believed to involve the cellular cytoskeleton. This
structural component gives support, allows dynamic movements within and at the surface of
the cell and is intrinsically involved in spatial and mechanical intra-cellular communication.
For example, cell motility in amoebae is primarily based upon a disassembly and reorganisation of the actin cytoskeleton29.
It consists of three main polymeric protein structures: microtubules, actin and intermediate
filaments. Each of these is built and disassembled from monomeric units (see Appendix 1
for a visualisation of microtubules and actin). Information on the structure of cytoskeletal
proteins is shown in Figure 8.
Three main types of molecular motor are used to move objects within the cell: myosin,
kinesin and dynein. Myosin moves along actin filaments, whilst kinesin and dynein move
along microtubules, all powered by ATP. The structure of myosin can be seen in Appendix
1. These proteins provide transportation within the cell and the transfer of power externally,
e.g. in the contraction of muscles. The tension provided by these molecules across
cytoskeletal and ECM filaments will be particularly important for understanding transmission
of mechanical signals30.
1.1.4.3 Membrane proteins
As can be seen in Figure 2 there are a number of protein structures that are embedded in
the lipid membrane surrounding cells. These membrane proteins or protein structures
include adhesion molecules, e.g. cadherins which help link cells together, receptors, which
pick up the presence of signalling or other molecules, and channels, which provide a means
to actively or passively transport molecules across the membrane. Many are depicted in the
Protein Data Bank visualisation in Appendix 1.
29 (Miao et al. 2003)
30 (Ganz et al. 2006)
Membrane proteins are therefore very important for cellular communication, including
mechanical transduction of signals. Indeed, signalling pathway proteins, which include
membrane proteins, are an extremely diverse part of the molecular machinery found in cell
biology. Chemical signalling molecules, or ligands, typically bind with receptor proteins,
causing a change in the proteins to allow the next steps in the signalling pathway.
1.1.5 Organelles & Cellular systems
The building blocks described in the previous sections are formed into a number of
organelles that provide specialized functions for cells. For example, mitochondria are the
primary organelles for energy production (e.g. ATP), the endoplasmic reticulum provides the
primary sites for protein translation, whilst the Golgi apparatus is involved in lipid transport
and vesicle (small lipid structures) production.
These cellular systems are dynamic and use energy from ATP to produce specific
conditions. For example, many systems pump ions and molecules across their
membranes to create a difference in pH, pressure or electrical potential, as we
have seen for the change in potential in neuronal axons in Figure 4. The same is true for vesicles
and larger organelles, which can require specific conditions. Thus, biological systems (and
indeed other larger systems that involve structured energy input) very often involve
processes occurring away from equilibrium.
Together the organelles, cytoskeleton and macromolecules provide a means for more
complex behaviours. These behaviours, founded upon molecular signalling pathways,
include:
β’ Mitosis/Meiosis β i.e. cell division
β’ Apoptosis β Controlled cell death
β’ Molecular memory β Structural or dynamic methods for retention of information
β’ Cell differentiation β Structural or chemical reorganisation of a cell for a specific
function, e.g. axon development in neurons or cilia on inner hair cells. Cell
differentiation is typically directed by protein pathways that reference genetic (and
epigenetic31) information found within the nucleus.
Due to the complexity of the underlying molecular pathways, these behaviours are not only
non-linear (as described in the introduction), but adaptive. In other words, it is particularly
important to comprehend that cellular and sub-cellular (i.e. biomolecular) systems are "living
matter" and will typically exhibit responses produced by a complex network of multi-modal
structures.
1.1.6 Development, Physiology & Systems
Cell differentiation is an integral part of developmental biology. This area of biology
describes the interaction and patterning of individual cells to create multi-cellular structures
and organisms, although there is overlap with processes used by single-celled organisms32.
In development, chemical and mechanical intra-cellular communication causes cell
differentiation, with specific structures of cells formed by spatial patterning of signalling
molecules (morphogens) and the plasticity of the cell (i.e. ability to further differentiate). For
example, a stem cell has high plasticity and can (hypothetically) be differentiated into any
cell type, whilst a neural crest cell already has identity as part of the ectoderm (the skin and
nervous system), but can be further differentiated into a skin cell or neuron.
Further patterning and differentiation produces the muscles, nervous system, organs and
skeletal anatomy of multi-cellular organisms, all directed by cellular communication and
signalling. Of particular note is the importance of mechanical signals in development33, as
well as the use of molecular oscillations and circadian rhythms for the timing of
developmental processes34,35.
Of course the biology and development of multi-cellular systems is strongly coupled with
environmental and evolutionary influences. For example, full development of the auditory
31 (Jones et al. 2008)
32 (Branda et al. 2005)
33 (Hoffman, Grashoff, and Schwartz 2011)
34 (Kruse and JΓΌlicher 2005a)
35 (Hasty, Hoffmann, and Golden 2010)
cortex is only accomplished after environmental stimulation36 (similar results are found in
other areas of the nervous system). Evolutionary similarities between organisms can allow
study of complex processes in simpler or more accessible multi-cellular systems. These
model systems, such as E. coli (a bacterium), Saccharomyces cerevisiae (baker's yeast),
Drosophila melanogaster (a fly) or C. elegans (a worm) provide well-characterised systems
for research on new processes.
1.1.7 Conclusions
This extremely brief overview of important biological concepts cannot hope to cover the
breadth and complexity of the biological fundamentals upon which any investigation in
physical biology should be based. By necessity, attention has been focussed upon theory
relevant to work on acoustics in cellular biology. Greater detail on fundamental concepts
can be found in many of the books and articles within the bibliography, including cell
biology37, developmental biology38, neuroscience39 and evolutionary biology40.
As we have seen, it is subtle differences at the cellular level, both internally and externally,
that drive development but also provide for diversity and adaptivity. Hence there is a wide
literature elucidating the biology of different organisms, as well as databases cataloguing
organism-specific genetics, proteomics, developmental biology and physiology. The wish to
combine specialised research in molecular biology into more system-wide models has
resulted in the recent research drive for pathway models and systems biology, which hope
to benefit from better holistic biophysical models.
The primary aim of this chapter was to illustrate how biological processes are driven from
the chemical level up through various levels of mechanisms; much successful research in
biology therefore takes into account both micro-scale (i.e. chemical and cellular) and
macro-scale (i.e. developmental and environmental) pressures.
36 (Mann and Kelley 2011)
37 (Alberts, Johnson, and Lewis 2002)
38 (Wolpert 2001)
39 (Kandel, Schwartz, and Jessell 2000)
40 (Ridley 2003)
The chemical control of cellular development by the formation of spatial and temporal
patterns, for example chemical gradients and molecular cycles, is well characterised.
Section 1.3 will describe analogous mechanical processes being studied in
mechanobiology, with examples of micro- and macro-scale effects.
Figure 9: (Opposite page) Scale diagram showing wavelength vs. frequency, for acoustic waves (top, purple) with a
speed of sound of 2000 m/s, an idealised average for condensed matter, and EM waves (bottom, light purple), including
examples of systems of a similar size to the wavelength.
1.2 A sound perspective
Sound covers a universal range of frequencies. "In space, no-one can hear you scream",
since acoustic waves cannot usually travel in a perfect vacuum41, but sound does have
important implications in astronomy. Where matter is less diffuse, for example around black
holes and stars, we can observe acoustic waves, which in turn can provide information about
the system's dynamics42,43. At the other end of the scale, we can measure the nuclear
vibrations of molecules44.
Our usual understanding of sound, auditory sound, lies somewhere between these two
extremes on the spectrum (see Figure 9). Sound at a smaller scale and higher frequency
than auditory sound is known as ultrasound; conversely, larger and slower sound is known
as infrasound. Indeed, sound around black holes is so large and repeats so slowly that it lies
another 10 orders of magnitude beyond the long-wavelength end of Figure 9, at about
10¹⁵ m and 1 fHz45.
Section 1.2.1 will focus upon basic classical acoustic theory, which can be used to describe
sound in most situations, including many of the examples given above. A description of
acoustic signals, with relation to time and frequency, will be introduced in section 1.2.2.
Finally, 1.2.3 will touch upon the spatial complexities of acoustic signal transduction in
distinct materials and environments.
Much like the absorption and emission of light in electromagnetics, classical acoustics
begins to break down when trying to describe its more subtle interactions with materials.
Indeed, at the molecular level the basic concepts of classical mechanics are incorrect46 for
the fundamental interactions of sound as well as light.
41 (Altfeder, Voevodin, and Roy 2010)
42 (Fabian et al. 2006)
43 (W. J. Chaplin et al. 2011)
44 (Feuerstein et al. 2007)
45 (Fabian et al. 2006)
46 (Atkins and de Paula 2010)
Specialisms which take into account the intricacies of different molecular and chemical
systems are introduced in the following sections. The area of phononics is introduced in
section 1.2.4, providing a method for understanding acoustic waves in very structured
molecular systems. Section 1.2.5 introduces physical chemical theories of molecular
vibration, which provide a model for acoustic vibration within more diffuse molecular
systems.
However, many questions still remain unanswered about acoustics in molecular systems.
Further research will be required to provide a unified theory of acoustics that can take
account of dynamics at this level in all states of matter. As suggested in section 1.2.6, this
will necessitate integration of physical and chemical theories such that both solid- and
liquid-state dynamics are understood from a quantum electrodynamic perspective.
1.2.1 Introduction to classical Acoustics
Sound is a mechanical wave repeated at a given frequency. Generally we think of sound in
terms of pressure waves, which travel through a medium such as air or water. Sound
waves therefore rely upon the dynamics of the material through which they travel. These
dynamics are described by the particle motion of the material (a particle in this case is a
physical approximation of an area of space and does not directly correspond to a chemical
structure).
It should be clarified at the outset that pressure is a scalar quantity, in other words it is a
single value at a single point, with no directional information. For example, a barometer
detects a specific magnitude of pressure but not its motion. Particle motion, on the other
hand, is a vector quantity describing the displacement, velocity and acceleration of the
particle in space and time. These two concepts combine to fully describe a classical sound
wave and, in a simple system, one can be calculated from the other.
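For the simplest such system, a plane travelling wave, the conversion between the two quantities is given by the textbook relation p = ρcv, where ρc is the specific acoustic impedance of the medium. The short Python sketch below illustrates this; the water values are approximate and the plane-wave assumption is a simplification, not a description of the devices used in this work.

```python
def particle_velocity(pressure, density, sound_speed):
    """Particle velocity (m/s) for a plane travelling wave: pressure and
    particle velocity are linked by the specific acoustic impedance
    Z = rho * c, so v = p / Z. Valid only for this idealised case."""
    return pressure / (density * sound_speed)

# Approximate values for water: rho ~ 1000 kg/m^3, c ~ 1484 m/s.
# A 1 kPa pressure amplitude then corresponds to ~0.67 mm/s of particle motion.
v = particle_velocity(1000.0, 1000.0, 1484.0)
print(v)
```

The tiny particle velocities produced by quite large pressures reflect the high acoustic impedance of water compared with air.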
These concepts of classical acoustics provide models for sound waves based upon the bulk
or continuum material properties of a system; in other words, the atomic structure,
chemistry and molecular dynamics of a system are approximated into bulk material
properties (with clear interfaces) and bulk forces, whether these are scalar or vector. Thus,
these concepts can be used for any situation, from the sea to a car engine.
For most engineering problems within this work, an approximation of properties is more
than sufficient. For example, transduction and pulse-echo of high frequency signals
through water can be, in the main, understood in the purely classical terms described in this
section. In addition, the experimental techniques used in this work generate and measure
pressure rather than particle motion.
However, it should be noted that the primary focus of this work is biological applications. As
we have seen in section 1.1, biological processes involve the dynamic inter-relation of
complex, chemical systems. These can be far from equilibrium and have developed to react
directly to physical changes. Specific interactions of our (classically-described) devices with
such a system are unlikely to be easily approximated to a continuum, or at least will only
provide shallow insights if only examined from the perspective of bulk forces. Uncovering
the subtleties of bio-physical interactions will require an understanding of the molecular
dynamics of sound, from the motions of individual molecules to pressure waves in larger
molecular structure.
Therefore, although sections 1.2.1 to 1.2.3 introduce sound from the perspective of
classical pressure waves, a biophysical description requires understanding of the physical-
chemical basis for sound transduction. This will be introduced in section 1.2.4.
1.2.1.1 Frequency, wavelength and speed of sound
Waves are produced by objects which are oscillating; these structural vibrations also have
frequency and wavelength which, as we shall see later, can be described in terms of
waves.
Frequency, given in Hertz (Hz), describes the number of times a pressure wave or
oscillation repeats in one second. For example, sea waves may break at about 0.5 Hz, i.e.
one wave every two seconds, whilst the air column in a flute can vibrate 1100 times in
a second, producing the note C♯ at 1.1 kHz.
The distance between the peak of each repeated wave is known as the wavelength. For
instance, the distance from the peak of an ocean wave to the next peak (e.g. Figure
10a) is a single wavelength. The height of the waves is known as the amplitude, with
higher peaks (and deeper troughs) having higher amplitudes.
This amplitude can be measured from peak to trough (peak to peak) or, if there is a central
reference (for example, 0), from this reference to the peak (0 to peak).
Wavelength is integrally related to the frequency and speed of the waves; waves travelling
at a set wavelength and speed will align to a particular frequency.
Thus, as seen in Figure 10, ocean waves with a 10 m wavelength, travelling at 10 m/s
(metres per second), would always break at 1 Hz. This is captured in the equation:
c = νλ Equation 1
where c is the speed of sound (m/s) - or in this case, the speed of ocean waves - ν is
frequency (Hz) and λ is wavelength (m).
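Equation 1 can be checked numerically; a minimal Python sketch (the helper function names are illustrative, not from any library):

```python
def frequency(speed, wavelength_m):
    """Equation 1 rearranged: nu = c / lambda (Hz)."""
    return speed / wavelength_m

def wavelength(speed, frequency_hz):
    """Equation 1 rearranged: lambda = c / nu (m)."""
    return speed / frequency_hz

# Ocean waves from Figure 10: 10 m wavelength at 10 m/s break at 1 Hz.
print(frequency(10.0, 10.0))                  # 1.0
# The flute example: c = 340 m/s in air at 1.1 kHz gives a ~0.31 m wavelength.
print(round(wavelength(340.0, 1100.0), 3))    # 0.309
```

The same two lines apply unchanged at the other end of the spectrum, e.g. for GHz ultrasound in condensed matter.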
Figure 10: Example of ocean waves with a wavelength of (a) 10 m and (b) 5 m and corresponding frequencies.
Figure 11: An engine running at the same revolutions will have a shorter wavelength in air than water, but will
arrive at the ear at the same frequency (i.e. sound the same) due to the difference in the speed of sound of each
medium.
At the same speed of sound, c, shorter wavelengths equate to higher frequencies, longer
wavelengths to lower frequencies. From the examples given earlier, the high frequency
vibrations of nuclei have extremely short wavelengths, whilst low frequency waves around
black holes have extremely long wavelengths. Figure 9 shows the acoustic frequencies for a
spectrum of wavelengths at the set speed of c = 2000 m/s, an idealised figure for c within
condensed systems such as living matter.
The relation may be simpler to understand in auditory sound, where c = 340 m/s in air at
15 °C. For example (Figure 11), an accelerating engine has an increasingly higher hum,
which equates to faster revolutions and a shorter wavelength. If the engine were run at the
same frequency in water, where c = 1484 m/s, the same revolutions would equate to
longer wavelengths. As the sound waves reach your ear more quickly in water, you hear
these longer wavelengths as the same high hum.
Although the relation between acoustic wavelength and the size of biological systems is far
from simple (see sections 1.3 & 1.4), in general higher frequencies will be needed to access
the short wavelengths of molecular and cellular vibrations.
1.2.1.2 Types of acoustic wave
Figure 12: Snapshots of a longitudinal wave (top) and transverse wave (bottom). Both waves travel from left to right.
However, in the top figure the pressure change (the compression and expansion of the particles) is left to right whilst in the
lower figure it is up and down (i.e. perpendicular)
We have used sea waves as a visual example of the basic properties of sound. However,
sea waves actually combine features of the two main classes of wave: longitudinal and
transverse waves.
Longitudinal waves are also known as compression or bulk waves. These are the type of
wave one usually associates with sound, since auditory sound travels as a longitudinal
wave through air; indeed transverse waves cannot usually travel through a gas. As in Figure
12, longitudinal waves travel in the same plane as the pressure change, creating a
compression wave through the medium.
Transverse waves are also known as shear waves. These waves travel in a direction
perpendicular to their direction of vibration, as in Figure 12. Thus the direction of vibration
can change, for example one could send a wave through a skipping rope upwards or
sideways, and so these waves can be polarised. Waves in the vertical plane are known as
SV-waves (Shear Vertical), waves in the horizontal as SH-waves. Transverse waves are
generally found in solid systems, but can travel in liquids, with attenuation dependent upon
the frequency, viscosity and shear modulus47.
There are also a number of more complicated wave motions that combine aspects of both
transverse and longitudinal waves. For example, water waves travel on the surface of a
liquid, Rayleigh or Surface Acoustic Waves (SAWs) on the surface of a solid, Lamb waves
in a solid plate and Love waves in layered solids. Different types of wave can also have
different transmission speeds. For example, in earthquakes, shear waves (the "Secondary"
or S-waves) arrive after compression waves (the "Primary" or P-waves), with Rayleigh
surface waves arriving later still.
Due to the heterogeneous structures and conditions found at the cellular level, it is likely
that different structures would be more responsive to different wave types. For example,
lipid layers would be ideal for transverse and potentially Lamb waves in the correct
conditions, whilst longitudinal waves seem more likely to transmit through the cytoplasm.
1.2.1.3 Multiple waves: Interference, phase and superposition
47 See (Joseph, Riccius, and Arney 1986).
Figure 13: Two waves (top) are combined (middle) producing a superposition (bottom). The blue wave is slightly higher
frequency than the red, creating constructive interference (large peaks) when in phase (0Β°) and destructive interference
(towards 0 between the peaks) when out of phase (180Β°). The pulsing frequency of the large peaks is the beat pattern.
When more than one wave is generated within a system, the waves interact with each other
in a particular set of ways. Firstly, for waves of the same frequency, the interactions are
based upon the alignment of the waves, known as the phase of the waves.
Waves that are in-phase have their peaks and troughs at the same time, producing a
combined wave with increased amplitude (Figure 13). This is known as constructive
interference.
Conversely waves that are out-of-phase (or anti-phase) have peaks and troughs at different
times, combining to create a wave with a smaller amplitude, known as destructive
interference. The level of destructive interference depends upon how far out-of-phase the
waves are aligned.
The alignment of the waves is often described by comparing the period of a single
wavelength, in other words a single cycle of the repeated wave motion, to the degrees or
radians of a circle. At 0° or 0 radians, the waves will be in-phase, creating a wave with a
combined, larger amplitude. This is also true for waves that are one or more full cycles
ahead (e.g. any multiple of 360° or 2π radians). At 180° or π radians, the waves are fully
out-of-phase and combine to produce a wave with no amplitude, in other words a flat line.
At angles between these extremes, combined waves have intermediate amplitudes and
shift in time to one side.
Waves with different, but close, frequencies will continuously shift in phase relative to one
another, cycling between constructive and destructive interference. Over a long period, these
phase changes will produce a lower frequency repeated beat. The frequency of this beat
can be calculated by subtracting the frequency of one wave from the other:
ν_beat = ν_1 - ν_2 Equation 2
Note that a beat is equivalent to a specific frequency, and can produce tonal as well as
rhythmic effects. The combination of a number of such frequencies, known as
superposition, can produce distinct beat patterns. Indeed, superposition of frequencies can
create a single high-amplitude point in time, or conversely be used to break down a single
point into many constituent frequencies (Figure 14). This is the basis for understanding
wave-particle duality in both photons and quasi-particles, such as phonons (Section 1.2.4).
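The beat relation in Equation 2 follows from the standard sum-to-product trigonometric identity, which a short NumPy sketch can verify numerically (the 440/444 Hz pair and the sample rate are arbitrary choices for illustration):

```python
import numpy as np

f1, f2 = 444.0, 440.0                      # two close frequencies (Hz)
beat = abs(f1 - f2)                        # Equation 2: nu_beat = nu_1 - nu_2

t = np.arange(0, 1.0, 1.0 / 44100)         # one second, sampled at 44.1 kHz
combined = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# Sum-to-product identity: the superposition equals a carrier at the mean
# frequency (here 442 Hz) multiplied by a slow amplitude envelope, whose
# loudness pulses at the 4 Hz beat frequency.
carrier = np.sin(2 * np.pi * (f1 + f2) / 2 * t)
envelope = 2 * np.cos(np.pi * (f1 - f2) * t)
assert np.allclose(combined, envelope * carrier)
print(beat)                                # 4.0
```

The assertion confirms that the two close tones and the carrier-plus-envelope picture are the same waveform, sample for sample.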
Figure 14: Superposition of a number of (infinite) waves with different amplitudes and frequencies to produce a single
point (or particle), or conversely the breakdown of the constituent frequencies of the single point. Reproduced from
(University of Reading 2015).
This is also the cornerstone of the Fourier transform (Section 1.2.2.3), an important tool for
acoustics that breaks a signal down into its fundamental frequencies. Using Fourier
analysis, frequencies with high amplitudes can be extracted from a waveform that at first
appears to be noise (Section 1.2.2.1), much like picking out a single conversation in a noisy
room.
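As an illustrative sketch of this "conversation in a noisy room" idea, using NumPy's FFT (the 50 Hz tone, the noise level and the seed are arbitrary choices, not values from this work):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000                                   # sample rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)             # two seconds of signal
tone = np.sin(2 * np.pi * 50.0 * t)         # a hidden 50 Hz component
noisy = tone + rng.normal(scale=3.0, size=t.size)

# Fourier analysis: although the time trace looks like pure noise (the
# noise amplitude is three times the tone's), the 50 Hz component stands
# out as the strongest peak in the amplitude spectrum.
spectrum = np.abs(np.fft.rfft(noisy))
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
peak_hz = freqs[np.argmax(spectrum)]
print(peak_hz)                              # 50.0
```

The tone's energy is concentrated into a single frequency bin while the noise is spread across all of them, which is why the peak survives such a poor signal-to-noise ratio.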
1.2.1.4 Confinement of waves
Figure 15: Example of standing waves, with the fundamental frequency (top) and three harmonics. Transverse waves of
the same frequency travelling in opposite directions from each end of the string create peaks and nodes at points where
respectively in-phase and out-of-phase peaks crossover.
So far we have only considered freely travelling waves; in other words, waves unaffected by
the system or material through which they travel. Material properties become particularly
important when we consider waves confined in a structure; depending upon the context,
these can be described as vibrations, standing waves, modes or resonances.
The confinement of waves can be understood in terms of the interference of a travelling
transverse wave in a string-like structure, reflecting off a fixed boundary. If waves are being
excited at a set frequency from the left end of a string (e.g. top of Figure 15), the waves will
travel down the string and reflect off the fixed end. Since the reflection point is fixed, this will
invert the wave (i.e. the amplitude is required to go through the zero-point at the end,
resulting in all energy "bouncing" in the opposite direction), meaning that the two waves will
be fully out of phase (i.e. at zero) at the end of the string. As the reflected wave continues to
travel back along the string, it will move back into phase. For a set frequency, the point at
which the two waves are fully back in-phase will always be at the same point along the
string (e.g. in the middle of the top of Figure 15).
The combined wave produced will vibrate (up and down in this example) at the in-phase
points (the anti-nodes) and be close to zero at the out-of-phase points (the nodes). In other
words, the two travelling waves create a superposition with a stationary pattern, emerging
from the fixed end.
In this example we have not considered any reflections from the excitation end of the string;
it is assumed (unrealistically) that there are no reflections. In such a case, the position of the
nodes and anti-nodes of the wave are only dependent upon the frequency of excitation and
could be found at any point along the string. In other words, the structure could potentially
vibrate with any frequency.
However, if you fix both ends of the string, waves will be reflected from both ends, creating
the same interference patterns at both ends. Frequencies that produce asymmetrical
superpositions (e.g. with an anti-node to one side) will destructively interfere with
themselves; in other words, they will be quickly damped. However, symmetrical patterns (i.e.
with integer multiples of half wavelengths matching the length of the string, e.g. Figure 15)
will not destructively interfere and last for much longer time periods. In other words the size
and the shape of the structure determines what frequencies are allowed. Examples of such
standing waves or resonances are shown in Figure 15.
Take the taut "A" string (a musical note at ~440 Hz) between the scroll and bridge of a violin
as an example system. The fundamental frequency of the string (i.e. the "A") is the lowest
frequency standing wave possible, with a single anti-node between the two fixed ends of
the string (top of Figure 15). Typically the fundamental (f0) requires the least energy to be
excited and is therefore the dominant resonance. The next three possible standing waves are
also shown in Figure 15; these are called harmonics of the system. In this example they are
linear (with 2, 3 and 4 anti-nodes), and have frequencies that are integer multiples of the
fundamental.
Many musical instruments rely upon harmonics to allow the production of different notes.
For instance the A string of a violin has linear harmonics, with the first harmonic at 2f0 (an A
one octave up), the second harmonic at 3f0 (the next E), the third at 4f0 (the next A) and so on (Figure
16). On the other hand, the structure of a clarinet only creates odd harmonics (1f0, 3f0, 5f0,
7f0 etc.). However most instruments change frequency by shifting the resonant frequency of
the fundamental, for example by changing the length, thickness or tension of the string. To
put this into a biological perspective, an actin fibre of a certain length and tension would
have a specific fundamental frequency; changing the tension or length would modify the
frequency.
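These ideal-string relations can be sketched in a few lines of Python. The length and mass-per-length below are hypothetical, chosen only so that the fundamental lands on the 440 Hz "A"; they are not measured violin parameters.

```python
import math

def string_frequencies(length, tension, mass_per_length, n_max=4):
    """Standing-wave frequencies of an ideal string fixed at both ends:
    f_n = (n / 2L) * sqrt(T / mu), i.e. integer multiples of the
    fundamental f0."""
    wave_speed = math.sqrt(tension / mass_per_length)   # transverse wave speed
    f0 = wave_speed / (2.0 * length)
    return [n * f0 for n in range(1, n_max + 1)]

L, mu = 0.325, 0.6e-3            # hypothetical string: 32.5 cm, 0.6 g/m
T = (2 * L * 440.0) ** 2 * mu    # tension that tunes the fundamental to 440 Hz
print([round(f) for f in string_frequencies(L, T, mu)])
# [440, 880, 1320, 1760]: the linear harmonics of the 'A' string
```

Note that the frequency scales with the square root of the tension, so, as for the actin-fibre analogy above, changing tension or length shifts the whole harmonic series.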
More complex structures, such as the bodies of musical instruments or speakers, can be
designed to transmit sound across a wide range of frequencies. Loudspeakers allow even
reproduction of sound by avoiding resonance at distinct frequencies; in other words they
have a flatter amplitude response across a range of frequencies (Figure 17 bottom).
Acoustic musical instruments however require multiple resonances to aid radiation of
sound at specific frequencies (Figure 17 top). For a bio-molecular example, a protein
receptor may be tuned to respond to specific vibrational frequency of a ligand or,
conversely, to respond equally to ligands vibrating over a wide range of frequencies.
Figure 16: Fundamental and harmonics of a violin string, showing amplitude (y-axis) against frequency (x-axis).
The fundamental is the highest amplitude response, with linear harmonics above. Reproduced from (Gough 2000).
Figure 17: Acoustic spectra (i.e. amplitude vs. frequency plots) of 4 famous violins (top) and a Bose loudspeaker (bottom).
Reproduced from (Schleske 2015) and (Bose 2015).
It is important to note the number of factors that can impact the emission of a system's
fundamental frequency, including the speed of sound itself (Figure 11). Figure 9 shows the
relation of acoustic wavelength to the size of biological systems; for example, a 2 GHz
acoustic wave has a wavelength of a similar size to micron-sized structures. However, the
measurable resonances of these structures will depend upon the systemβs form, tension,
molecular mass and environment. For example, the wavelength of the lowest note produced
by most musical instruments is many times longer than the instrument itself48. As
we shall see in Chapter 2, it is possible to estimate the resonant frequency of a simple
structure (such as an un-damped transducer), but more complex structures require
additional modelling.
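For the simple un-damped plate mentioned above, the estimate reduces to a half-wavelength condition: the fundamental thickness resonance occurs when the plate is half a wavelength thick. The numbers in this sketch (a 4000 m/s sound speed and a 1 μm film) are purely illustrative, not measured transducer values.

```python
def thickness_mode_f0(sound_speed, thickness):
    """Fundamental thickness resonance of an ideal un-damped plate:
    resonance occurs when thickness = lambda / 2, so f0 = c / (2 * d)."""
    return sound_speed / (2.0 * thickness)

# Hypothetical piezoelectric film: c = 4000 m/s, 1 micron thick.
f0 = thickness_mode_f0(4000.0, 1.0e-6)
print(f0 / 1e9)    # 2.0 (GHz)
```

This is why micron-scale films are the natural route to the GHz frequencies whose wavelengths match sub-cellular structures; real transducers, with damping and backing layers, shift away from this ideal figure.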
1.2.2 Acoustic Signalling in time
So far, we have only described the interactions of waves in general terms. Sound is
48 Based upon the size and typical lowest frequency of a violin, bass, flute and timpani.
quantified more precisely in terms of its amplitude over a period of time and its dynamic
movements in space. This section will focus upon changes in time, which are typically
plotted using a time-amplitude graph (e.g. Figure 18), whilst the next section will focus upon
spatial dynamics.
1.2.2.1 Signals and Noise
The aim of this project is to design devices to gauge the impact of nanoscale acoustic
stimuli upon biological systems; thus the acoustic signals must contain certain
characteristics which can affect these systems. In other words, the signals must
communicate some information to biological structures. Acoustic signals can contain a
number of features which may be important for biological response; the focus of this project
was to investigate the impact of one of these features, frequency.
So far, we have described idealised acoustic systems, where structures and waves are
tuned to specific frequencies. However, many objects and acoustic waves are not formed
with specific acoustic properties. Structures or acoustic waves without clear resonant
features typically have spectra made up of a broad range (or broadband) of sound
frequencies, much like white light is a combination of light across the spectrum of colour
(the spectrum of the loudspeaker in Figure 17 would be an example). Many frequencies add
up to a band of sound that is loosely called noise, but is really a complicated
combination of many, often harmonically unrelated, frequencies. In other words, noise is a
set of patterns in time produced through the combination (i.e. superposition and
interference) of multiple frequencies.
There are a number of types of idealised statistical descriptions of "noise", involving distinct
spreads of sound across a broad band of frequencies. However, in practice, a pure "noise"
is very unlikely except in linearly stochastic systems (e.g. electrical noise). Most acoustically
excited structures or waves will have some harmonic (i.e. clear, single frequency) features.
For example, flow through a tidal channel produces noise similar to idealised "Brownian"
noise, but has sharp and broad peaks at various points across the spectrum49.
Since a signal can also be made up of multiple frequencies, whether a sound is defined as
"noise" or a signal is very dependent upon the context in which one views it. Methods that
quantify how a signal changes in time provide a means to analyse the frequency content of
a signal (Section 1.2.2.2). Harmonic features of emitted signals can then be uncovered
using Fourier analysis (Section 1.2.2.3) whilst structural vibration can be modelled using
normal mode and molecular dynamic analyses (see section 1.2.4). In this way, particular
harmonic features of a noisy signal can be extracted and linked to a specific source.
1.2.2.2 Signals: Acoustic and digital quantification
Producing clear acoustic signals generally requires working between two different types of
signals in time, a continuous mechanical, analog signal and a sampled, digital signal
(Figure 18). Acoustic signals are by definition periodic, in that the period of a wavelength
must be repeated. For example, a single wavelength of sound would produce a solitary
wave (or soliton) rather than an acoustic wave50.
Both continuous and sampled signals can be analysed. Continuous signal analysis is
conducted in real-time (i.e. directly on the signal itself) and is therefore primarily useful if the
content of the signal is well known or the method of analysis can be determined prior to
measurement or during measurement. Sampled data on the other hand can be analysed
after measurement, allowing more time to characterise a signal and greater flexibility to
conduct additional study. However, digital sampling comes with a number of limitations,
which will be detailed in this section.
49 (Urick 1983)
50From superposition theory, there may, however, be higher frequency acoustic content
within the soliton itself.
Figure 18: Sampling a real continuous 11Hz acoustic wave (red) at 10Hz (black dots) would create a 1Hz artefact (blue).
The x-axis shows a sample count (in time for the red line) and the y-axis shows amplitude.
Figure 19: Examples of different phased sinusoidal signals (dotted lines) all at the Nyquist frequency (fs/2) of the
sampling rate (fs, shown as circles). The red signal is in-phase with the sampling time and will be measured correctly,
whilst the other two signals are measured with a reduced amplitude (green) and zero amplitude (blue).
Digital signals are always based upon a specific period, known as the sample rate (or fs).
For example, the typical sample rate of a CD is 44.1 kHz, meaning that there is a separation
of ~22.7μs (i.e. the division of a second into 44,100 discrete points) between each sample
(e.g. the black dots in Figure 18).
This inherent period provides a maximum frequency that can be received or transmitted by
this discrete representation. For example, at least two points are needed to measure a
wave changing in time (e.g. the circles on the red wave in Figure 19), thus the highest
frequency is half the sampling frequency, known as the Nyquist frequency (fs/2 - 22.05 kHz
in the case of a CD).
However, close to the Nyquist frequency, changes in phase can create reductions in
amplitude (Figure 19), whilst changes in frequency can create frequency artefacts (Figure
18). Clear frequencies above this frequency can create false signals below it, analogous to
the beat phenomenon discussed earlier (Figure 18). This is known as aliasing in signal
processing, and thus it is important to include anti-aliasing low-pass filters, which only
allow frequencies below the system's Nyquist frequency to be sampled. Clear frequencies
just below the Nyquist frequency will also lose some amplitude/frequency information.
Therefore, equipment should be chosen to have a sample rate greater than twice the
frequency of interest. This becomes particularly important in Chapter 3, where acoustic
testing is restricted to ~140MHz by equipment with a sampling rate of 300 MHz.
1.2.2.3 Signals: Fourier transform
Fourier transform (FT) is the primary method for analysing the amplitude of a signal to find
frequency components or, vice versa, producing an amplitude signal from known frequency
components. The process is typically described as a transform from the time-domain to the
frequency-domain. In other words it takes time-amplitude data (e.g. Figure 18) and
produces frequency-amplitude data (e.g. Figure 16 and Figure 17). Tonal signals will be
seen as clear amplitude spikes at their specific frequency, even if the time-amplitude data
appears noisy.
The method uses the same theory behind interference and superposition of waves to
calculate the relative magnitude of a range of frequencies, including a calculation of the
relative phase of each frequency. The process is similar to using a number of filters to
analyse a set range of frequencies. A biological analogue would be the cochlea, which
uses the structure of hair cell bundles to 'tune' to particular frequencies; i.e. a filter focuses
upon a subset of frequencies. The response of an FT will depend upon the set up of these
filters, known as frequency bins. These are linearly spread in frequency from 0 to the
sample rate. Thus, a larger number of bins provide greater frequency resolution (i.e. each
bin analyses a narrower range of frequencies).
Typically, we use a computationally efficient implementation of the process called the Fast
Fourier Transform (FFT). This is a discrete-time (i.e. using sampled data, not continuous)
calculation that transforms a set length of time-amplitude data into the same length of
frequency-amplitude bins. The length of analysed data is known as the FFT size (e.g. a
2048-point FFT). Larger FFT sizes will therefore improve frequency resolution but require
longer sample periods, reducing time resolution.
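This behaviour can be illustrated with a short, hypothetical sketch using NumPy's FFT routines (the sample rate and tone are arbitrary choices, not measurements from this work): a tone buried in broadband noise still appears as a clear spectral spike, located to within one bin width, fs/N.

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 1000                 # sample rate (Hz), an arbitrary illustrative value
n = 2048                  # FFT size: frequency resolution = fs / n ~ 0.49 Hz
t = np.arange(n) / fs

# A 20 Hz tone buried in broadband noise
signal = np.sin(2 * np.pi * 20 * t) + rng.normal(0.0, 1.0, n)

spectrum = np.abs(np.fft.rfft(signal))    # bins from 0 to fs/2 (the Nyquist)
freqs = np.fft.rfftfreq(n, d=1 / fs)

peak_hz = freqs[np.argmax(spectrum)]
print(f"spectral peak at {peak_hz:.2f} Hz")   # within one bin width of 20 Hz
```

Even though the time-domain trace looks like noise, the tonal component dominates its frequency bin, which is the basis for separating harmonic features from broadband content in the sections that follow.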
Due to the Nyquist frequency, the frequency-amplitude data is symmetrical around the
central point (i.e. the sample at fs/2). For example, data that has been collected with a low-
pass filter at the Nyquist frequency will show frequency-amplitude bins without aliasing in
the first half of the data (i.e. samples 1 to 1025 in a 2048-point FFT)51. This is particularly
useful for analysis of data with unknown frequency content. This matters here because,
within this project, we have measured data from a known input without a low-pass filter.
This has subsequently allowed analysis above the Nyquist frequency of under-sampled
data using aliasing artefacts (Chapter 4).
1.2.2.4 Signals: Waveforms
FFTs provide a means to separate broadband noise from signals with tonal features.
Analysis over time, either using time- or frequency-domain data, can then help clarify
temporal changes in a signal. For example, a plume of bubbles will produce a broadband
frequency signal at any given point, but amplitude information would change over time
depending upon the size of the plume. Similarly, the tone created by an engine hum will
increase in frequency over time as the engine is accelerated.
A specific amplitude response of a signal is known as the waveform, and together with
frequency (and its phase) provides the information content of acoustic signals. Digital and
analog signals have similar basic waveforms, including continuous, oscillatory, pulsed and
complex shapes (Figure 20). We have already introduced oscillatory signals, in the form of
basic sinusoidal acoustic waves, both travelling through a medium (longitudinal/transverse
waves) and within a structure (standing waves or modes). Continuous waveforms are
equivalent to an AC signal in electronics, whilst pulses produce a discrete energy over a
51 A band-pass between fs/2 and fs would also allow measurement of frequencies within
this band without artefacts (but losing data below fs/2).
short period. Harmonics change the shape of the waveform, whilst more complex signals
(such as vocal or musical utterances) combine a number of these features into a single
waveform. A change in static pressure (i.e. f in Figure 20) is equivalent to DC in electronics.
Figure 20: Figure showing time-domain signal (left) against FFT transform (right) for various waveforms. a) Sinewave at
20Hz, b) sinewave at 100Hz, c) sinewave at the Nyquist, d) pulse at 20Hz, e) harmonics above 20Hz, f) DC signal (i.e.
constant pressure) at 0.5 units amplitude, g) random noise, h) random noise with a 20Hz signal.
1.2.3 Acoustic Signalling in Space
This section will overview the main processes involved in spatial changes to acoustic
signals. First, an example of spatial interference of signals is described in terms of near-
and far-field effects of a single point source. The importance of the elastic properties of
materials in the transmission of waves is then introduced. Finally, methods for using these
properties to describe the redirection of signals at the boundaries of materials and
absorption within materials are described, including reflection, refraction and the influence
of acoustic impedance.
1.2.3.1 Near- and far-field effects
Figure 21: The relation between shear stress from an ideal fluid to elastic solid. Reproduced from (Finnemore and
Franzini 2002)
Newtonian fluids are defined as those in which shear stress changes linearly with the rate
of shear (i.e. viscosity is constant with respect to flow velocity), although their viscosity
often does not respond linearly to temperature and pressure (Figure 21). It should be noted
that biology involves complex fluids, so dynamic flows are likely to involve fluids with
non-Newtonian or plastic characteristics55, as well as viscoelastic materials, which exhibit
both elastic and viscous properties depending upon their spatial and temporal excitation
and/or changes in test conditions (i.e. temperature and pressure).
In addition, since viscosity is a bulk description, fluid dynamics (and thus acoustic
transmission) is likely to be very different at the molecular level.
1.2.3.3 Reflection & refraction
Material structure is integral to the reflection, scattering and absorption of waves. When
waves travelling through a material encounter a material surface with different properties,
for example from air to water, a certain amplitude of the waves will be reflected. For
example, auditory sound will reflect back and forth, or echo, in an empty room with flat, solid
surfaces. This is described as specular reflection, or mirror-like.
Reflection also depends upon the size and structure of the materials. If the auditory wave
encounters uneven objects with bumps about the size of the wavelength, such as furniture
55 (Tritton 1992)
and curtains, it will be reflected in a number of directions, or scattered. This is more
precisely described as diffusive reflection, since scattering is a general term that can also
denote changes within a medium.
Figure 22: Specular and diffusive reflection
The amount of diffusive reflection can be quantified by describing the roughness of the
surface. This is dependent upon the difference in height of the surface (i.e. vertical
roughness) and the horizontal length over which this height roughness is correlated (the
correlation length). In applied practice, the approximation of roughness is very dependent
upon the distance measured and the spatial sampling used; this shows similar restrictions
on resolution and scaling to the effect of sample rate and FFT size on temporal
measurements (1.2.2.3). It is therefore important to know what frequency range of incident
wave is of interest, in order to assess what corresponding length-scale of roughness will be
important.
Figure 23: High and low frequency reflection
In general, roughness with a small correlation length with respect to acoustic wavelength
will only affect higher frequencies, whilst having minimal effect on lower frequencies (Figure
23). In other words there will be clear diffusive reflection if:

L / λ ≫ 1

where L is the correlation length of the surface roughness and λ is the wavelength56. This is
particularly important for understanding high frequency ultrasound in living matter. For
example, an ultrasonic signal with a relatively low frequency and long wavelength (e.g.
mm), such as medical ultrasound, will only be significantly affected by inhomogeneities
greater than a few millimetres in size. At this wavelength surfaces can be imaged at
millimetre resolution by picking up simple specular reflections using a wide array of
microphones.
Higher frequency ultrasound (e.g. at ΞΌm wavelengths) will be affected by structures at nm
and ΞΌm size. As frequency increases, acoustic waves will begin to be influenced by ever-
smaller surface differences.
Reflection is particularly important in this project to estimate the likelihood of reflected
signals and standing waves occurring within the cuvette of the acoustic spectrometer
(Section 4.5.1). Both effects could change the amplitude levels found in the cuvette,
affecting the results. Although this would be less important as frequencies increased (as
attenuation is increased, see 1.2.3.5), it would be a factor for initial experiments at lower
frequencies.
When a sound wave is not reflected, but transmitted into a material with a different speed of
sound, there will also be changes in the wave's direction. The angle of incidence is the
angle of the incident wave (i.e. the wave coming towards the boundary) relative to the
normal (i.e. the line at 90° to the surface), whilst the angle of refraction gives the shift in
direction of the refracted wave.
For a planar surface, the angle is solely dependent upon change in the speed of sound;
56 (Ainslie 2010)
however for rough surfaces or materials designed to produce a negative refraction (Figure
24), the angle of transmission is dependent upon the surface structure.
Figure 24: Refraction across a boundary between two materials with a different speed of sound (left). Negative refraction
(right) can be achieved from changing the surface structure of the boundary.
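For a planar boundary, the dependence on the change in speed of sound is given by Snell's law for acoustics, sin θt / sin θi = c2/c1. A minimal sketch (the function and the chosen angles are illustrative, with speeds of sound taken from Table 1):

```python
import math

def refraction_angle(theta_i_deg, c1, c2):
    """Snell's law for acoustics: sin(theta_t) / sin(theta_i) = c2 / c1.

    Returns the transmitted angle in degrees, or None beyond the critical
    angle (i.e. total internal reflection)."""
    s = math.sin(math.radians(theta_i_deg)) * c2 / c1
    if abs(s) > 1:
        return None
    return math.degrees(math.asin(s))

# Water (1483 m/s) into Pyrex glass (5650 m/s): the wave bends away from
# the normal because the second medium is acoustically faster.
print(refraction_angle(10.0, 1483.0, 5650.0))

# Beyond the critical angle, asin(1483/5650) ~ 15 degrees, nothing is transmitted.
print(refraction_angle(20.0, 1483.0, 5650.0))   # None
```

Note that, because sound generally travels faster in solids than in liquids, the critical angle for a liquid-solid boundary can be small, so total reflection occurs over a wide range of incidence angles.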
The amount a wave is reflected or transmitted at a surface can be estimated using surface
roughness and a property of the materials called acoustic impedance.
1.2.3.4 Acoustic Impedance
The extent of reflection or refraction is dependent upon the difference in acoustic
transmission between the two materials at the boundary. An idealised perfect reflector will
reflect all of an acoustic signal, whereas a layer with perfectly matching properties would
transmit the entire signal. A heterogeneous material will involve a number of layers that vary
between these two extremes.
Transmission and reflection properties are calculated using a property called characteristic
acoustic impedance, notated as Z0. This provides a material specific value, calculated from
the density and speed of sound of the material. For example, for water:

Zwater = ρwater × cwater = 1.483 MRayl (i.e. 1.483 × 10^6 kg m^-2 s^-1)    Equation 8
where ρ is the density of the material and c is the speed of sound. Materials with large
differences in acoustic impedance will have greater reflection (with a 180° phase change,
as discussed earlier), and thus the transfer of acoustic waves between the two materials
will be impeded. Since each type of wave described in 1.2.1.2 has a different speed of
sound, the impedance value will also change (e.g. ZL and ZS for longitudinal and
shear/transverse).
The reflection (R) and transmission (T) of a wave at normal incidence (i.e. at 90° to the
surface) to the boundary are given by:

R = (Z2 - Z1) / (Z2 + Z1)    Equation 9

T = 2Z2 / (Z2 + Z1)    Equation 10

where Z1 and Z2 are the impedance of the first and second material respectively.
Reflection and transmission at an angle of incidence other than 90° depends upon the
phase state of the boundary materials (i.e. gas, liquid or solid) and type of acoustic wave.
Methods for calculation of coefficients in different cases are overviewed in Chapter 7 of
(Cheeke 2002).
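Equations 9 and 10 can be evaluated directly; a small sketch (illustrative code, using impedances in MRayl from Table 1):

```python
def reflection_transmission(z1, z2):
    """Pressure reflection (R) and transmission (T) coefficients at normal
    incidence, per Equations 9 and 10."""
    r = (z2 - z1) / (z2 + z1)
    t = 2 * z2 / (z2 + z1)
    return r, t

# Impedances in MRayl, from Table 1
r_air, t_air = reflection_transmission(1.483, 0.000429)   # water -> air
r_gls, t_gls = reflection_transmission(1.483, 13.1)       # water -> Pyrex glass

print(f"water->air:   R = {r_air:.4f}")              # ~ -1: near-total reflection
print(f"water->glass: R = {r_gls:.2f}, T = {t_gls:.2f}")
```

The water-to-air result shows nearly complete reflection with the 180° phase flip (negative R) discussed above, which is why so little airborne sound couples into water and vice versa. The pressure coefficients also satisfy the boundary condition T = 1 + R.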
The typical speed of sound (for longitudinal waves only for gases and liquids), density and
characteristic acoustic impedance (in MRayl, which are equal to MPa s m^-1) for materials
of importance to this work are summarised in Table 1.
Material | VL / VS (10^3 m/s) | ρ (10^3 kg/m^3) | ZL / ZS (MRayl)
Gases
Air (20°C) | 0.344 | 0.001293 | 0.000429
Hydrogen (0°C) | 1.284 | 0.0000899 | 0.000115
Liquids
Water (20°C) | 1.48 | 1 | 1.483
Water (25°C) | 1.496 | 0.998 | 1.494
Water (30°C) | 1.509 | 1 | 1.509
Sea Water (25°C) | 1.531 | 1.025 | 1.569
Lipids (gel/liquid) | 0.01-0.3* | variable* | variable*
Solids/glasses
Al | 6.42 / 3.04 | 2.70 | 17.33 / 8.21
AlN | 11.3 / 6.09 | 3.26 | 35.8 / -
Epoxy | 2.70 / 1.15 | 1.21 | 3.25 / 1.39
Glass (Pyrex) | 5.65 / 3.28 | 2.25 | 13.1 / 7.62
Gold | 3.24 / 1.20 | 19.7 | 63.8 / 23.6
Lithium Niobate (36° Y-cut) | 7.33 / 3.97 | 4.7 | 34.0 / 18.7
Salt | 4.78 | 2.17 | 10.37
Silicon Dioxide | 5.97 | 2.20 | 13.1
Silicon | 8.43 | 2.34 | 19.7
Table 1: Speed of sound (VL), Density (Ο) and Acoustic impedance (ZL) for longitudinal waves for various materials, taken
from (Cheeke 2002), * from (Griesbauer, Wixforth, and Schneider 2009), Italics = approximate
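Since ZL is simply the product of the density and speed-of-sound columns, the table can be sanity-checked directly (a small illustrative sketch; the values are those of Table 1):

```python
# Z_L = rho * V_L. With rho in 10^3 kg/m^3 and V_L in 10^3 m/s, the product
# comes out directly in 10^6 kg m^-2 s^-1, i.e. MRayl.
rows = {
    # material: (V_L in 10^3 m/s, rho in 10^3 kg/m^3, tabulated Z_L in MRayl)
    "Water (25C)": (1.496, 0.998, 1.494),
    "Al":          (6.42, 2.70, 17.33),
    "Gold":        (3.24, 19.7, 63.8),
}
for name, (v_l, rho, z_table) in rows.items():
    z = rho * v_l
    print(f"{name}: rho * V_L = {z:.2f} MRayl (tabulated {z_table})")
```

The small discrepancies (e.g. for water) simply reflect the rounding of the tabulated speed of sound.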
Figure 25: Examples of the complex relations of organic materials to physical characteristics. Left, Glycerine and Water,
compressibility vs. T vs. Pressure. Right, the lipid DPPC Density vs. Speed of sound squared at different angular
frequencies. Reproduced from (Hill et al. 2004) & (Mosgaard, Jackson, and Heimburg 2012).
As noted before, values for compressibility and speed of sound will vary with temperature
and pressure (Figure 25, left). The values in Table 1 are also for specific molecular
structures. To put this in context, bio-molecules are structurally heterogeneous and are often
further modified by cellular processes. For example, there are many types of lipid as well as
modifications of each of these lipids. This in turn will influence their physical characteristics
(Figure 25, right). Thus the use of these parameters in molecular biology may be
problematic and will very much depend upon the context. On the other hand, averaged
values can be useful for approximated systems, such as tissue or simplified models57,
though caution is always advisable when simplifying biological systems.
1.2.3.5 Attenuation and absorption
The attenuation of sound is the reduction of sound amplitude over a certain distance and is
caused by a combination of absorption and scattering. Absorption is conversion to another
type of energy, e.g. heat, whilst scattering is a change in the direction of the sound due to
57 (Leighton 2007)
reflection off inhomogeneities within the medium. Within a fluid, attenuation depends
primarily on viscosity58, mainly through absorption due to the compression and relaxation of
the chemical structures. Absorption is particularly important at higher frequencies,
particularly from MHz upwards59,60, where chemical structure plays a greater role.
Figure 26: Absorption profiles of pure water and sea water, including plots of the contribution to absorption from
Magnesium sulphate and Boric acid. Reproduced from (Robinson 2015).
The absorption of sound in pure water and sea water can be seen in Figure 26. The
inclusion of other chemical structures (e.g. salts) can increase or decrease absorption in
specific frequency ranges. For example, boric acid and magnesium sulphate contribute to
low frequency absorption in seawater61.
As mentioned in section 1.2.1.1 and as we shall see in the next section, the characteristic
frequencies of chemical structures are extremely high. It is important to note that
absorption typically results in the production of heat. As we shall see in later sections, some
types of heat (i.e. dissipated energy in a molecular system) are synonymous with aspects of
58 (Finnemore and Franzini 2002)
59 (Matheson 1971)
60 (Robinson 2015)
61 (Ainslie 1998)
high frequency ultrasound.
1.2.4 High frequency ultrasound: Phonons
High frequency ultrasound covers a vast frequency range studied by a plethora of
specialisations that span from classical physics to quantum chemistry. The next two
sections will overview why high frequency acoustics is important in order to understand the
interaction of sound with molecular systems, such as biological cells. First, the concept of
phonons will be introduced, including a description of experimental work that may allow for
their existence in biological environments. This overlaps with the interactions at the level of
molecules, which will be described in terms of physical chemical mechanisms in section
1.2.5. Finally, this will be linked to methods from condensed matter physics, which may be
useful for clarification of the impact of state and physical self-organisation, whilst a
molecular understanding of flow may provide a means to integrate fluid dynamical models
at a cellular level. Specific pathways by which these phenomena may interact with
biological systems will then be explored in the following section, 1.3.
1.2.4.1 Properties of phonons
Figure 27: A simple spectrum showing high frequency ultrasound waves, moving from acoustic to thermal
At frequencies higher than GHz, the ability to produce and measure ultrasonic signals
accurately using conventional methods becomes more difficult, with signal attenuation
increasing through scattering and absorption (see 1.2.3.5). Understanding interactions in
terms of phonons, a type of acoustic wave, can help improve analysis for simple systems.
For example, understanding of viscosity and heat loss effects on attenuation of acoustic
waves in solid systems is fully described in terms of phonon interactions62.
As shown in Figure 27, as acoustic waves approach frequencies in the THz they are
typically viewed as a particle/wave of heat. These are described in a similar manner to
photons, which are individual particles/waves of electromagnetic radiation (e.g. light), and
are therefore called phonons. The idea of sound (and light) being both a wave and particle
is a key concept in quantum physics, based upon the superposition of waves into a single
point (a particle) or, vice versa, the breakdown of a particle into multiple waves (see Figure
14). For example, a phonon travelling (like a particle) across a crystal could be described as
a superposition of multiple normal modes. Phonons are known as collective excitations or
quasiparticles, since they are emergent properties of a system rather than elementary
particles (for example, they do not have mass). The direct relation of phonons to thermal
motion is a particularly important concept in molecular acoustics, as we shall see in
Section 1.2.5.
Figure 28: Available acoustic phonon modes in a one-dimensional system L in length. Reproduced from Georgia State
University, 2012.
62 (Cheeke 2002)
Figure 29: Transverse acoustic and optical phonon modes. Optical modes are created by differing collective motions of
atoms with different masses (e.g. supposing the red and blue points in the lower plot are different elements). Reproduced
from Kent State University, 2000.
Phononics deals with the fundamental, quantum interactions of acoustic waves. These are
typically studied using cold, ordered, solid state systems, since they have much simpler
dynamics than disordered systems. The response of such systems to excitation, for
instance by increasing the temperature, is understood to be quantised, with only discrete
quantities of energy allowable above the ground state. This holds true for electromagnetic
energy as well as emergent energy states such as phonons.
As the energy in a systemβs environment reduces, for instance by reducing temperature, the
dynamics of the system will move towards lower energy states until it reaches a ground
state of zero-point energy. This is a state where there is no energy in the system except
fundamental background fluctuations; in other words the system is very quiet. Thus at cold
temperatures you can record the primary vibrations of a system above this ground state. As
the temperature rises, more energy is available to the system and higher energy phonons
are excited.
Phonons are 'normal modes' of a system, specifically waves which occupy the whole
volume of the system63. Harmonic 'acoustic-like' energy states actually arise naturally at
the nanoscale; using the violin analogy, the whole string (i.e. structure) will 'ring' with a
standing wave at each phononic mode. Energy at this level is separated into discrete
quantum states, analogous to the discrete acoustic harmonics in classical acoustics
(1.2.1.4). Harmonic frequencies of 1f, 2f, 3f and so on correspond to wavefunctions
(represented by the symbol ψ) with allowable energy levels of (1/2)ℏω, (3/2)ℏω, (5/2)ℏω,
(7/2)ℏω and so on. In other words, the energy level, En, where n = 0, 1, 2 etc., is equal to:
63 (Shrivastava 1990)

En = (n + 1/2)ℏω    Equation 11
where ω is the classical angular frequency of an oscillating spring and ℏ = h/2π, i.e.
Planck's constant divided by 2π. These are integer quantum levels (n = 1, 2, 3 etc., shown
in Figure 28) above a ground state of (1/2)ℏω zero-point energy. Therefore a discrete
multiple of energy (i.e. ℏω) is required to excite a system to the next energy mode. In other
words, the difference in energy required is:
the difference in energy required is:
ππ = βπ =
ππ
ππ
Equation 12
If the system involves more than one phonon mode at the same time, it can have a complex
wavefunction that is a superposition of multiple modes (Section 1.2.1.3).
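A short numerical sketch of Equation 11 (the 1 THz mode is an arbitrary illustrative choice, not a value from this work):

```python
import math

HBAR = 1.054571817e-34    # reduced Planck constant, J s

def mode_energy(n, omega):
    """Energy of the n-th level of a single phonon mode, E_n = (n + 1/2) * hbar * omega."""
    return (n + 0.5) * HBAR * omega

omega = 2 * math.pi * 1e12            # an illustrative 1 THz mode
zero_point = mode_energy(0, omega)    # ground-state (zero-point) energy, hbar*omega/2
gap = mode_energy(1, omega) - mode_energy(0, omega)

print(f"zero-point energy: {zero_point:.3e} J")
print(f"level spacing:     {gap:.3e} J")   # equal to hbar * omega for every step
```

The constant spacing ℏω between adjacent levels is exactly the discrete multiple of energy described above: the mode cannot absorb an arbitrary fraction of ℏω.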
This type of quantisation is the basis for quantum harmonic oscillators; as we shall see in
the next section (1.2.5), different chemical motions will exhibit a linear combination of such
allowable energy modes. Note that phonons and acoustic waves are dependent upon
transmission by matter, i.e. are bulk approximations of the fundamental chemical dynamics
described in the next section.
The frequency of a harmonic oscillator is given by:

ν = (1/2π)√(k/m)    Equation 13

where k is the spring constant and m is the oscillating mass. Placing the rearranged
frequency equation ω = 2πν into Equation 12 provides Bohr's frequency condition:

ΔE = hν = hcν̃    Equation 14

where ν is the frequency of the wavefunction and ν̃ is the wavenumber (i.e. the inverse
wavelength, 1/λ). This is particularly useful for understanding the frequency of spectra
(often given in wavenumber units) created by atomic and molecular dynamics (section
1.2.5).
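To make Equation 14 concrete, a brief sketch converting a spectroscopic wavenumber into energy (the 1000 cm^-1 line is an arbitrary illustrative value, not a measurement from this work):

```python
H = 6.62607015e-34       # Planck constant, J s
C_CM = 2.99792458e10     # speed of light in cm/s (wavenumbers are usually quoted in cm^-1)
EV = 1.602176634e-19     # J per electron-volt

def wavenumber_to_energy(nu_bar_cm):
    """dE = h * c * nu_bar: Bohr's frequency condition, Equation 14."""
    return H * C_CM * nu_bar_cm

e = wavenumber_to_energy(1000.0)     # an illustrative mid-infrared line
print(f"1000 cm^-1 -> {e:.3e} J = {e / EV:.3f} eV")   # about 0.124 eV
```

The speed of light is given in cm/s so that the conventional cm^-1 wavenumber units can be used directly, which is why spectroscopic tables can be read straight off as energies.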
1.2.4.2 Phonons in complex systems
In three-dimensions, there are corresponding phonon modes in each dimension. For
example, the lowest 3D vibration mode would be equivalent to the lowest 1D mode (bottom
of Figure 28) in all 3 directions (i.e. x,y,z); for a (general, non-phonon) spherical particle this
can be visualised as a ball-shaped mode.
Since phonons emerge from the structural features of a transmitting material, they can be
transverse or longitudinal (Figure 12), just like classical acoustic waves. In addition,
differences in atomic mass can create different frequency collective motions within the
same system, creating optical (as opposed to acoustic) phonons (Figure 29). Within a
system there can therefore be many types of phonon that combine these features, e.g.
longitudinal-acoustic (LA), transverse-acoustic (TA), longitudinal-optical (LO), transverse-
optical (TO)64. Typically wavefunctions for each type of phonon are distinct and can occur
simultaneously.
In general, phonon theory (also described as THz acoustics, microwave acoustics and
quantum acoustics) is used to describe acoustic waves in (typically solid) structures, at
frequencies up to the low THz range. Theoretical work continues to challenge our
conception of vibration at these frequencies with work on quasi-particle and particle
interactions, between phonons, solitons, excitons, plasmons, photons etc., contributing to
the dialogue.
As temperatures increase above 0K (absolute zero)65, multiple phonon modes interact to
produce more complex waveforms, much like timbre differences in musical instruments.
Superpositions between these modes emerge from the system (section 1.2.1.3), creating
the 'random' thermal motion seen at the macroscale, i.e. Brownian motion and specific
64 (Kress and de Wette 2013)
65 Theoretically this does not exist as all systems will have some zero-point energy which could
be infinitely reduced (from lectures at the DTC-CM, St. Andrews).