Cognitive Psychology
Lesson 7 - Spring 2019
The hearing brain
Professor Valentina Bazzarin
USAC Reggio Emilia
The hearing brain
The human auditory system can detect a huge range of changes in air pressure, from around 0.00002 to over 100 Pascals. However, the role of the hearing brain is not merely to detect such changes. Its role is to construct an internal model of the world that can be interpreted and acted upon.
Sound originates from the motion or vibration of an object.
AIVA, an artificial intelligence that has been trained in the art of music composition:
https://www.ted.com/talks/pierre_barreau_how_ai_could_compose_a_personalized_soundtrack_to_your_life
Sensory information · Sensory experience
This model is constructed not only from current sensory information but also from previous sensory experience. The hearing brain extracts “constancy” out of an infinitely varying array of sensory input, and it actively interprets that input.
Can you hear colors?
https://www.youtube.com/watch?v=xj7vukZT9sI
The nature of sound
http://www.szynalski.com/tone-generator/
Pure tones, pitch, loudness
Pure tones: sounds with a sinusoidal waveform (when pressure change is plotted against time).
Pitch: the perceived property of sounds that enables them to be ordered from low to high.
Loudness: the perceived intensity of a sound.
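The three definitions above can be made concrete in a few lines of code. This is an illustrative sketch, not part of the lecture material: a pure tone is simply a sinusoid whose frequency corresponds to perceived pitch and whose amplitude corresponds to perceived loudness (the function name and parameters here are my own).

```python
import numpy as np

def pure_tone(freq_hz, duration_s=1.0, amplitude=1.0, sample_rate=44100):
    # A pure tone: pressure change is a sinusoid when plotted against time.
    # freq_hz sets the perceived pitch; amplitude sets the perceived loudness.
    t = np.arange(int(sample_rate * duration_s)) / sample_rate
    return amplitude * np.sin(2 * np.pi * freq_hz * t)

tone = pure_tone(440.0)  # one second of a 440 Hz sinusoid (concert A)
```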
Frequency
Fundamental frequency: the lowest-frequency component of a complex sound, which determines the perceived pitch.
Missing fundamental phenomenon: if the fundamental frequency of a complex sound is removed, the pitch is not perceived to change (the brain reinstates it).
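The missing fundamental can be simulated numerically. In this sketch (my own illustration, with an arbitrarily chosen 200 Hz fundamental), removing the fundamental leaves harmonics that are still spaced 200 Hz apart, and it is exactly this regularity that lets the brain reinstate the missing pitch.

```python
import numpy as np

sr = 8000                   # sample rate (Hz)
t = np.arange(sr) / sr      # one second of time points
f0 = 200.0                  # fundamental frequency (illustrative choice)

# Complex tone with the fundamental removed: only harmonics 2-4 remain.
missing = sum(np.sin(2 * np.pi * f0 * k * t) for k in range(2, 5))

# Find the prominent spectral components.
spectrum = np.abs(np.fft.rfft(missing))
freqs = np.fft.rfftfreq(len(missing), 1 / sr)
peaks = freqs[spectrum > spectrum.max() / 2]

# peaks contains 400, 600 and 800 Hz but not 200 Hz; the surviving
# components are still spaced f0 apart, so the perceived pitch stays at f0.
```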
Timbre
Timbre: the perceptual quality of a sound that enables us to distinguish between different musical instruments.
Does Music Change a Child's Brain?
https://www.youtube.com/watch?v=M2sqXbwlaWw
http://www.nbcnews.com/id/28423422/ns/health-health_care/t/cant-hear-holiday-parties-blame-your-brain/
From ear to brain
Cochlea: the part of the inner ear that converts liquid-borne sound into neural impulses.
Basilar membrane: a membrane within the cochlea containing tiny hair cells linked to neural receptors.
Primary auditory cortex: the main cortical area to receive auditory-based thalamic input.
Belt and parabelt regions: parts of the secondary auditory cortex.
Tonotopic organization: the orderly mapping between sound frequency and position on the cortex.
Cognitive musicology
Cognitive musicology is a branch of cognitive science concerned with computationally modeling musical knowledge, with the goal of understanding both music and cognition (source: Wikipedia).
Sound design
https://www.ted.com/talks/julian_treasure_the_4_ways_sound_affects_us#t-327518
https://drjonesmusic.me/2018/01/29/music-and-the-brain-online-discussion-1-spring-2018/
Basic processing of auditory information
Sparse scanning: in fMRI, a short break in scanning to enable sounds to be presented in relative silence.
Head-related transfer function (HRTF): an internal model of how sounds get distorted by the unique shape of one’s own ears and head.
https://www.youtube.com/watch?v=IJ7dCkWdPC0
Planum temporale
Planum temporale: a part of the auditory cortex (posterior to the primary auditory cortex) that integrates auditory information with non-auditory information, for example to enable sounds to be separated in space.
Music perception
http://jakemandell.com/tonedeaf/
Amusia
Amusia: an auditory agnosia in which music perception is affected more than the perception of other sounds.
Congenital amusia (tone deafness): a developmental difficulty in perceiving pitch relationships.
Function of music
Function of music: Pinker (1997) argued that language was the precursor to music and that the latter has no adaptive function.
Perception and creativity
https://www.ted.com/talks/blaise_aguera_y_arcas_how_computers_are_learning_to_be_creative
Voice perception
Voices, like faces, convey a large amount of socially relevant information about the people around us. It is possible to infer someone’s sex, size, age and mood from their voice. Physical changes related to sex, size and age affect the vocal apparatus in systematic ways.
Speech perception
Pure word deafness: a type of auditory agnosia in which patients can identify environmental sounds and music but not speech.
Spectrogram: a plot of the frequency of a sound (on the Y-axis) over time (on the X-axis), with the intensity of the sound represented by how dark it is.
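The spectrogram definition above can be sketched in a few lines of NumPy. This is my own minimal illustration (a real analysis would use a dedicated routine such as scipy.signal.spectrogram): the signal is cut into short overlapping frames, and each frame’s magnitude spectrum becomes one column of the time–frequency image.

```python
import numpy as np

def spectrogram(signal, sample_rate, win=256, hop=128):
    # Rows = frequency (Y-axis), columns = time (X-axis);
    # cell magnitude plays the role of how dark the plot is.
    window = np.hanning(win)
    frames = [signal[i:i + win] * window
              for i in range(0, len(signal) - win + 1, hop)]
    sxx = np.abs(np.fft.rfft(np.array(frames), axis=1)).T
    freqs = np.fft.rfftfreq(win, 1 / sample_rate)
    return freqs, sxx

# Two concatenated tones: the dominant frequency row shifts over time.
sr = 8000
t = np.arange(sr // 2) / sr
signal = np.concatenate([np.sin(2 * np.pi * 440 * t),
                         np.sin(2 * np.pi * 880 * t)])
freqs, sxx = spectrogram(signal, sr)
```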
Key concepts (1/2)
● Hearing involves extracting features from the sensory signal that may be useful for segregating the input into different “objects”;
● Cells within the secondary auditory cortex may have different degrees of specialization for the content of the sound (what) vs. the location of the sound (where) → a dorsal (where) / ventral (what) pathway in the temporal lobes (left for speech);
● Music perception involves a number of different mechanisms, which have partially separate neural substrates as revealed by fMRI and lesion-based studies;
● There is evidence of a region in the (right) temporal lobe that is specialized for the recognition of voices.
Key concepts (2/2)
● Speech recognition involves extracting categorical information from sensory input that can vary infinitely. This may be achieved via acoustic processing and possibly via motor processing;
● Speech recognition (and speech repetition) may involve both a ventral “what” route and a dorsal “how” route for unfamiliar words and verbatim repetition (possibly corresponding to the use of the “articulatory loop”).
