The EEG Signature of Same Words Spoken by Different Speakers

S. Elahi, Professor J. Brennan
Department of Linguistics at the University of Michigan
  
Results

[Figure: ERP waveforms, panels A–D]
•  A: N1 – early stages of acoustic processing → possible acoustic priming for the same talker
•  B: P2 – later stages of acoustic processing → lack of acoustic priming for the different talker
•  C: N400 – semantic processing → lack of semantic priming for non-repeated words;
      N400 – semantic processing → reduced for both repetition conditions
•  D: possible P300 → processing of unexpected stimuli – with the different talker (see the analysis sketch below)
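
The component labels above correspond to conventional post-onset time windows (N1 near 100 ms, P2 near 200 ms, N400 roughly 300–500 ms). As a rough illustration of how such condition contrasts can be quantified, the sketch below averages single-trial amplitudes within each window and compares conditions; the sampling rate, window edges, array shapes, and condition names are illustrative assumptions, not the analysis pipeline actually used for this poster.

```python
import numpy as np

# Illustrative epoched data: trials x time samples for one electrode,
# sampled at 500 Hz with stimulus onset at t = 0 (assumed layout).
FS = 500.0
TIMES = np.arange(-0.1, 0.8, 1.0 / FS)          # -100 ms ... 800 ms

# Hypothetical component windows (seconds), roughly matching N1 / P2 / N400.
WINDOWS = {"N1": (0.08, 0.12), "P2": (0.15, 0.25), "N400": (0.30, 0.50)}

def window_mean(epochs, tmin, tmax):
    """Mean amplitude per trial inside [tmin, tmax]."""
    mask = (TIMES >= tmin) & (TIMES <= tmax)
    return epochs[:, mask].mean(axis=1)

def compare(cond_a, cond_b):
    """Difference in mean window amplitude between two sets of epochs."""
    return {name: window_mean(cond_a, *w).mean() - window_mean(cond_b, *w).mean()
            for name, w in WINDOWS.items()}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_trials, n_samples = 100, TIMES.size
    # Stand-in data: replace with real epochs for same-talker repeats,
    # different-talker repeats, and non-repeated words.
    same_talker = rng.normal(size=(n_trials, n_samples))
    diff_talker = rng.normal(size=(n_trials, n_samples))
    non_repeat = rng.normal(size=(n_trials, n_samples))

    print("same-talker repeat vs. non-repeat:", compare(same_talker, non_repeat))
    print("diff-talker repeat vs. non-repeat:", compare(diff_talker, non_repeat))
```

In practice the same windowed means would feed a repeated-measures test across subjects rather than a simple difference of means.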
  
Conclusion

Patterns we see at this point in time:

•  repetition effects for the same talker appear earlier, ~100 milliseconds
•  repetition effects for different talkers do not appear until later, >300 milliseconds
  
	
  
Stimuli Conditions and Hypothesized Priming Effect

R: repeat; NR: non-repeat
+: feature is repeated in the stimulus pair; –: feature is not repeated in the stimulus pair
  
Data Collection

Materials

Experiment

Participants
  
Our study obtained data from X participants: each subject was right-handed, learned English as his or her first language, and possessed no severe neurological impairments or disorders. Sessions lasted for an hour and a half and all took place in the Computational Neurolinguistics Lab in Lorch Hall.
  	
  
	
  
Methods

Introduction
  
	
  
•  Hearing a word affects an individual's understanding of the word that follows. For example, if one were to hear "dog", a repeated utterance of "dog" would be processed more quickly than another word. Priming effects occur at each linguistic level of speech perception: acoustically, phonetically, phonologically, lexically, and conceptually.

•  Furthermore, we understand a given word with apparent effortlessness even when it is spoken by different speakers. Though the semantics of a given word remain the same, the phonetic and acoustic aspects of the word vary with the speaker – yet we are still able to understand that "dog" means dog.

•  At issue is how we recognize words given acoustic and phonetic variation. The prototype theory states that we possess the same mental representation for a given object and compare any new stimulus of that object to a single "prototype" we have constructed in our head. Alternatively, the exemplar theory proposes that we categorize objects by comparing any new stimulus with the instances of that word we have stored in memory.

•  Recent research indicates that more rapid processing occurs when the same word is repeated by the same speaker as opposed to when it is repeated by different speakers. We use electroencephalography (EEG) to record changes in scalp electrical potentials caused by neural activity while the participants perform an experiment regarding this effect. We compare the time-course of Event-Related Potentials (ERPs) between conditions to test which stage of processing is being primed.
  	
  
Stimuli conditions and hypothesized priming effect:

                  Word                               Non-Word
            Same Speaker   Different Speaker   Same Speaker   Different Speaker
             R      NR       R       NR          R      NR      R       NR
Acoustics    +     (+)       –       –           +     (+)      –       –
Phonetics    +      –       (–)      –           +      –      (–)      –
Phonology    +      –        +       –           +      –       +       –
Semantics    +      –        +       –           –      –       –       –
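
The table can be read as a small feature matrix: for each condition it records which levels of representation are shared between the two members of a stimulus pair. A minimal sketch of that encoding is below, assuming made-up condition keys and stage names; parenthesized cells are collapsed to their sign (they presumably mark borderline cases).

```python
# Hypothesized priming table as a dictionary: True = feature repeated (+),
# False = not repeated (–). Keys and stage names are illustrative only.
PRIMING = {
    ("word", "same_speaker", "R"):     {"acoustics": True,  "phonetics": True,  "phonology": True,  "semantics": True},
    ("word", "same_speaker", "NR"):    {"acoustics": True,  "phonetics": False, "phonology": False, "semantics": False},
    ("word", "diff_speaker", "R"):     {"acoustics": False, "phonetics": False, "phonology": True,  "semantics": True},
    ("word", "diff_speaker", "NR"):    {"acoustics": False, "phonetics": False, "phonology": False, "semantics": False},
    ("nonword", "same_speaker", "R"):  {"acoustics": True,  "phonetics": True,  "phonology": True,  "semantics": False},
    ("nonword", "same_speaker", "NR"): {"acoustics": True,  "phonetics": False, "phonology": False, "semantics": False},
    ("nonword", "diff_speaker", "R"):  {"acoustics": False, "phonetics": False, "phonology": True,  "semantics": False},
    ("nonword", "diff_speaker", "NR"): {"acoustics": False, "phonetics": False, "phonology": False, "semantics": False},
}

def primed_stages(lexicality, speaker, repetition):
    """List the processing stages the table predicts could show priming."""
    features = PRIMING[(lexicality, speaker, repetition)]
    return [stage for stage, repeated in features.items() if repeated]

print(primed_stages("word", "diff_speaker", "R"))   # ['phonology', 'semantics']
```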
  
We used electroencephalography (EEG) to record electrical activity in the brain. We situated each subject in a nylon cap containing 61 actively amplified electrodes evenly distributed across the scalp. These electrodes measured the electrical potential, which reflected the summed current generated by cortical neurons in the brain.
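
For readers unfamiliar with how these scalp potentials become the ERP waveforms discussed elsewhere on the poster: the continuous recording is cut into short epochs around each stimulus onset, baseline-corrected against the pre-stimulus interval, and averaged so that activity time-locked to the stimulus remains while unrelated activity averages out. A minimal numpy sketch is below; the sampling rate, epoch limits, and variable names are assumptions for illustration, not the lab's actual preprocessing.

```python
import numpy as np

def epoch_and_average(raw, onsets, fs=500.0, tmin=-0.1, tmax=0.8):
    """Cut (channels x samples) continuous EEG into epochs around each onset
    sample, subtract the pre-stimulus baseline, and average into an ERP."""
    pre, post = int(-tmin * fs), int(tmax * fs)
    epochs = []
    for onset in onsets:
        if onset - pre < 0 or onset + post > raw.shape[1]:
            continue                               # skip epochs that run off the record
        epoch = raw[:, onset - pre:onset + post]   # channels x epoch samples
        baseline = epoch[:, :pre].mean(axis=1, keepdims=True)
        epochs.append(epoch - baseline)            # baseline-correct each channel
    return np.mean(epochs, axis=0)                 # channels x samples ERP

# Stand-in data: 61 channels, 10 s of recording, a few fake onset samples.
rng = np.random.default_rng(1)
raw = rng.normal(size=(61, 5000))
erp = epoch_and_average(raw, onsets=[600, 1500, 2400, 3300, 4200])
print(erp.shape)   # (61, 450)
```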
  	
  
We injected electrolyte gel to minimize impedance while it was being monitored. Low impedance means that the electrical connection between the skin and the electrode is stable.
  	
  
	
Hearing threshold was identified for each subject using a 1000 Hz tone. The volume was set to be 45 dB above this threshold.
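
Presenting the stimuli 45 dB above each listener's threshold defines the level in sensation level (dB SL) rather than as a fixed output volume; the arithmetic is just threshold plus 45. A trivial sketch with hypothetical threshold values:

```python
# Presentation level in dB SL: each subject's measured 1000 Hz threshold
# plus a fixed 45 dB. The thresholds below are made-up example values.
SENSATION_LEVEL_DB = 45.0

def presentation_level(threshold_db):
    return threshold_db + SENSATION_LEVEL_DB

for subject, threshold in {"s01": 12.0, "s02": 18.5}.items():
    print(subject, presentation_level(threshold), "dB")
```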
  	
  
The task of the participants was to identify whether the stimulus was a word or a non-word as quickly and accurately as possible. This indication was made by pressing either the left or right button on a gamepad console. Subjects were presented with a total of 700 words, separated into sections containing 50 words each. Each word was separated from the next by an inter-stimulus interval that ranged from 400 to 600 milliseconds. The image to the left depicts the brain signals we monitor during the experiment.
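
The structure described above (700 items split into blocks of 50, a two-alternative word/non-word response, and a jittered 400–600 ms inter-stimulus interval) can be summarized as a simple trial schedule. The sketch below builds such a schedule; the block size and jitter bounds mirror the numbers above, while the function and field names are illustrative assumptions.

```python
import random

def build_schedule(items, block_size=50, isi_ms=(400, 600), seed=0):
    """Split items into blocks and attach a uniformly jittered ISI per trial."""
    rng = random.Random(seed)
    blocks = [items[i:i + block_size] for i in range(0, len(items), block_size)]
    schedule = []
    for block_idx, block in enumerate(blocks):
        for item in block:
            schedule.append({
                "block": block_idx,
                "item": item,                    # word or non-word token
                "isi_ms": rng.uniform(*isi_ms),  # 400-600 ms before the next trial
            })
    return schedule

# Stand-in item list; the real experiment used 700 words/non-words.
items = [f"item{i:03d}" for i in range(700)]
schedule = build_schedule(items)
print(len(schedule), "trials in", schedule[-1]["block"] + 1, "blocks")  # 700 trials in 14 blocks
```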
  	
  	
  
Abstract

Speech perception research shows that individuals with normal language processing rapidly understand the meaning of a spoken word despite the fact that the articulation and acoustics of these words differ – sometimes significantly – across speakers of the same dialect. While the semantics of the word spoken by two separate speakers would remain the same, the phonology, phonetics, and acoustics of the word differ between the two. In order to contribute to the mapping of this capacity for "speech normalization", we test the timing of repetition effects within and across speakers using electroencephalography. This allows us to identify the timing of cognitive processes that allow the brain to recognize that the same word uttered by two different speakers is, in fact, the same word. We used electroencephalography (EEG) to record changes in electrical potentials that are caused by neural activity while the human subjects listened to words or non-words, either repeated or not repeated, and, if repeated, spoken by the same speaker or by different speakers. The results from this experiment will indicate how quickly we recognize that two words (that are the same) are the same.
  
Experimental Design

No rep:      grassx   dogA   catA   bottlex
Diff talk:   grassx   dogA   dogB   bottlex
Same talk:   grassx   dogA   dogA   bottlex

(The trailing letter on each item indexes the talker.)
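
Each row above is a short stimulus stream in which a prime (dogA) is followed by a target that is a different word (catA), the same word from a different talker (dogB), or the same word from the same talker (dogA), surrounded by filler items. A small sketch of how such prime–target pairs might be coded is below; the token format and condition labels are assumptions for illustration.

```python
# Each token is (word, talker); the talker letter plays the role of the
# trailing letters in the sequences above (A, B = talkers of interest).
def classify_pair(prime, target):
    """Label a prime/target pair as in the three design rows above."""
    if prime[0] != target[0]:
        return "no_repetition"
    return "same_talker" if prime[1] == target[1] else "different_talker"

streams = {
    "No rep":    [("grass", "x"), ("dog", "A"), ("cat", "A"), ("bottle", "x")],
    "Diff talk": [("grass", "x"), ("dog", "A"), ("dog", "B"), ("bottle", "x")],
    "Same talk": [("grass", "x"), ("dog", "A"), ("dog", "A"), ("bottle", "x")],
}

for name, stream in streams.items():
    print(name, "->", classify_pair(stream[1], stream[2]))
```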
  
Hypothesis and Objectives

Our objective is to discover at what point in time the difference in the acoustics and phonetics of repeated words spoken by different speakers is factored out from the semantic attributes of the word within our minds. We do this by looking for which brain signals were sensitive to words being repeated (either by the same speaker or by different speakers) compared to when they were not repeated.

We predict that early components will be reduced for within-speaker priming due to repetition effects on acoustic and/or phonetic processing. In contrast, we predict that between-speaker priming will only show attenuation on later components, reflecting lexical and/or conceptual priming.
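
One crude way to check this prediction is to look at when the repetition difference wave (repeated minus non-repeated) for each speaker condition first departs from zero: early for within-speaker priming, later for between-speaker priming. The sketch below does this with a fixed amplitude threshold on simulated data; a real analysis would use proper statistics (for example, cluster-based permutation tests), and all names and numbers here are illustrative assumptions.

```python
import numpy as np

FS = 500.0
TIMES = np.arange(-0.1, 0.8, 1.0 / FS)   # seconds relative to word onset

def attenuation_onset(repeated_erp, nonrepeated_erp, threshold):
    """First post-onset time at which the repetition difference wave exceeds
    `threshold` in absolute value; a crude stand-in for a real latency test."""
    diff = repeated_erp - nonrepeated_erp
    post = (TIMES > 0) & (np.abs(diff) > threshold)
    idx = np.flatnonzero(post)
    return TIMES[idx[0]] if idx.size else None

# Stand-in ERPs (one electrode); replace with real condition averages.
rng = np.random.default_rng(2)
non_rep = rng.normal(scale=0.1, size=TIMES.size)
within = non_rep - 0.5 * (TIMES > 0.1)    # simulated attenuation starting ~100 ms
between = non_rep - 0.5 * (TIMES > 0.3)   # simulated attenuation starting ~300 ms

print("within-speaker onset (s):", attenuation_onset(within, non_rep, threshold=0.3))
print("between-speaker onset (s):", attenuation_onset(between, non_rep, threshold=0.3))
```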
  
