Crowdsourcing Stereotypes? The Case of Linguistic Bias in Descriptions of People
Jahna Otterbacher
Program in Social Information Systems
Mini experiment
•  Please find the Google Doc here:
•  https://docs.google.com/forms/d/1LKTfQO9kcnzqfqMd1nA1NGAH0lb1bog-MxbqfF-3kAw/viewform
•  Tiny URL:
•  https://tinyurl.com/pyjdj5s
Who am I? Tag this image.
Metadata generation via games (GWAP)
Metadata generation via GWAP
•  Promises
  •  Identify new content
  •  Make content searchable
  •  Create alt-text for visually impaired users
  •  Eventually train machines to do this
  •  Crowdsourcing enables this on a large scale
•  Challenges
  •  Metadata quality
    •  Accuracy, completeness, consistency
•  What happens when the crowd is asked to label images depicting people?
  
Back to basics: the ESP Game
  
Images of "doctors" (tag sets from the example images)
•  Hat, Surgeon, Doctor, Operate, Green, Face
•  Hat, Ugly, Talk
•  Chair, Eyes, Smile, Cap, Face, Astronaut, White
•  Nurse, Doctor
•  Doctor, Earrings, Photo, Lips, Paper, Talk, Black, Speaker, Desk, Face, Student
•  Photo, Guy, Gray, Door, Chinese, Ears, Grey, Nerd, Black, Asian, China, Doctor
  
Images of "teachers" (tag sets from the example images)
•  Teacher, Blackboard, Board, Books, Hair, Blue, Book
•  Teacher, Blackboard, Lecture, Red, Write, Stairs, Black, School, Hair, Math, White, Professor
•  Teacher, Paper, Office, Clock, Classroom, School, Work, Grade, White, Kids, Class
•  Teacher, Teeth, Smile, Classroom, Face, School, Hair, Curly, Lady
  
Stereotypes?
•  Even when we are being "polite", social stereotypes influence us, and our language reveals this
•  Linguistic bias
  •  "A systematic asymmetry in the way one uses language, as a function of the social group of the person(s) being described." [Beukeboom, 2013]
  •  Has a cognitive origin... but social consequences
  •  Conveys stereotypes in a very subtle way
•  Linguistic Expectancy Bias (LEB) [Maass et al., 1989]
  •  Tendency to describe other people and situations that are expectancy-consistent (i.e., stereotype-congruent) with more abstract, interpretive language...
  •  ...and those who are not consistent with expectations (i.e., stereotype-incongruent) with more tangible, concrete language.
  
Example: Linguistic Expectancy Bias
•  Doctor, Surgeon, Intelligent, Serious
•  Doctor, Nurse, Experiment, Smiley, Talking
•  Doctor, Nurse, Student, Studying, Listening
More expected: more abstract / interpretive language
Less expected: more concrete language
  
Why would it matter?
•  Doctor, Surgeon, Intelligent, Serious
•  Doctor, Nurse, Student, Studying, Listening
Abstract language is powerful!
•  Implies stability over time, across situations
•  Message recipients interpret messages differently [Wigboldus et al., 2000]
  •  Abstract: enduring qualities
  •  Concrete: transient characteristics
•  These biases contribute to the maintenance and transmission of stereotypes, because abstract information is resistant to disconfirmation
Linguistic biases in image metadata
•  LEB is pervasive in human communication, but has rarely been studied outside of a laboratory [Hunt, 2011]
•  How to study this? Two characteristics of the language used to label images
  •  Abstract
    •  How often are adjectives used?
    •  Describe how someone is, versus who she is or what she is doing (e.g., intelligent vs. studying)
  •  Subjective
    •  How often is evaluative language used?
    •  Injects our interpretation of the person (e.g., intelligent, beautiful, gentle, rockstar, loser, ugly, stupid)
  
Images of "doctors": adjectives (same tag sets as above, with the adjectives highlighted)
Images of "doctors": subjective words (same tag sets as above, with the subjective words highlighted)
  
Small ESP Dataset
•  100k images collected from the live game [von Ahn & Dabbish, 2004]
•  46,392 images depicting one or more people
  
Research questions and methods
•  Q1: Do we observe differences across gender with respect to the use of abstract and subjective language?
  •  Analysis 1: all images, regardless of context
  •  Method: automated linguistic analysis
  •  18,916 images of men
  •  14,628 images of women
•  Q2: Do we observe differences across gender with respect to the attributes of the subject(s) that are described?
  •  Analysis 2: images in particular professional contexts
  •  Method: manual content analysis to identify labels describing
    •  Physical appearance
    •  Disposition or character
    •  Occupation
  
Analysis 1: method
STEP 1: Find images of men and women
•  ESP Game Dataset: 100k images
•  LIWC categories: "humans, friends, family"
•  Use hyponyms of "man" / "male" and "woman" / "female" to label gender
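The slides list the resources for this step (LIWC categories to find images of people, then hyponyms of "man"/"woman" to assign gender) but not the tooling. Below is a minimal, illustrative sketch of the hyponym step using NLTK's WordNet interface; the synset names, helper function, and example tags are my assumptions rather than the authors' pipeline, and the LIWC filtering step is omitted (LIWC is a proprietary dictionary).

```python
# Sketch (assumed tooling, not the authors' pipeline): assign a gender label
# to an image by intersecting its tag set with WordNet hyponyms of "man"
# and "woman". Requires NLTK with the WordNet corpus installed.
from nltk.corpus import wordnet as wn

def hyponym_lemmas(synset_name):
    """All lemma names of a synset and its transitive hyponyms."""
    root = wn.synset(synset_name)
    lemmas = set()
    for syn in [root] + list(root.closure(lambda s: s.hyponyms())):
        lemmas.update(l.name().lower().replace("_", " ") for l in syn.lemmas())
    return lemmas

MALE_TERMS = hyponym_lemmas("man.n.01")
FEMALE_TERMS = hyponym_lemmas("woman.n.01")

def label_gender(tags):
    """Return 'man', 'woman', 'both', or None for one image's tag set."""
    tags = {t.lower() for t in tags}
    has_m, has_f = bool(tags & MALE_TERMS), bool(tags & FEMALE_TERMS)
    if has_m and has_f:
        return "both"
    return "man" if has_m else ("woman" if has_f else None)

print(label_gender(["photo", "guy", "gray", "door"]))          # likely 'man'
print(label_gender(["teacher", "lady", "classroom", "hair"]))  # likely 'woman'
```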
  
STEP 2: Find labels that are adjectives
•  Part-of-speech tagging (CLAWS, C5 tagset)
•  Manual error analysis: adjectives (1.45%)
•  Finding: women are more often described with adjectives
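The slides name the CLAWS tagger with the C5 tagset, which is not a standard Python library; purely as an illustration of this step, the sketch below substitutes NLTK's built-in tagger (Penn Treebank tags), which is my assumption rather than the study's setup.

```python
# Sketch: keep only the image labels tagged as adjectives.
# The study used CLAWS (C5 tagset) plus a manual error analysis; here
# NLTK's default tagger stands in purely for illustration.
import nltk

ADJ_TAGS = {"JJ", "JJR", "JJS"}  # Penn Treebank adjective tags

def adjective_labels(labels):
    """Return the subset of labels tagged as adjectives.
    Note: tagging isolated, context-free labels is noisy, which is why a
    manual error analysis of the adjective tags makes sense."""
    return [word for word, pos in nltk.pos_tag(list(labels)) if pos in ADJ_TAGS]

print(adjective_labels(["hat", "ugly", "talk", "scary", "doctor"]))  # likely ['ugly', 'scary']
```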
  
STEP 3: Find subjective adjectives
•  Subjectivity Lexicon (Wilson et al., 2005)
•  Finding: women are more often described with subjective adjectives
Examples from Wilson and colleagues' (2005) Subjectivity Lexicon
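The lexicon examples on the original slide were shown as an image. To make the lookup concrete, here is a sketch that reads the publicly distributed MPQA subjectivity clues file and checks whether adjective labels are marked as subjective; the file name and example labels are assumptions, and the entry shown in the comment only illustrates the general format.

```python
# Sketch: look up adjective labels in the MPQA Subjectivity Lexicon
# (Wilson et al., 2005). The file path is an assumption; entries in the
# distributed file look roughly like:
#   type=strongsubj len=1 word1=ugly pos1=adj stemmed1=n priorpolarity=negative

def load_subjectivity_lexicon(path="subjclueslen1-HLTEMNLP05.tff"):
    lexicon = {}
    with open(path) as fh:
        for line in fh:
            fields = dict(kv.split("=", 1) for kv in line.split() if "=" in kv)
            if fields.get("pos1") in ("adj", "anypos"):
                lexicon[fields["word1"].lower()] = fields["type"]  # strongsubj / weaksubj
    return lexicon

lexicon = load_subjectivity_lexicon()
for label in ["sexy", "happy", "ugly", "beautiful", "green", "doctor"]:
    print(label, "->", lexicon.get(label, "not listed as a subjective adjective"))
```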
  
Analysis 1: findings
  
Strongly subjective adjectives

Men (N = 18,916)    Women (N = 14,628)
Happy (385)         Sexy (2,425)
Ugly (225)          Happy (549)
Sad (201)           Ugly (254)
Angry (132)         Sad (241)
Drunk (124)         Cute (117)
Scary (107)         Beautiful (84)
Funny (103)         Fun (67)
Cute (88)           Drunk (58)
Mad (74)            Scary (51)
Fun (63)            Little (40)

Of these top ten adjectives, 2 describe physical appearance for men, versus 5 for women.
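Because the two image sets differ in size, raw counts are not directly comparable across columns. The short sketch below (counts copied from the table; the per-1,000 normalization is my addition, not from the slides) converts them into rates per 1,000 images.

```python
# Sketch: normalize the adjective counts from the table above by the number
# of images in each set (18,916 men / 14,628 women) to get comparable rates.
men_n, women_n = 18_916, 14_628
men = {"Happy": 385, "Ugly": 225, "Sad": 201, "Angry": 132, "Drunk": 124,
       "Scary": 107, "Funny": 103, "Cute": 88, "Mad": 74, "Fun": 63}
women = {"Sexy": 2_425, "Happy": 549, "Ugly": 254, "Sad": 241, "Cute": 117,
         "Beautiful": 84, "Fun": 67, "Drunk": 58, "Scary": 51, "Little": 40}

for group, counts, n in (("Men", men, men_n), ("Women", women, women_n)):
    print(group)
    for adj, count in counts.items():
        print(f"  {adj:<10} {1000 * count / n:5.1f} per 1,000 images")
```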
  
Analysis 1: summary
•  There is a tendency to label images of women with
  •  More abstract language (adjectives)
  •  More evaluative language (subjective adjectives)
•  Limitations of Analysis 1
  •  Automated analysis of the language used
  •  Compared all images regardless of their contexts
  •  Need to see if a robust, manual analysis also suggests a gender-based linguistic bias
  
Analysis 2: method
STEP 1: Find images in similar contexts
•  Identify images with labels concerning 6 occupations: athlete, doctor, singer, soldier / Army, teacher, waiter / waitress
STEP 2: Hand-code each label / image pair
•  1. Could this word be used to describe the physical appearance of [an athlete]?
•  2. Could this word describe the disposition or character of [an athlete]?
•  3. Does this word describe something about the occupation of [athlete]?
STEP 3: Compare proportions of labels in each category across gender
•  Finding: women are associated with more labels concerning appearance; fewer concerning occupation
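STEP 3 compares, across gender, the proportion of labels falling into each category. The slides do not say which statistical test was used, so the sketch below shows one common choice (a chi-square test on a 2x2 contingency table) with invented counts, purely to illustrate the comparison.

```python
# Sketch: compare the proportion of appearance-related labels across gender
# with a chi-square test. Counts are invented placeholders, not study data.
from scipy.stats import chi2_contingency

#               [appearance labels, other labels]
men_counts   = [120, 880]
women_counts = [260, 740]

chi2, p, dof, _expected = chi2_contingency([men_counts, women_counts])
print(f"men:   {men_counts[0] / sum(men_counts):.1%} appearance labels")
print(f"women: {women_counts[0] / sum(women_counts):.1%} appearance labels")
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4g}")
```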
  
Analysis 2: coding process / agreement
•  Coders (3)
  •  Myself + two Mechanical Turkers
•  Cohen's Kappa – interjudge agreement
  •  Physical appearance: κ = 0.69
  •  Disposition: κ = 0.78
  •  Occupation: κ = 0.38
•  Judge 1's labels were used
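For reference, interjudge agreement of the kind reported above can be computed with scikit-learn's Cohen's kappa; the coder ratings below are invented, and only the category names mirror the slide's coding scheme.

```python
# Sketch: Cohen's kappa between two coders over the same label/image pairs.
# Ratings are made up; categories mirror the slide's coding scheme.
from sklearn.metrics import cohen_kappa_score

judge1 = ["appearance", "occupation", "disposition", "appearance", "other", "occupation"]
judge2 = ["appearance", "occupation", "disposition", "other",      "other", "disposition"]

kappa = cohen_kappa_score(judge1, judge2)
print(f"Cohen's kappa = {kappa:.2f}")  # 1.0 = perfect agreement, ~0 = chance-level
```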
  
Analysis 2: findings
Gender-based linguistic bias
•  Both analyses suggest that ESP Game players label images of men and women a bit differently
•  The labels chosen reflect predominant social stereotypes
  •  Women are noted as being (and should be) physically attractive
  •  Women in stereotype-congruent occupations (singer, teacher, waitress) were less likely to be described with professional vocabulary
  
So what?
•  Technologies are cultural artifacts. It's worth reflecting on the values we want them to have:
  •  Technical (e.g., efficient, low-cost generation of image metadata)
  •  Human (e.g., gender and racial equality)
•  Technology's influence, particularly on young people
  •  E.g., Google search
  
Google image search: teacher
  
Limitations of this work
•  I have identified a problem, but I haven't told you whether we should solve it and, if so, how!
•  Food for thought
  •  Does linguistic bias in metadata bother us?
  •  If so, can we build a better game, resulting in better metadata?
  
Next steps
•  Moving from observation to experimentation
  •  The current study was a secondary data analysis
    •  No control over
      •  Players (gender, age, social status)
      •  Stimulus (image content)
•  We have now identified some of the parameters we need to study, and how to study them (e.g., characteristics of language)
•  The system will allow us to examine more systematically how each factor impacts the linguistic biases we observe
  •  Players
  •  Stimulus
  •  Social cues players receive
  
Step 1: Collect demographics
Step 2: Interfaces with various cues
  
Thank you!
•  Contact me:
  •  jahna.otterbacher@ouc.ac.cy
•  This presentation was based on the following recent papers:
  •  Otterbacher, J. 2015. Crowdsourcing Stereotypes: Linguistic Bias in Metadata Generated via GWAP. In Proceedings of the Conference on Human Factors in Computing Systems (ACM CHI'15). ACM Press: New York.
  •  Otterbacher, J. 2015. Linguistic Bias in Collaboratively Produced Biographies: Crowdsourcing Social Stereotypes? In International AAAI Conference on Weblogs and Social Media. AAAI Press: Palo Alto, CA.
  
