The document discusses using design fictions to explore ethics and values for robots and agents through participatory design. It provides background on value sensitive design, participatory design, and design fictions approaches. It describes initial trials using design fictions around nanny bots and eldercare bots that engaged participants in discussing values like reliability and trustworthiness. The document advocates bringing future users into the process as co-creators of design fictions to represent their own values for emerging technologies.
HCIC: Muller and Liao, Participatory Design Fictions
1. Using Participatory Design Fictions
to Explore Ethics and Values for Robots and Agents
Michael Muller and Q.Vera Liao
IBM Research
michael_muller@us.ibm.com, vera.liao@ibm.com
2.
• Full moral agency
• Existential threat
• Superintelligence
• …
3. Robots and Agents @IBM Research
• “Human-agent collaboration”: user trust requires safe, ethical, and socially beneficial AI
• Our focus: exploring methodological possibilities for designing ethically sensitive AI
4. Value Alignment Problem
“The most convincing argument [for the
problem with an intelligence explosion]
has to do with value alignment: You
build a system that’s extremely good at
optimizing some utility function, but the
utility function isn’t quite right.”
Stuart Russell 2014
You get exactly what you ask
for, but not what you want
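Russell's point can be made concrete with a toy optimization sketch. The actions, utility functions, and numbers below are invented purely for illustration (they come from none of the cited works): an agent that perfectly maximizes the stated objective still picks the action its designers would least want, because the objective omits a value.

```python
# Toy illustration of value misalignment: a hypothetical cleaning robot.
# Each action yields (task_progress, safety_cost); numbers are invented.
actions = {
    "careful cleaning":  (0.7, 0.0),
    "reckless cleaning": (1.0, 0.9),  # fastest, but knocks things over
    "do nothing":        (0.0, 0.0),
}

def proxy_utility(progress, safety_cost):
    """What we asked for: task progress only (safety omitted)."""
    return progress

def intended_utility(progress, safety_cost):
    """What we actually wanted: progress minus harm."""
    return progress - safety_cost

best_by_proxy = max(actions, key=lambda a: proxy_utility(*actions[a]))
best_intended = max(actions, key=lambda a: intended_utility(*actions[a]))

print("optimal under the stated objective:", best_by_proxy)   # reckless cleaning
print("optimal under the intended values:", best_intended)    # careful cleaning
```

The agent is not malfunctioning: it optimizes exactly the utility function it was given. The divergence comes entirely from the gap between the proxy and the intended values.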
5. Before we go further…
• AI research underway for value alignment problem
– Top-down rule-based
– Bottom-up learning and evolution
– Inverse reinforcement learning
– Uncertain utility function
– …
But, what values? Whose values? Which values?
6. When Will We Hear from the Users of Robots?
• Šabanović, 2010:
– “The potential users of robotics technologies come to occupy a
secondary role in the process of designing robotic technologies; they
are present in the field as objects of study rather than as active
subjects and participants in the construction of the future uses of
robots.”
7. How Designers & Technologists Think about Users’ Needs
• Cheon and Su, 2016:
– “[R]oboticists’ values are shaped by a dominant engineering-based
background…. A roboticist’s values and perspectives are inseparable
from the robots they construct.”
• Richardson, 2015:
– “[R]obotic scientists were often unaware that the models they used
for their machines were connected with their own personal difficulties
– robots are intertwined with their makers in psycho-physical terms.”
• Cheon and Su, 2017:
– “We focus on how roboticists… ‘configure’ their robot users… ‘What do roboticists think of users?’… [R]oboticists framed users as inevitably transforming from a naïve user to a sensible user equipped to handle their ideal, utilitarian robot.”
8. Going beyond “Users”
• Geraldine Fitzpatrick, 2015,
“Inscribing futures through responsible design by responsible
designers”:
– “As technology designers, we often inadvertently inscribe values and
concepts in systems beyond what we intended. Further, while we aim
to work from a user- and use-centred perspective, we often miss the
perspectives of other critical stakeholders and the broader context in
which technology systems are to be used…”
– “This … calls on us as responsible designers to be reflective
practitioners aware of our power in inscribing possible futures.”
9. And yet: Experts Call for More Experts!
• Berkman Klein Center, 2017
– “[I]t is imperative that AI research and development be shaped by a
broad range of voices—not only by engineers and corporations—but
also social scientists, ethicists, philosophers, faith leaders,
economists, lawyers, and policymakers.”
• See also Mohammad et al. (2016)
– Interview study with experts (high-quality grounded theory method)
– Factor analysis with experts
• Van Wynsberghe and Robbins, 2013
– “[E]thics ought to be pragmatic and to provide utility for the design
process and we maintain that adequate ethical reflection, and all that
it entails, ought to be conducted by an ethicist. Thus, we propose a
novel role for the ethicist—the ethicist as designer— who subscribes
to a pragmatic view of ethics in order to bring ethics into the research
and design of artifacts—no matter the stage of development…”
10. Recommendations from
IEEE Initiatives for Ethical Considerations in AI (2016)
• What values? Call for empirical methods for value inquiry:
“…first identify the sets of norms that AIS need to follow in specific communities
and for specific tasks. Empirical research involving multiple disciplines should
investigate and document these numerous sets of norms and make them
available for designers to implement in AIS”
• Whose values? An inclusive approach
“We posit that being aware of [the possibilities of built-in data and algorithm
bias] and adopting more inclusive design principles can help … we recommend
conducting research …in a participatory way by introducing into the design
process members of the groups who may be disadvantaged by the system.”
• Which values? Focus on multiplicity of values
“We recommend that the priority order of values considered at the design stage
of autonomous systems have a clear and explicit rationale.”
11. Familiar Questions for the HCI/CSCW Community
• What values?
• Whose values?
• Which values?
• How do we know what we know?
12. CSCW/HCI Approaches to Values in Agents
• Robots and Agents
– Part of our current work
• Ethics and Values
– Underlying many designs
– Important for autonomous
technologies + social
technologies
• Participatory Design
– Bring users directly into
discussions about future
technologies
13. Analyzing Technologies that Do Not (yet) Exist
• What to anticipate for user needs and interactions?
• What to expect for ethical standards towards agents?
• Fieldwork of the future (Odom et al., 2012)
– “Designing radically new technology systems that people will want to
use is complex. Design teams must draw on knowledge related to
people’s current values and desires to envision a preferred yet
plausible future... New products and systems typically exist outside of
current understandings of technology and use paradigms…”
14. CSCW/HCI Approaches to Values in Agents
• Robots and Agents
– Part of our current work
• Ethics and Values
– Underlying many designs
– Important for autonomous
technologies + social
technologies
• Participatory Design
– Bring users directly into
discussions about future
technologies
• Design Fictions
– “Fieldworking the future”
15. Some History
• Do artifacts have politics? (Winner, 1980)
– By intention: Long Island Expressway
– By happenstance: Hazardous waste processing
• Do categories have politics? (Suchman, 1993)
• Bias in computer systems (Friedman and Nissenbaum, 1996)
• Collective Resource approach / Participatory Design (Ehn and Kyng, 1987)
17. Value Sensitive Design
• Rationale (Friedman, et al., 2006; Friedman & Kahn, 2003)
– All work is implicitly informed by beliefs, values,
personal+societal+professional ethics
– Understand these powerful influences on design of artifacts and policies
• Three-part methodology (Friedman and Kahn, 2003)
– Theoretical / historical
– Contextual: Who are the stakeholders in the design?
– Empirical (Friedman, Nathan, Hendry)
• Related approaches: Nissenbaum, 1998; Verbeek, 2006
18. Critiques of Value Sensitive Design
• Which values? Whose values?
Omnibus mean overlap coefficient (all pairs) = 0.037

Value lists compared (each with its mean overlap coefficient against the other lists):
• Value Sensitive Design, primary list (Friedman and Kahn, 2003), mean overlap .044: human welfare, ownership & property, privacy, freedom from bias, universal usability, trust, autonomy, informed consent, accountability, identity, calmness, courtesy, sustainability
• VSD, secondary list (Friedman and Kahn, 2003), mean overlap .067: peacefulness, compassion, love, warmth, creativity, humor, originality, vision, friendship, cooperation, collaboration, community, purposefulness, devotion, diplomacy, kindness, musicality, harmony
• CCVSD (van Wynsberghe, 2013), mean overlap .000: attentiveness, responsibility, competence, reciprocity
• Eldercare bot (Draper & Sorell, 2017; Draper, 2014), mean overlap .044: autonomy, safety, enablement, independence, privacy, social connection, security, belonging, continuity, purpose, achievement, significance
• Roboticists (Cheon and Su, 2016), mean overlap .033: reliable, trustworthy, safe, responsible, dignity, transparency, trust, safety, social/sociable interaction skills
• Participatory Design virtues (Steen, 2013), mean overlap .067: cooperation, curiosity, creativity, empowerment, reflexivity
• Military virtues (Vallor, 2013), mean overlap .000: courage, duty, loyalty, integrity, honor, service
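Statistics like the ones above can be reproduced with a short script, assuming “overlap coefficient” is the standard Szymkiewicz–Simpson measure |A ∩ B| / min(|A|, |B|). A minimal sketch, using abbreviated stand-in lists (not the full published lists, so the numbers differ from the slide):

```python
from itertools import combinations

def overlap_coefficient(a, b):
    """Szymkiewicz–Simpson overlap: |A ∩ B| / min(|A|, |B|)."""
    a, b = set(a), set(b)
    if not a or not b:
        return 0.0
    return len(a & b) / min(len(a), len(b))

# Abbreviated stand-ins for the published value lists (illustrative only).
value_lists = {
    "VSD primary":      {"human welfare", "privacy", "trust", "autonomy"},
    "CCVSD":            {"attentiveness", "responsibility", "competence", "reciprocity"},
    "Eldercare bot":    {"autonomy", "safety", "privacy", "independence"},
    "Military virtues": {"courage", "duty", "loyalty", "honor"},
}

# Omnibus mean: average over all distinct pairs of lists.
pairs = list(combinations(value_lists, 2))
omnibus = sum(overlap_coefficient(value_lists[x], value_lists[y])
              for x, y in pairs) / len(pairs)
print(f"omnibus mean overlap = {omnibus:.3f}")

# Per-list mean: each list's average overlap with all the others.
for name in value_lists:
    others = [n for n in value_lists if n != name]
    mean = sum(overlap_coefficient(value_lists[name], value_lists[o])
               for o in others) / len(others)
    print(f"{name}: {mean:.3f}")
```

A low omnibus value, as on the slide, indicates that the published value lists barely agree on which values matter, which is the critique being made.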
19. Critiques of Value Sensitive Design
• Assumes western, liberal, civic values (LeDantec et al, 2009)
– Thought experiments: values of sex-workers (Borning & Muller, 2012); values of warfighters
– Use ethical orientation as starting point to discover values in practice
(van Wynsberghe, 2013)
• “Lack of normative grounding” (Manders-Huits, 2011)
– Are we ready to proclaim normative ethics?
– Or are we still collecting descriptive ethics?
Our focus: Where are the voices of the informants themselves?
20. Value Sensitive Design: Stakeholders
• Direct stakeholders
– People who interact with the technology or the policy
• Indirect stakeholders
– Clients of the direct stakeholders
– “Users of the (direct) user”
– E.g., in a medical setting
• Physician → nurse, medical technologist, hospital staff, social worker…
• Patient → family members, advocate, legal advisor, financial advisor…
– Community members…
• Our focus: “Value Sensitive Inquiry”
– Bring the informants (future users) into the center of the discussion, as members of that discussion
21. Participatory Design
• The users of a design often have design insights from their own
experience
– Users are experts on the work
• Standpoint knowledge
– Users as a group, constituency, stakeholder-group
– Curating the group’s knowledge, and advocating for the group
• Collective Resource Approach (Ehn & Kyng, 1987; Greenbaum & Kyng, 1991)
• Users can make or break a new design or technology
– Reflect the users’ needs (values) in the design
– Buy-in through engagement in design (Muller, Wildman, & White, 1994)
• Workplace democracy (many citations)
22. Participatory Design Methods
• Well-established methods in Participatory Design
– Workshops, Visual and task design, Scenario design, Narrative, Drama…
(Muller and Druin, 2012)
• Capture of other people’s perspectives
– Hypermedia for polyvocal stories in a community (Törpel and Poschen,
2002)
– User-led narrations of work practices (Halskov & Dalsgård, 2006)
• Expression of own (individual or collective) perspectives
– Theatre of the Oppressed (Boal, 1974/1992): Forum theatre
– Acting out future scenarios with low-tech substitutes for technologies
(Ehn and Kyng, 1991; Brandt and Grunnet, 2000)
– Hypermedia: “…plurality, dissent, and moral space…” (Beeson and
Miskelly, 2000) and with non-literate informants (Pecknold, 2009)
– User-staged videos captured via cellphones (Isomursu et al., 2004)
23. Critiques of Participatory Design
• Partisan: It’s (too much of) a labor perspective
– Can PD be scientific?
– How many PD traditions are there?
– When does participation begin to shade into exploitation?
• Limited scope (Friedman and Kahn, 2003)
– Workers
– Managers
– What about the indirect stakeholders?
• Our focus: from “critique of the present” to methods for conceptual “design of the future”, by the users and other stakeholders
24. Design Fictions
• “the deliberate use of diegetic prototypes to suspend
disbelief about change” (Sterling, 2012)
• How can we “fieldwork the future”? (Odom et al., 2012)
– How can we engage multiple voices for alternate possible futures
(Huybrechts et al., 2017)?
– Use stories, narratives, scenarios
– Design traditions in CHI, DIS, GROUP (Blythe, 2017; Bardzell et al.,
2015; Baumer et al., 2014)
• Problems with current popular fictions
– Books: One novelist’s view
– Media: complex interweavings of corporate interests; assumed audience: “adolescent white males”; “priorities of the medium” (Lana Yarosh)
25. Speculative Design Spectrum
• Dunne and Raby (2013)
– Much of design occurs in the Probable zone, and in the very near-term
– Who decides what is Preferable? (Tonkinwise, 2014)
[Diagram: a cone of futures opening from the Present into Probable, Plausible, Possible, and Preferable zones]
(redrawn from Dunne & Raby, 2013)
26. What are Design Fictions for?
• Timeframes of design fictions
– Current/near-term issues and designs (Dunne and Raby, 2002, 2013)
– Anticipated designs (Draper and Sorell, 2017)
– Near-term environments, artifacts, practices (Bleeker, 2009)
– Possible social impacts at large-scale (Kirman et al., 2013)
– 50 years in the future (Baumer et al., 2014)
• What kinds of futures?
– Desired futures
– Speculations and openings to possibilities
– Comparative cases
– Problematic issues
– Disasters to avoid
(spanning utopias to dystopias)
27. Critiques of Design Fictions
• Where is the data?
– What are the data?
– Future users do not yet exist (Hallnäs and Redström, 2006)
• What is the method?
• Designers’ assertions of experience may be treated as veridical
– E.g., an artifact designed to elicit the experience of ambiguity (Gaver et al., 2003)
• Our focus: engage the future users as co-creators of design fictions, to represent their own values as co-creators of future robot/agent technologies
28. Exercise
Group work at tables
• We will distribute a 1-page unfinished
fiction
• Please finish it! (become the authors)
• Discuss with colleagues
• We’ll ask for your comments during
the discussion of the presentation
29. First Approaches to Participatory Design Fictions
Fiction | Domain/theme | Activity | Timeframe
Nanny bot | Family | Sketch a design | Unknown
Eldercare bot | Healthcare | Values contrasts | Unknown
Apple pickers | Labor | Complete the story | 3 years
Weaponized robots | Projection of power | Explore future market | 17 years ago
• How to choose useful domain(s)/themes(s)?
– Ethically important to whom?
– Socially relevant to whom?
– Technologically interesting to whom?
• How to choose effective modes of participation?
– Forms of participation with whom?
• Timeframes
– Suggest some urgency
30. Results of Preliminary Trials of these Fictions
• Acceptance of binaries
– Robot apple-pickers: “Machines are inevitable. He’ll get run out of business if he doesn’t.”
– Robot apple-pickers: “I would hire undocumented workers. Maybe because my family has
a previous history of hiring undocumented workers… I grew up with them.”
• Rejection of binaries
– Robot apple-pickers: “could resolve by having the people work during the day, and the
robots at night.”
– Weaponized robots: “I would probably go for robots that could go into dangerous
situations and save people, instead of shooting them up.”
• Acceptance of technological premise
– Weaponized robots: “With a robot, you’re never going to miss, they’re going to be crazy-
fast, like no one is getting away. Very Terminator-esque. I feel like the big advantage of the
robot cop is you could have an equally-effective robot that is not weaponized…”
• Rejection of technological premise
– Weaponized robots: “I don’t know why society would design a military use robot… From a
police perspective, I don’t think it would ever be possible.”
31. Results of Preliminary Trials of these Fictions (1)
Fictions compared: Nanny bot | Eldercare bot

Values
• Nanny bot: reliable, trustworthy, consistent…; values that reflect ours (our friends’ values may be different…); [attributes] that matter to the baby, not to me…; reliable, punctual, good English; exposed to a multiple-language speaker; quality playtime rather than TV
• Eldercare bot: assistance with washing, dressing, preparing food…; autonomous for routine household work

Robot takes limited control
• Nanny bot: the robot knows what to do immediately, rather than just contacting us (e.g., if the child swallows a small toy and starts to choke…)
• Eldercare bot: put guard on items where someone with dementia could harm self…; should… propose certain exercises from a programmed suite and… learn the ones that Morgan and Kari prefer…

Privacy
• Nanny bot: would not share our settings with anyone else; would be happy to share…
• Eldercare bot: report so that children can take action… note sudden big changes…; summarize each care visit for children to see…
32. Results of Preliminary Trials of these Fictions (2)
Fictions compared: Apple-pickers (migrants or robots) | Weaponized robots (police vs. soldiers)

Accept vs. reject binaries
• Apple-pickers: “I would hire undocumented workers. My family has a previous history of hiring undocumented workers… I grew up with them.”; “[R]esolve by having the people work during the day, and… robots at night.”; “Change the business to pick-your-own”; “[E]mploy human workers for her two most expensive [apple varieties]… [prevent] damage…”
• Weaponized robots: “Police: protect. Soldier: kill.”; “I would… go for robots that could go into dangerous situations and save people, instead of shooting them up.”; “Autonomous robots would save and preserve lives… machines know no fear and never get tired”

Accept vs. reject technology premise
• Apple-pickers: “Machines are inevitable. He’ll get run out of business if he doesn’t.”
• Weaponized robots: “I don’t know why society would design a military use robot… From a police perspective, I don’t think it would ever be possible”

Complete the story
• Apple-pickers: “the politicians… moderate[d] their [exclusionary] policies… Humans were still cheaper than the robot solution, which really needed to be running for much longer than the picking season in order to pay for itself”
• Weaponized robots: “John (the Community Cop Robot) is trained to interact with teens in poor neighborhoods, being able to deescalate conflicts by sitting down with them and listening…”
33. Related Methods
• Focus groups (Draper and colleagues)
– 123 informants: Elders, Informal carers, Formal carers
– Discuss 4 scenarios about values conflicts:
• E.g., autonomy (Care-O-bot obedience) vs. maintenance of long-term
independence (Care-O-bot refuses tasks that an older person could
perform) (Bedaf et al., 2016)
34. Related Methods Overview
• Probes to elicit discussions
– Futuristic stories (Cheon and Su, 2017)
• Interviews with 21 roboticists at conference + 2 universities about stories
– Focus groups (Draper and Sorell, 2014, 2017)
• Fiction to raise questions about eldercare bots, in focus groups
– Forum theatre (Rice et al., 2007)
• Facilitator or audience of a play can stop the action to discuss with the actors
– Applied theatre (Jochum et al., 2016)
• Play combining humans and robots, evaluated through questionnaire
– Envisioning Cards (Friedman and Hendry, 2012)
• Topics to be considered in studies of values
– Value index cards (based on Envisioning cards)
• One value-word per card, about 15 cards, to articulate values in discussion
• Counterfactual scripting (Huybrechts et al., 2017)
– Journalistic approach to reviewing community history, posing what-if
questions about the past, to script polyvocal alternative futures
35. Our Conclusions and Unanswered Questions
• VSD critique of participatory design
– Expand our understandings of diverse stakeholders
• Participatory critique of VSD
– Hear the diverse voices of those diverse stakeholders
• As individuals and as constituencies
• Design fictions
– Who gets to be a storyteller? (hint: Everyone)
– Who has a story to tell? (hint: Everyone)
– Is this the right way to elicit those stories?
– What are the responsibilities of storytellers to other people?
– What are our responsibilities to the storytellers? to their stakeholders?
– How does a group or a community tell…
• A single consensus story?
• A diversity of stories to match the richness of the community?
• How (and who) to translate stories into actions?