ifib Lunchbag: CHI2018 Highlights - Algorithms in (Social) Practice and more

CHI 2018
ACM SIGCHI - Special
Interest Group on
Computer-Human
Interaction
Montréal, Canada
Hendrik Heuer
hheuer@uni-bremen.de
Institute for Information Management Bremen
Information Management Group (AGIM)
Keynote: Christian Rudder
Repurposing emoji for Personalised
Communication: Why 🍕 means "I love you"
• The use of emoji in digital communication can
convey a wealth of emotions and concepts that
otherwise would take many words to
express. Emoji have become a popular form of
communication, with researchers
claiming emoji represent a type of “ubiquitous
language” that can span different languages. In
this paper, however, we explore how emoji are also
used in highly personalised and purposefully
secretive ways. We show that emoji are
repurposed for something other than their
“intended” use between close partners, family
members and friends. We present the range of
reasons why certain emoji get chosen, including
the concept of “emoji affordance” and explore
why repurposing occurs. Normally used for speed,
some emoji are instead used to convey intimate
and personal sentiments that, for many reasons,
their users cannot express in words. We discuss
how this form of repurposing must be considered
in tasks such as emoji-based sentiment analysis.
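To make the sentiment-analysis point concrete, here is a minimal, hypothetical sketch (the lexicon scores and the pizza example are illustrative, not taken from the paper) of why a fixed emoji sentiment lexicon misses repurposed meanings:

```python
# Hypothetical sketch: a fixed emoji sentiment lexicon cannot see
# personalised, repurposed meanings. Scores below are illustrative.

GENERIC_LEXICON = {
    "🍕": 0.1,   # "food": roughly neutral for the general population
    "❤️": 0.9,   # strongly positive
}

def lexicon_sentiment(message: str) -> float:
    """Sum the lexicon scores of every known emoji found in a message."""
    return sum(score for emoji, score in GENERIC_LEXICON.items()
               if emoji in message)

# For a couple who repurposed 🍕 to mean "I love you", the intended
# sentiment of "🍕" matches "❤️", but the lexicon cannot know that:
print(lexicon_sentiment("🍕"))    # 0.1, although the sender means ~0.9
print(lexicon_sentiment("❤️"))   # 0.9
```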
 I Lead, You Help But Only with Enough Details: Understanding
User Experience of Co-Creation with Artificial Intelligence
• Recent advances in artificial intelligence (AI) have increased the opportunities for users to
interact with the technology. Now, users can even collaborate with AI in creative activities
such as art. To understand the user experience in this new user–AI collaboration, we
designed a prototype, DuetDraw, an AI interface that allows users and the AI agent to
draw pictures collaboratively. We conducted a user study employing both quantitative and
qualitative methods. Thirty participants performed a series of drawing tasks with the think-
aloud method, followed by post-hoc surveys and interviews. Our findings are as follows:
(1) Users were significantly more content with DuetDraw when the tool gave detailed
instructions. (2) While users always wanted to lead the task, they also wanted the AI to
explain its intentions but only when the users wanted it to do so. (3) Although users rated
the alternative low in predictability, controllability, and comprehensibility, they enjoyed
their interactions with it during the task. Based on these findings, we discuss implications
for user interfaces where users can collaborate with AI in creative works.
Empowerment in HCI - A Survey and
Framework
• Empowering people through technology is of
increasing concern in the HCI community.
However, there are different interpretations
of empowerment, which diverge substantially.
The same term thus describes an entire
spectrum of research endeavours and goals. This
conceptual unclarity hinders the development of
a meaningful discourse and exchange. To better
understand what empowerment means in our
community, we reviewed 54 CHI full papers
using the terms empower and empowerment.
Based on our analysis and informed by prior
writings on power and empowerment, we
construct a framework that serves as a lens to
analyze notions of empowerment in current HCI
research. Finally, we discuss the implications of
these notions of empowerment on approaches to
technology design and offer recommendations
for future work. With this analysis, we hope to
add structure and terminological clarity to this
growing and important facet of HCI research.
'It's Reducing a Human Being to a Percentage':
Perceptions of Justice in Algorithmic Decisions
• Data-driven decision-making consequential to individuals raises
important questions of accountability and justice. Indeed, European law
provides individuals limited rights to 'meaningful information about the
logic' behind significant, autonomous decisions such as loan approvals,
insurance quotes, and CV filtering. We undertake three experimental
studies examining people's perceptions of justice in algorithmic
decision-making under different scenarios and explanation styles.
Dimensions of justice previously observed in response to human
decision-making appear similarly engaged in response to algorithmic
decisions. Qualitative analysis identified several concerns and heuristics
involved in justice perceptions including arbitrariness, generalisation,
and (in)dignity. Quantitative analysis indicates that explanation styles
primarily matter to justice perceptions only when subjects are exposed
to multiple different styles---under repeated exposure of one style,
scenario effects obscure any explanation effects. Our results suggest
there may be no 'best' approach to explaining algorithmic decisions, and
that reflection on their automated nature both implicates and mitigates
justice dimensions.
A Qualitative Exploration of Perceptions of
Algorithmic Fairness
• Algorithmic systems increasingly shape information people are
exposed to as well as influence decisions about employment, finances,
and other opportunities. In some cases, algorithmic systems may be
more or less favorable to certain groups or individuals, sparking
substantial discussion of algorithmic fairness in public policy circles,
academia, and the press. We broaden this discussion by exploring how
members of potentially affected communities feel about algorithmic
fairness. We conducted workshops and interviews with 44
participants from several populations traditionally marginalized by
categories of race or class in the United States. While the concept of
algorithmic fairness was largely unfamiliar, learning about algorithmic
(un)fairness elicited negative feelings that connect to current national
discussions about racial injustice and economic inequality. In addition
to their concerns about potential harms to themselves and society,
participants also indicated that algorithmic fairness (or lack thereof)
could substantially affect their trust in a company or product.
Communicating Algorithmic Process in
Online Behavioral Advertising
• Advertisers develop algorithms to select the most relevant
advertisements for users. However, the opacity of these
algorithms, along with their potential for violating user privacy,
has decreased user trust and preference in behavioral
advertising. To mitigate this, advertisers have started to
communicate algorithmic processes in behavioral
advertising. However, how revealing parts of the algorithmic
process affects users' perceptions towards ads and
platforms is still an open question. To investigate this, we
exposed 32 users to why an ad is shown to them, what
advertising algorithms infer about them, and how advertisers
use this information. Users preferred interpretable, non-
creepy explanations about why an ad is presented, along
with a recognizable link to their identity. We further found
that exposing users to their algorithmically-derived attributes
led to algorithm disillusionment---users found that advertising
algorithms they thought were perfect were far from it. We
propose design implications to effectively communicate
information about advertising algorithms.
Towards Algorithmic Experience: Initial
Efforts for Social Media Contexts
• Algorithms influence most of our daily activities and decisions, and they guide our
behaviors. It has been argued that algorithms even have a direct impact on
democratic societies. Human-Computer Interaction research needs to develop
analytical tools for describing the interaction with, and experience of algorithms.
Based on user participatory workshops focused on scrutinizing Facebook’s
newsfeed, an algorithm-influenced social medium, we propose the concept of
Algorithmic Experience (AX) as an analytic framing for making the interaction
with and experience of algorithms explicit. Connecting it to design, we articulate
five functional categories of AX that are particularly important to cater for in social
media: profiling transparency and management, algorithmic awareness and
control, and selective algorithmic memory.
Fairness and Accountability Design Needs for Algorithmic
Support in High-Stakes Public Sector Decision-Making
• Calls for heightened consideration of fairness and accountability in
algorithmically-informed public decisions—like taxation, justice, and
child protection—are now commonplace. How might designers support
such human values? We interviewed 27 public sector machine learning
practitioners across 5 OECD countries regarding challenges
understanding and imbuing public values into their work. The results
suggest a disconnect between organisational and institutional
realities, constraints and needs, and those addressed by current
research into usable, transparent and 'discrimination-aware' machine
learning—absences likely to undermine practical initiatives unless
addressed. We see design opportunities in this disconnect, such as in
supporting the tracking of concept drift in secondary data sources, and
in building usable transparency tools to identify risks and incorporate
domain knowledge, aimed both at managers and at the 'street-level
bureaucrats' on the frontlines of public service. We conclude by outlining
ethical challenges and future directions for collaboration in these high-
stakes applications.
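The paper names "tracking of concept drift in secondary data sources" as a design opportunity without prescribing an implementation. As one hedged sketch of what such tracking could look like, a monitoring job might compare a feature's recent distribution against a reference window with a two-sample test; the Kolmogorov-Smirnov test, window sizes, and threshold below are all assumptions:

```python
# Hedged sketch: flag drift in a secondary data source by comparing a
# recent window of a feature against a reference window with a
# two-sample Kolmogorov-Smirnov test. Windows and alpha are assumptions.
import numpy as np
from scipy.stats import ks_2samp

def has_drifted(reference: np.ndarray, recent: np.ndarray,
                alpha: float = 0.01) -> bool:
    """True if the recent sample is unlikely to share the reference's distribution."""
    _statistic, p_value = ks_2samp(reference, recent)
    return p_value < alpha

rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)  # e.g. historical records
recent = rng.normal(loc=0.4, scale=1.0, size=1_000)     # the source has shifted
print(has_drifted(reference, recent))  # True: inputs no longer match training data
```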
Explanations as Mechanisms for
Supporting Algorithmic Transparency  
• Transparency can empower users to make informed
choices about how they use an algorithmic decision-
making system and judge its potential
consequences. However, transparency is often
conceptualized by the outcomes it is intended to
bring about, not the specifics of mechanisms to
achieve those outcomes. We conducted an online
experiment focusing on how different ways of
explaining Facebook's News Feed algorithm might
affect participants' beliefs and judgments
about the News Feed. We found that all explanations
caused participants to become more aware of how
the system works, and helped them to determine
whether the system is biased and if they can control
what they see. The explanations were less effective
for helping participants evaluate the correctness of
the system's output, and form opinions about how
sensible and consistent its behavior is. We present
implications for the design of transparency
mechanisms in algorithmic decision-making systems
based on these results.
Falling for Fake News: Investigating the
Consumption of News via Social Media
• In the so-called ‘post-truth’ era, characterized by a loss of public
trust in various institutions, and the rise of ‘fake news’
disseminated via the internet and social media, individuals may
face uncertainty about the veracity of information available,
whether it be satire or malicious hoax. We investigate attitudes
to news delivered by social media, and subsequent verification
strategies applied, or not applied, by individuals. A survey reveals
that two thirds of respondents regularly consumed news via
Facebook, and that one third had at some point come
across fake news that they initially believed to be true. An analysis
task involving news presented via Facebook reveals a diverse range
of judgement forming strategies, with participants relying on
personal judgements as to plausibility and scepticism around
sources and journalistic style. This reflects a shift away from
traditional methods of accessing the news, and highlights the
difficulties in combating the spread of fake news.
Amplifying Quiet Voices: Challenges and Opportunities
for Participatory Design at an Urban Scale
• Many Smart City projects are beginning to consider the role of
citizens. However, current methods for engaging urban populations
in participatory design (PD) activities are somewhat limited. In this
article, we describe an approach taken to empower socially
disadvantaged citizens, using a variety of both social and
technological tools, in a Smart City project. Through analysing the
nature of citizens’ concerns and proposed solutions, we explore the
benefits of our approach, arguing that engaging citizens can uncover
hyper-local concerns that provide a foundation for finding solutions
to address citizen concerns. By reflecting on our approach, we
identify four key challenges to utilising PD at an urban scale:
balancing scale with the personal; who has control of the process;
who is participating; and integrating citizen-led work with local
authorities. By addressing these challenges, we will be able to truly
engage citizens as collaborators in co-designing their city.
Examples of the citizen-led projects from the engagement described in the paper:
1. Creating a treasure hunt app on the cycle path network.
2. Developing an app to collect problems on the cycle path network.
3. Recording videos of key cycle path routes.
4. A pop-up shop for recycled furniture.
5. An advertising scheme for low cost solar installations.
6. Drilling a borehole at an allotment site.
7. A food passport scheme to promote independent food.
8. An app to promote breastfeeding-friendly locations.
9. Developing an app for visually-impaired navigation.
10. A series of community workshops on the Raspberry Pi computer.
11. Developing an age-friendly map of Milton Keynes.
12. Exploring ways of reducing food packaging waste.
13. Detailed data collection regarding fuel poverty.
Privacy Lies: Understanding How, When, and Why People
Lie to Protect Their Privacy in Multiple Online Contexts
• In this paper, we study online privacy lies: lies
primarily aimed at protecting privacy. Going
beyond privacy lenses that focus on privacy
concerns or cost/benefit analyses, we explore how
contextual factors, motivations, and individual level
characteristics affect lying behavior through a
356-person survey. We find that statistical models
to predict privacy lies that include attitudes about
lying, use of other privacy-protective behaviors
(PPBs), and perceived control over information
improve on models based solely on self-expressed
privacy concerns. Based on a thematic analysis of
open-ended responses, we find that the decision to
tell privacy lies stems from a range of concerns,
serves multiple privacy goals, and is influenced by
the context of the interaction and attitudes about
the morality and necessity of lying. Together, our
results point to the need for conceptualizations of
privacy lies — and PPBs more broadly — that
account for multiple goals, perceived control over
data, contextual factors, and attitudes about PPBs.
The Dark (Patterns) Side of UX Design
• Interest in critical scholarship that engages with the complexity of
user experience (UX) practice is rapidly expanding, yet the
vocabulary for describing and assessing criticality in practice is
currently lacking. In this paper, we outline and explore the limits of a
specific ethical phenomenon known as "dark patterns," where user
value is supplanted in favor of shareholder value. We assembled
a corpus of examples of practitioner-identified dark patterns and
performed a content analysis to determine the ethical concerns
contained in these examples. This analysis revealed a wide range
of ethical issues raised by practitioners that were frequently
conflated under the umbrella term of dark patterns, while also
underscoring a shared concern that UX designers could easily
become complicit in manipulative or unreasonably persuasive
practices. We conclude with implications for the education and
practice of UX designers, and a proposal for broadening research
on the ethics of user experience.
Observations on Typing from 136 Million Keystrokes
• We report on typing behaviour and performance
of 168,000 volunteers in an online study. The
large dataset allows detailed statistical analyses
of keystroking patterns, linking them to typing
performance. Besides reporting distributions
and confirming some earlier findings, we report
two new findings. First, letter pairs that are
typed by different hands or fingers are more
predictive of typing speed than, for example,
letter repetitions. Second, rollover-typing,
wherein the next key is pressed before the
previous one is released, is surprisingly
prevalent. Notwithstanding considerable
variation in typing patterns, unsupervised
clustering using normalised inter-key intervals
reveals that most users can be divided into eight
groups of typists that differ in performance,
accuracy, hand and finger usage, and rollover.
The code and dataset are released for scientific
use.
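Since the paper's own code and dataset are released, the following is only a toy illustration, not that code: a sketch of how rollover (the next key pressed before the previous one is released) and normalised inter-key intervals, the kind of feature the clustering uses, can be computed from per-keystroke timestamps. The field layout and example timings are assumptions.

```python
# Toy illustration (not the authors' released code): rollover means the
# next key is pressed before the previous key is released. Timestamps
# are in milliseconds; the example data are invented.

keystrokes = [  # (key, press_ms, release_ms)
    ("t", 0, 95),
    ("h", 80, 160),   # pressed at 80 < previous release at 95 -> rollover
    ("e", 200, 260),
]

def rollover_rate(strokes):
    """Fraction of consecutive key pairs typed with rollover."""
    pairs = zip(strokes, strokes[1:])
    hits = sum(1 for (_, _, prev_release), (_, press, _) in pairs
               if press < prev_release)
    return hits / (len(strokes) - 1)

# Normalised inter-key intervals: press-to-press gaps divided by their mean.
presses = [press for _, press, _ in keystrokes]
intervals = [b - a for a, b in zip(presses, presses[1:])]
mean_iki = sum(intervals) / len(intervals)
normalised = [iki / mean_iki for iki in intervals]

print(rollover_rate(keystrokes))  # 0.5
print(normalised)                 # [0.8, 1.2]
```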