Table of Contents
EXECUTIVE SUMMARY .......................................... 2
AI TRANSFORMS THE CUSTOMER EXPERIENCE ...................... 3
HOW AI CHANGES EXPECTATIONS ................................ 5
PRINCIPLES FOR CUSTOMER-CENTRIC AI ......................... 9
BUILDING A CULTURE OF CUSTOMER-CENTRIC AI .................. 16
A CHECKLIST FOR CUSTOMER-CENTRIC AI ........................ 22
A WAY FORWARD .............................................. 24
ENDNOTES ................................................... 25
ABOUT US ................................................... 28
METHODOLOGY ................................................ 28
ACKNOWLEDGEMENTS ........................................... 28
HOW TO WORK WITH US ........................................ 29
Executive Summary
In the next five years, machine intelligence will become
ubiquitous, and technology innovations such as the Internet of Things (IoT), chatbots, and augmented reality will proliferate.
We will interact more and more via talk and text, across a range of devices and locations, in experiences often determined and delivered by machine learning algorithms.
As a result, Artificial Intelligence (AI) will shape our experiences with companies,
products, and services in unprecedented ways.
This report explores the impact of AI on the customer experience, lays out a set
of operating principles, and includes insight from technology users, developers,
academics, designers, and other experts on how to design customer-centric
experiences in the age of AI. More than anything, business leaders today should
begin to treat AI as fundamental to the customer experience. This means thinking
about the values it perpetuates as an essential and eventually indistinguishable
expression of products, services, and brand.
AI Transforms the
Customer Experience
However we define it, whether we know it or not, most of us interact with AI daily. The cluster
of technologies that we think of as AI is fundamental to recommendation engines, search
engines, word processing programs, messaging, personal digital assistants, social networks,
and everyday household items. It’s used in everything from Snapchat’s puppy and flower
crown filters to driverless cars, children’s toys, and predictive analytics.
Now, after decades of stop-and-start growth, AI finally has the momentum to change how
businesses and customers interact.1
IDC expects global spending on cognitive and AI
solutions to achieve a Compound Annual Growth Rate (CAGR) of 54.4% through 2020, when
revenues will exceed $46 billion.2
And many consumers believe that AI is a positive
phenomenon. A recent PwC research study found that 63% of American consumers believe
that “AI will help solve complex problems that plague modern societies.”3
Every day, we see extraordinary advances in healthcare, natural language processing, self-driving cars, image recognition, and other technologies based on AI. But unlike rules-based
systems, machines that can learn—from images, movements, locations, interactions, sounds,
relationships, temperature, past errors and a host of other signals—create a new set of
considerations for customer experiences. While people expect to interact with organizations
in multiple ways—on the web, in person, via email, on social media, or through other
venues—experiences can be quite different when they are informed or even delivered by
systems using machine learning algorithms.
In addition, AI arrives on the scene with a set of preconceptions shaped by popular culture.
Since the 1940s, authors such as Isaac Asimov, Arthur C. Clarke, and William Gibson have
explored the implications of intelligent machines and their impact on society.4 More recently,
luminaries such as Elon Musk and Bill Gates have warned of “The Singularity,” the point at
which they believe machine intelligence will surpass human intelligence.
This report, however, focuses on more immediate business issues: the impact of AI on
customer experiences and what we can learn from technologists, academics, startup
entrepreneurs, think tanks, and business leaders who are building the foundation for a
customer-centric experience of AI. The first step is to look at the norms, processes, and
expectations AI disrupts.
“We can only see a short distance ahead, but we can see plenty there that
needs to be done.”
ALAN TURING, AUTHOR OF COMPUTING MACHINERY AND INTELLIGENCE
How AI Changes Expectations
While the fundamental principles of customer experience haven’t
changed, the environment in which businesses are operating
certainly has.
AI upends many of the norms that govern business interactions. Chatbots
create new interaction models between people and organizations, while the presence
of AI-enabled applications or services expands the possibilities for engagement and
poses important challenges to consider. Figure 1 lays out some of the changes that
affect customer experience.
FIGURE 1: CUSTOMER EXPERIENCE CHALLENGES OF AI

CHANGE: NEW INTERACTION MODEL
DESCRIPTION: Machine learning algorithms introduce a new layer between people and organizations that is sometimes obvious and sometimes hidden. Traditional interaction models (apps, websites) are instrumented and explicit, while a chatbot or Virtual Personal Assistant (VPA) is open-ended. For example, AI is embedded into recommendation engines and ad targeting but embodied in VPAs and driverless cars.
IMPACT: Lack of clarity and confidence in how to interact. In the case of chatbots or driverless cars, it may be unclear:
• What it knows and what it doesn’t
• Whether one is interacting with AI or a rules-based system
• Whether one is interacting with AI or a person
• What it can and can’t do
• How and when to override or escalate

CHANGE: INFORMATION ASYMMETRY & AMPLIFICATION OF BIAS
DESCRIPTION: AI introduces information asymmetry between people and organizations. We may not know for certain whether AI is present (for example, in a recommendation engine), what its agenda may be, or exactly what caused it to reach its conclusion or behave in a particular way. Machine learning algorithms are also prone to bias. Common datasets, such as Word2Vec and ImageNet, have been shown to include intrinsic gender and racial biases that require remediation.
IMPACT: Potential for unfairness and disenfranchisement. This is fairly benign in some use cases (Netflix recommendations) but powerful in others, such as an algorithm that determines the cost of an insurance premium, who gets hired, who is eligible for parole, or who has access to a particular medical treatment.

CHANGE: THE BLACK BOX PROBLEM
DESCRIPTION: It is possible to observe an algorithm’s inputs and outputs but difficult or impossible to diagnose and remediate errors in data or judgment.
IMPACT: Lack of trust in results and risk. In fairness, human decision-making can also be opaque, but organizations that use machine learning algorithms need to be able to understand and explain the justification for their conclusions and decisions (“Explainability”).5

CHANGE: UNCLEAR REDRESS
DESCRIPTION: Few if any ways for people to opt out of algorithmic decision-making. Unclear escalation processes.
IMPACT: Lack of confidence in or frustration with AI-enabled systems and services.
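To make the “black box problem” above more concrete, here is a minimal sketch, not taken from the report, of one common way to probe a model purely through its inputs and outputs. It uses permutation importance from scikit-learn on a synthetic dataset; the dataset, model choice, and feature names are illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions): probe a "black box" model by
# observing only its inputs and outputs. Permutation importance shuffles one
# feature at a time and measures how much held-out accuracy degrades.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for, e.g., an insurance-pricing or hiring model's data.
X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# The explanation step needs only the model's predictions, not its internals.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature_{i}: importance {result.importances_mean[i]:.3f}")
```

Techniques like this do not fully explain an individual decision, but they give organizations a starting point for the kind of “Explainability” described above.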
All of these factors (new interaction models, information asymmetry, amplification of bias,
the “black box problem,” and unclear redress), combined with the sheer novelty and
momentum of AI, strain the norms of customer interactions in unprecedented ways. The
question is: What can businesses learn from customers to focus their efforts not on fighting crises,
but on unlocking AI’s potential to enable innovation?
LEARNING FROM MISTAKES
While it is still early for a set of best practices, there are plenty of “worst practices” and
unintended consequences from which to learn. One of the most widely circulated stories
is a study by researchers from Boston University and Microsoft arguing that “the blind
application of machine learning runs the risk of amplifying biases present in data” and that
Word2Vec, a data set commonly used to train machine learning applications such as search,
recommendation engines, and machine translation, was in their words “blatantly
sexist.” Figure 2 illustrates an example of the “word embeddings” (or associations) that the
researchers discovered between occupation and gender.6
FIGURE 2: EXAMPLES OF MOST EXTREME OCCUPATIONS BY GENDER

EXTREME SHE OCCUPATIONS
1. HOMEMAKER
2. NURSE
3. RECEPTIONIST
4. LIBRARIAN
5. SOCIALITE
6. HAIRDRESSER
7. NANNY
8. BOOKKEEPER
9. STYLIST
10. HOUSEKEEPER
11. INTERIOR DESIGNER
12. GUIDANCE COUNSELOR

EXTREME HE OCCUPATIONS
1. MAESTRO
2. SKIPPER
3. PROTEGE
4. PHILOSOPHER
5. CAPTAIN
6. ARCHITECT
7. FINANCIER
8. WARRIOR
9. BROADCASTER
10. MAGICIAN
11. FIGHTER PILOT
12. BOSS

Source: Bolukbasi, Tolga; Chang, Kai-Wei; Zou, James; Saligrama, Venkatesh; and Adam Kalai. “Man Is to Computer Programmer as Woman Is to Homemaker? Debiasing Word Embeddings.”
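For readers who want to see how such associations surface in practice, here is a minimal sketch, not taken from the study, of probing a pretrained Word2Vec model for gendered associations in the spirit of Bolukbasi et al. It assumes the publicly released GoogleNews vectors and the gensim library; the file path, word list, and single he-she direction are illustrative simplifications of the paper’s method.

```python
# Minimal sketch (illustrative assumptions): project occupation words onto a
# he-she direction in a pretrained word2vec space. Positive scores lean "he,"
# negative scores lean "she." The paper derives its gender direction from many
# word pairs; this uses a single pair for simplicity.
import numpy as np
from gensim.models import KeyedVectors

# Placeholder path for the publicly released GoogleNews embeddings.
kv = KeyedVectors.load_word2vec_format("GoogleNews-vectors-negative300.bin", binary=True)

def gender_score(word: str) -> float:
    direction = kv["he"] - kv["she"]
    return float(np.dot(kv[word], direction) /
                 (np.linalg.norm(kv[word]) * np.linalg.norm(direction)))

for occupation in ["nurse", "receptionist", "librarian", "philosopher", "captain", "boss"]:
    print(f"{occupation:>15}: {gender_score(occupation):+.3f}")
```

Scores like these, taken to their extremes, are what Figure 2 summarizes.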
But algorithmic bias is not confined to text. Machine learning algorithms in image and
facial recognition can amplify bias as well. In 2015, Flickr faced widespread criticism after
it was discovered that its auto-tagging feature mislabeled people and places in offensive
ways. Subsequent stories revealed gender, race, and other types of human bias in machine
learning algorithms taught by images.7
The types of issues these stories raise are directly relevant to any organization that is using
or planning to use machine learning in predictive analytics, ad targeting, recommendation
engines, search, audience segmentation, customer engagement, product/service
development, or a host of other enterprise use cases (see Figure 3).8
FIGURE 3: AI CRISES IN THE HEADLINES

VIRTUAL PERSONAL ASSISTANTS AND CHATBOTS
• JUNE 1, 2016: Google voice searches and keeps records of conversations
• JANUARY 13, 2017: “Koko,” a user on Reddit depression thread, fails to disclose it is a bot
• AUGUST 16, 2017: AI programs learning to exclude some African-American voices
• JUNE 21, 2017: Smart doll fitted with AI chip can read your child’s emotions

SELF-DRIVING CARS
• JUNE 30, 2017: Self-driving cars confused by kangaroos

AUDIENCE SEGMENTATION
• SEPTEMBER 8, 2017: Computers can tell if you’re gay from photos
• AUGUST 21, 2017: Machines taught by photos learn a sexist view of women
• AUGUST 22, 2017: Transgender YouTubers had their videos grabbed to train facial recognition software

AD TARGETING
• SEPTEMBER 21, 2017: Instagram uses “I Will Rape You” post as latest ad
• SEPTEMBER 15, 2017: Google allowed advertisers to target people searching racist phrases

RECOMMENDATION ENGINES
• AUGUST 25, 2017: New app scans your face and tells companies whether you’re worth hiring
• SEPTEMBER 9, 2017: Facebook pulls 9/11 trending topic after it promotes a hoax
• SEPTEMBER 18, 2017: Amazon suggests users purchase dangerous item combinations
Granted, these stories are shocking, and it can be tempting to dismiss them as edge cases,
but they hold three critical lessons:
1. Algorithms are not pristine mathematical formulas for truth.9
2. Intentions are irrelevant; AI-enabled experiences will become a mirror for brand
reputation and corporate values.
3. These are cautionary tales best viewed as an early warning system for many more crises
to come.
The most powerful leaders in AI technology already know this. On October 7, 2017, Alex
Stamos, chief security officer of Facebook, tweeted the following in response to allegations
of Russian interference in the 2016 US election:
“Nobody of substance at the big companies thinks of algorithms as neutral.
Nobody is not aware of the risks.”10
In the meantime, companies such as Google, Microsoft, and Facebook, as well as the academic
and non-profit communities, are conducting research into AI ethics and publishing
comprehensive studies, such as the AI Now Institute 2017 Report, which looks at issues
such as labor and automation, inclusion and bias, and rights and liberties.11 While efforts to
address algorithmic bias will solve some problems for some audiences, it will fall to business
leaders to use AI in a way that augments rather than detracts from the customer experience. The
first step is to establish healthy and sustainable norms for AI, both now and in the future.
This preview version of
“The Customer Experience of AI”
contains only the first seven pages of the report.
To download the entire report, free of charge, please
visit the link below:
http://bit.ly/altimeter-cx-of-ai