11.
PERSONALITY
They are excitable, sentimental and social.
They are hedonistic: they feel their desires strongly and are
easily tempted by them. They are emotionally aware: they
are aware of their feelings and how to express them. And
they are modest: they are uncomfortable being the centre of
attention.
Their choices are driven by a desire for belongingness.
12. What is the state of voice today? Not what is said, but how it is said.
Paralinguistics, tone, loudness, tempo, and voice quality.
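To make "how it is said" concrete, here is a minimal sketch of one paralinguistic feature: framewise loudness, measured as RMS energy. The frame length and the synthetic signal are illustrative assumptions, not part of any tool described in this deck.

```python
import math

def rms_loudness(samples, frame_len=400):
    """Split a mono signal into fixed-length frames and return the
    root-mean-square energy of each frame, a rough proxy for loudness."""
    frames = [samples[i:i + frame_len] for i in range(0, len(samples), frame_len)]
    return [math.sqrt(sum(s * s for s in f) / len(f)) for f in frames if f]

# Illustrative signal: a quiet stretch followed by a loud stretch
# (220 Hz tone at an assumed 8 kHz sample rate, so 400 samples = 50 ms).
quiet = [0.1 * math.sin(2 * math.pi * 220 * t / 8000) for t in range(800)]
loud = [0.8 * math.sin(2 * math.pi * 220 * t / 8000) for t in range(800)]
contour = rms_loudness(quiet + loud)
# The loudness contour rises where the speaker gets louder,
# regardless of what words are being said.
```

A real system would track many such contours (pitch, tempo, voice quality) rather than loudness alone, but the principle is the same: features are computed over short frames, not over the words themselves.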
13. Royal Bank of Scotland | Case Study
Innovator: Pete Evans, Innovation Strategy & Insight at RBS
The Test
15.
90% Similar to Qualitative
STAN: Systematic Classification | Non-Opinionated | Thorough Quantification | Visual | Focused Insights | Faster & Cheaper
Human: Plethora of verbatims | Opinionated | Focus on Text | Exhaustive Reporting
24. 1. GET RID OF BLACK BOXES
Data Science and Analytics is exciting, but many people don't speak the same language, so try to demystify your work to gain traction and support for your projects. Show the human behind the machine.
2. THINK BUSINESS
The more relevant the data is for the business, the more relevant you are to the business. Focus on meaningful insights and start with a business question in mind.
3. TELL STORIES
Data analysis is now the domain of machines. Machines can't create stories, use frameworks or understand clients (YET); you can!
Not doomed to vanish: to gain a seat at the table…
Voice Tech & AI – Changing the way brands interact with customer conversations
There has always been something that gets in the way of our relationship with technology: the keyboard, the mouse, the screen. We're now ready to analyse the most natural and intuitive form of interaction—the voice. RBS wanted to understand how their senior management felt about their business purpose, goals and strategy, so Kantar's Analytics Practice used Artificial Intelligence to structure voice data at scale and decode emotion from voice.
Alexa we have done in the past, but it is more about the fact that we are moving to actual emotions on voice.
New: Speech Analysis
Humans communicate and read emotions in a number of ways: facial expressions, speech, gestures and more. Our vision is to develop artificial emotion intelligence – Emotion AI that can detect emotion just the way humans do, from multiple channels. Our long-term goal is to develop a "Multi-modal Emotion AI" that combines analysis of both face and speech to provide richer insight into the human expression of emotion.
As the first milestone towards our Multi-modal Emotion AI, we have now added speech capabilities to Emotion as a Service.
The speech capabilities allow you to analyze a pre-recorded audio segment, such as an MP3 file, to identify emotion events and gender. The API analyzes not what is said, but how it is said, observing changes in speech paralinguistics, tone, loudness, tempo, and voice quality to distinguish speech events, emotions, and gender. The initial set of metrics include:
Laughing – The action or sound of laughing.
Anger/Irritation – A strong expression of displeasure, hostility, irritation or frustration.
Arousal – The degree of alertness, excitement, or engagement produced by the object of emotion.
Gender – The human perception of gender expression (Male/Female).
The output file provides the analysis on speech events occurring in the audio segment every few hundred milliseconds and not just at the end of the entire utterance.
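Because the output reports speech events every few hundred milliseconds rather than once per utterance, a consumer of the API will typically group and filter those timestamped events. The JSON schema below (`time_ms`, `event`, `confidence` fields) is a hypothetical illustration of such an output, not the actual Emotion as a Service format:

```python
import json

# Hypothetical output payload: the real Emotion as a Service schema is not
# shown in this document, so these field names are assumptions.
raw = json.dumps([
    {"time_ms": 0,   "event": "arousal",  "confidence": 0.42},
    {"time_ms": 250, "event": "arousal",  "confidence": 0.58},
    {"time_ms": 500, "event": "laughing", "confidence": 0.91},
    {"time_ms": 750, "event": "anger",    "confidence": 0.12},
])

def events_by_label(payload, threshold=0.5):
    """Group sub-second events by label, keeping only confident detections."""
    kept = {}
    for row in json.loads(payload):
        if row["confidence"] >= threshold:
            kept.setdefault(row["event"], []).append(row["time_ms"])
    return kept

timeline = events_by_label(raw)
# -> {'arousal': [250], 'laughing': [500]}
```

Working at this granularity is what lets an analyst see, for example, that laughter followed a spike in arousal mid-utterance, something a single end-of-utterance score would hide.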
Note that since speech is still in beta, the Emotion as a Service visualization is only available for facial analysis at this point in time.