My name is Dr. Barbara Ondrisek, and today I'm going to give a talk about the Security and Privacy of Chatbots.
About me: I did my PhD at the University of Technology Vienna – so this is my alma mater – and my PhD adviser, Peter Purgathofer, is speaking right after me about e-voting.
I’ve been working as a freelancer and consultant for more than 15 years for different, mostly big, companies. I have a strong backend background, and in the last couple of years I’ve been working as a senior backend developer with an inclination toward full stack and mobile.
I also love to play around with other technologies – lately with Ionic or AngularJS (especially in my last project, George, for the Erste Group, Austria’s largest bank). But despite all those years of freelancing and consulting, I was always lacking that one super idea to found my own startup.
And when Facebook announced at their F8 conference in mid-April that they were opening up their Messenger platform to bots, I was eager to try their API!
So I created one of the very first chatbots on Facebook – and definitely Austria’s first Facebook Messenger and Skype chatbot: Mica, the Hipster Cat Bot.
Mica, the Hipster Cat Bot is a chatbot that helps you discover the best places nearby.
Mica started as a spin-off of LIKE A HIPSTER, an app that shows you trendy places.
Meanwhile I have developed Mica, the Hipster Cat Bot for Facebook Messenger, Skype, Telegram, WeChat and Kik.
What are chatbots?
A chatbot is a service that enables you to interact with a service or company through a conversational interface. It’s a computer program that can have an (intelligent) conversation with one or more human users. Chatbots are also referred to as “virtual assistants” or as part of “conversational commerce”.
At the end of March, Microsoft announced bot support on Skype at their BUILD conference. Two weeks later, Facebook announced at their F8 conference that they had finally opened their Messenger API for bots, and the first bots started to be approved by Facebook. Only a few days later, Hi Poncho, a Facebook Messenger weather bot, raised $2M in funding.
Furthermore, Google presented another innovative chat platform, Allo, at their Google I/O (mid-May), which is also supposed to support bots. In mid-July they also announced their Cloud Natural Language API, a Natural Language Processing (NLP) and machine learning framework. IBM has likewise released its NLP framework, Watson. So all of the big players in the field are pushing the topic.
Just a couple of days ago, Google announced that they had bought api.ai, a machine learning framework.
Chatbots are not a super new thing! OK, Facebook and Google have announced something, but not every new project becomes a success... what about Google Glass, for instance?
Historically speaking, the very first chatbot was developed at the MIT AI Lab by the computer scientist Joseph Weizenbaum in the mid-1960s. This bot, ELIZA, simulated a Rogerian psychiatrist, and with this project Weizenbaum wanted to find out how natural language can be used in human-computer interfaces. ELIZA was programmed to analyze the input for key words and to respond with a number of canned phrases containing therapeutic language.
The first computers were also designed in this manner: as question-answer systems.
And there were also text-based computer games in the 80s, such as Zork.
(c) Benjamin Keyser
Why are we doing this?
One of every three minutes spent online is spent on mobile devices, but we see that the usage of different apps is dropping: 80% of mobile time is spent in only three different apps – but this does not apply to messenger apps! The current app trend is moving away from social media toward messengers.
This graph is already outdated, it’s from 2015...
Facebook Messenger: 1 billion monthly active users (MAU)
WhatsApp: 1 billion MAU
QQ (“ICQ” of China): 900 million MAU
WeChat (“WhatsApp” of China): over 700 million MAU
Skype: 300 million MAU
Line (Asia): 220 million MAU
Telegram: 100 million MAU
Kik (USA): 300 million registered users (total)
Slack: 3 million daily active users
And others with several million users, such as Twitter (310 million MAU), iMessage (250 million users), Viber (784 million registered users), HipChat, Kakao, BBM, VKontakte etc.
(c) david pichsenmeister
Different messenger platforms are used in different countries.
Kik, for instance, is super popular in the US (especially with teenagers), Viber (owned by a Japanese company) is very popular in Slavic countries, and Line is popular in Asian countries. So if you decide to launch a bot in a certain region, take these regional differences into account.
Messengers are widely used – but what about data security and privacy in messenger apps and their chatbots?
Messenger app providers are expected to keep their users’ data private. Messaging is still a private and intimate thing, and so the conversations between a user and a chatbot owner are also expected not to be shared publicly without the user’s explicit consent – but what about the security of the platforms?
“In the face of widespread Internet surveillance, we need a secure and practical means of talking to each other from our phones and computers.
Many companies offer “secure messaging” products—but are these systems actually secure? The Electronic Frontier Foundation decided to find out and created the Secure Messaging Scorecard.
Version 1.0 of [their] scorecard evaluated apps and tools based on a set of seven specific criteria ranging from whether messages were encrypted in transit to whether or not the code had been recently audited.
Though all of those criteria are necessary for a tool to be secure, they can't guarantee it; security is hard, and some aspects of it are hard to measure.”
The scorecard is from Nov. 2014.
As you can see on this scorecard, most messenger programs encrypt the message during transit, but some messengers, such as Kik, haven’t even been audited recently.
Some messengers open up their source code to independent review. Most of the messengers analyzed by the EFF offer no way to verify the identity of a contact (only Signal and WhatsApp provide this feature).
Some messenger apps are end-to-end encrypted, such as WhatsApp and Signal, meaning that the platform’s server is not “reading” the conversation. With bots, however, both the platform provider and the bot provider usually see the conversation unencrypted.
The only messenger that would get an A grade is Signal; Skype, however, is also widely used (300 million MAU) and, like Kik, would get a very bad grade.
Some of these messengers – such as Telegram, Skype, Facebook Messenger and Kik – provide an API for bots.
In 2016 Viber also added end-to-end encryption to their service, but only for one-to-one and group conversations in which all participants are using the latest Viber version.
Similar criticism has been leveled at Allo, the new AI-based messaging app from Google, which has end-to-end encryption turned off by default.
Meanwhile the race for the next major platform for chatbots has started. Microsoft and Facebook, among others, are competing to be the go-to place for bot developers. Their pitch is that any business, big or small, will be able to easily design an intuitive user experience via their bot platforms.
The paradox is that in messenger apps the majority of conversations are private between two people.
With bots we are entering a new era from a privacy perspective. We will be completing the new shift in data control: From the user to the messenger app provider.
The same is already happening in China with WeChat and QQ, where people integrate the messenger app far more into their intimate personal lives – through micro-payments to friends, or by paying bills or rent in WeChat.
WeChat pay offers a lot of different services and became a single medium for all transactions — and Messenger wants to become this for the West.
Personal data is worth a lot for Facebook or Google – and bot platforms were not designed with privacy in mind.
Chatbots may also analyze data with external tools for Natural Language Processing and intent understanding, and usually the data is not encrypted either when it is sent to tools such as wit.ai, api.ai or IBM Watson. These cloud-based APIs process users’ input for intelligent analysis.
Usually bots don’t know much about their users initially – typically just something like a name and a screen name.
In Facebook Messenger, a bot is basically a FB app connected to a FB page that has no access to a user’s FB profile. But there are workarounds to match these page-scoped user IDs to FB profiles – and then you expose all your data to the bot.
And this is only what the bot receives through the APIs of the messenger platform. Think of all the data you send to the bot yourself: it is super easy to create character studies based on the text you send to a program.
For instance, there is sentiment analysis: the process of computationally identifying and categorizing opinions expressed in a piece of text, especially in order to determine whether the writer's attitude towards a particular topic, product, etc. is positive, negative or neutral.
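To make the idea concrete, here is a deliberately toy sketch of sentiment analysis in Python – just counting positive versus negative words. The word lists and function are hypothetical illustrations; real services like the cloud NLP APIs mentioned above use machine learning, but the principle is the same: text goes in, an attitude label comes out.

```python
# Toy sentiment analysis: count positive vs. negative words in a message.
# The word sets below are illustrative only, not a real lexicon.
POSITIVE = {"love", "great", "awesome", "thanks", "good"}
NEGATIVE = {"hate", "bad", "terrible", "awful", "stupid"}

def sentiment(text: str) -> str:
    """Classify a message as positive, negative or neutral."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this bot, it is awesome!"))  # positive
print(sentiment("This is terrible, I hate it."))     # negative
```

Even this crude scoring shows how little text a bot needs before it can start labeling a user's mood.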
However, consider also who else might be listening to your conversation – and I don’t only mean the bot developer or project managers: currently the conversation between bot and user passes through the servers of the messenger app, so Facebook or Google also listens to everything you say to a bot! Bots usually also store contextual data such as a geo-location or a state. This could also be a telephone number or other private data – and no one knows whether the data is encrypted before it is saved to a database.
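One mitigation a bot developer can apply is to avoid storing sensitive fields in the clear at all. The sketch below, using only Python's standard library, pseudonymizes a value (here a made-up phone number) with a keyed HMAC before it would be persisted; the key name and values are hypothetical, and a production bot would use a vetted encryption library and proper key management rather than this minimal illustration.

```python
# Sketch: pseudonymize sensitive contextual data (e.g. a phone number)
# before storing it, using an HMAC with a server-side secret key.
import hashlib
import hmac

SECRET_KEY = b"example-secret-key"  # hypothetical; load from a secure store

def pseudonymize(value: str) -> str:
    """Return a stable, non-reversible token for a sensitive value."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()

# The bot stores and compares tokens without keeping the raw number.
token = pseudonymize("+43 660 1234567")
print(token != "+43 660 1234567")                 # True: raw value never stored
print(token == pseudonymize("+43 660 1234567"))   # True: lookups still work
```

Because the same input always yields the same token, the bot can still recognize a returning user or deduplicate records, yet a leaked database does not directly expose phone numbers.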
People, especially teenagers and seniors, tend to text with the bot more. You design your bot for one number-one use case, but people start to chat with the bot. Studies show that seniors tend to chat with Siri when they are lonely – and the same happens with bots that are capable of conversation.
Users also tend to text with bots as if “no one is listening”. When Weizenbaum was studying ELIZA, he realized that one test subject felt ashamed when he entered the room and said: “Sorry, but I’m currently talking to ELIZA!”
Another interesting aspect is that people react emotionally to bots – they love them and tell the bot so, or they hate them and start to use foul language. Depending on the data you receive, you can create personality profiles of bot users. So be careful what you write to a bot and what data you expose on these platforms.