What will we cover?
Subsets of machine learning and their different use cases, particularly pertaining to communication apps
Start off with a meme: Jokes aside, AI is more than if/else statements. Although here’s a secret: I do sometimes still make chatbots using if/else statements, but we will soon see how Autopilot makes it easy to build them without conditionals.
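To make the joke concrete, here is a minimal sketch of the if/else approach. Everything here is made up for illustration (the intents, keywords, and replies are not from any real bot); the point is that every phrasing has to be anticipated by hand.

```python
# Toy if/else "chatbot": each phrasing must be hard-coded.
# Intent keywords and replies below are invented for this sketch.
def reply(message: str) -> str:
    text = message.lower()
    if "specials" in text or "menu" in text:
        return "Today's special is truffle pasta."
    elif "reservation" in text or "book a table" in text:
        return "Sure, what time would you like?"
    else:
        # Any unanticipated phrasing falls through here.
        return "Sorry, I didn't understand that."
```

Notice that a caller who says "Can I reserve a spot?" falls through to the fallback, because neither hard-coded keyword matches. An NLU-backed bot trained on sample phrases handles that paraphrase without another elif.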
AI = an automated system capable of analyzing data and making choices autonomously (one of several possible definitions). Indeed, this is what often leads people to link artificial intelligence with chatbots. Two different types of artificial intelligence can be distinguished, based on the extent to which human cognitive functions are replicated:
ML = a process in which a machine, for example a chatbot, is endowed with the capacity to learn automatically from data.
Deep Learning = a subcategory of machine learning that permits hierarchical learning from a large quantity of information. In other words, the machine processes data in order of complexity, building its own understanding of a reality with the aid of a neural network. The name deep learning alludes to the fact that the system functions in layers.
In short: Machine learning uses algorithms to parse data, learn from that data, and make decisions based on what it has learned. Deep learning structures algorithms in layers to make an “artificial neural network” that can learn and make intelligent decisions on its own. While both fall under the broad category of artificial intelligence, deep learning powers the more human-like artificial intelligence.
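The "layers" idea can be sketched in a few lines of plain Python. This is only a toy forward pass, not a trained model: the weights and biases below are arbitrary numbers chosen for illustration, and a real network would learn them from data.

```python
import math

def layer(inputs, weights, biases):
    """One dense layer: a weighted sum per neuron, squashed by a sigmoid."""
    return [
        1 / (1 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
        for ws, b in zip(weights, biases)
    ]

# Two stacked layers: the output of one becomes the input of the next.
# That stacking is the "deep" in deep learning.
hidden = layer([0.5, -1.0], weights=[[1.0, 2.0], [-1.0, 1.0]], biases=[0.0, 0.5])
output = layer(hidden, weights=[[1.5, -0.5]], biases=[0.0])
```

Each layer transforms its input into a slightly more abstract representation, which is the hierarchical processing described above.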
Subsets of AI, which is a large, overarching subject. Alternatively, I call this AI, ML, WTF?
Natural Language Processing sits at the intersection of artificial intelligence, machine learning, and linguistics. Many NLP tasks make life easier for your users: speech recognition, document summarization, machine translation, spam detection, named entity recognition, question answering, autocomplete, predictive typing, detecting toxic chat messages, and so on.
Subset of NLP = NLU (Natural Language Understanding), the processing chain that makes sense of a phrase. This is where the user's intent is identified.
Computer vision is a field of AI that trains computers to interpret and understand the visual world. Using digital images from cameras and videos plus deep learning models, machines can accurately identify and locate objects, then react to what they “see.” Image recognition, object detection, pose detection, facial recognition (which is not very secure), detecting if someone is wearing a mask or touching their face… so many capabilities you can add to your apps and companies, especially as more and more meetings happen over video and phone!
Personal assistants, many of which are ML-based, like Siri and Alexa. They answer questions and help with tasks like providing directions or turning on lights, making life easier and more entertaining. So are many chatbots and IVRs. (click)
How can ML be used with Twilio? Twilio provides communication APIs…
Image detection, object recognition
Pose detection in a video chat (computer vision)
Real-time analysis of a phone call
Detect toxic language, insults in a chat room
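As a sketch of that last use case, here is the shape of a chat-message filter. To be clear, this is a toy stand-in: a real deployment would call a trained classifier or a moderation service, not a hand-made word list, and the words below were invented for this example.

```python
# Toy stand-in for a toxicity model. A real chat filter would use an
# ML classifier or moderation API; this only shows the pipeline shape:
# normalize the message, score it, return a flag.
TOXIC_WORDS = {"idiot", "stupid", "loser"}  # illustrative list only

def flag_message(message: str) -> bool:
    """Return True if the message should be flagged for review."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return bool(words & TOXIC_WORDS)
```

In a chat room integration, flagged messages could be held back or routed to a human moderator instead of being posted.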
Some ML vocabulary.
Chatbots respond to the questions most relevant to the business. They can be limited and get stuck: have you ever ordered a pizza by phone with a bot and then tried to change your order? Still, for use cases like answering FAQs, playing a song, or telling you the weather, chatbots shine; they can handle restaurant orders, take in answers about finances, schedule appointments, and more.
Actions in Weak AI systems/machines are pre-programmed by a human. Weak AI can only simulate human behavior and is trained for a single task. Examples of Weak AI include navigation systems, voice recognition, character recognition, suggested corrections in searches, AlphaGo, and Deep Blue, which played chess. On the other hand…
Strong AI uses more complex algorithms to act appropriately in different situations. It has a mind of its own.
Strong: reproduces human cognition, with less programming and more training. When presented with a situation it does not know or is unfamiliar with, it has enough intelligence to come up with a solution. This does not really exist yet, but examples would include Skynet, the Terminator, C-3PO…
A simplification of the human brain: neural networks can adapt to changing input, so the network generates the best possible result without needing to redesign the output criteria. Useful for generating text and music, predicting what’s next (as in predictive analysis), image recognition, and speech processing.
Devised by England’s Alan Turing in the 1950s.
Is a computer indistinguishable from a human,
judging by the replies to questions put to both?
Today it’s still considered one of the most valid ways of judging the level of artificial intelligence attained by a machine.
Customers want bots and businesses want to build them. 70% of consumers today expect a self-service option in the form of bots or IVRs for handling commercial questions and complaints. Still, this doesn’t preclude the need for a personal solution: If self-service falls short, personal contact as a safety net is an absolute necessity.
You could have a DTMF keypad IVR (as shown on the left), but that’s limiting. (click) That’s annoying.
This conversational IVR (on the right) was made with Twilio Autopilot, a bot-building platform where you build once and then deploy across multiple channels like phone calls, SMS, Messenger, Alexa, and more.
(click)
Solution? Build a bot once, deploy across channels. Autopilot integrates with different channels and platforms and handles the NLP backend so you don’t have to worry about it. Your bot will have the intelligence to understand everything your customers are saying, including all the permutations and combinations of ways to say the same thing, so you don’t need if/else. At the same time, if it doesn’t understand the customer (which is always a possibility; AI isn’t perfect), your bot needs the flexibility to escalate to agents. And when it does escalate to an agent, the agent needs to be empowered to provide all the information in the most seamless way possible, while holding the context of past conversations.
An Autopilot bot is made up of Tasks, which are what you want your bot to accomplish. Tasks contain JSON Actions that can say/send back text, collect answers in a series of questions, redirect to a different server or application to handle the input or save it to a database, show an image, and more. Here we have a “DocBot”.
That make_appointment task is made in the JSON bin with this JSON:
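The exact DocBot JSON isn’t reproduced here, so the payload below is an illustrative reconstruction: the questions, field names like appt_day, and the redirect URL are made up, while say, collect, and on_complete/redirect are Autopilot’s documented Action types.

```json
{
  "actions": [
    {
      "collect": {
        "name": "make_appointment",
        "questions": [
          { "question": "What day would you like to come in?", "name": "appt_day" },
          { "question": "And what time works for you?", "name": "appt_time" }
        ],
        "on_complete": {
          "redirect": "https://example.com/save-appointment"
        }
      }
    }
  ]
}
```

The collect Action walks the caller through the questions in order, then posts the collected answers to the on_complete target, where your own code can save the appointment.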
How do Tasks run? They are triggered by Samples, which are words or phrases. This is much easier than using if/else statements to detect specific words: ML on the backend recognizes that there are different ways of saying the same thing.
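For instance, Samples for an appointment task might look like the fragment below. The phrasings are invented for illustration, and the field names follow my recollection of the assistant export schema, so treat them as an assumption rather than the exact format.

```json
{
  "uniqueName": "make_appointment",
  "samples": [
    { "language": "en-US", "taggedText": "I need to see the doctor" },
    { "language": "en-US", "taggedText": "book me an appointment" },
    { "language": "en-US", "taggedText": "can I come in tomorrow" },
    { "language": "en-US", "taggedText": "schedule a visit" }
  ]
}
```

A handful of varied samples is enough for the model to generalize to paraphrases you never listed, which is exactly what the if/else approach cannot do.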
DEMO of Autopilot. Here we have Deep Table, the world’s best restaurant. We’re going to automate this flow: callers either want to get today’s specials, make a reservation, or talk to the host. So we’re going to build a virtual assistant that does those three things.
Channels->Voice URL->configure number
A couple of common tasks
Go into console