XI – Artificial Intelligence
S. Murugan @ Prakasam
PGT Teacher, Computer Science Dept
AI Applications and Methodologies
Key Fields of Application in AI
➢Chatbots (Natural Language Processing, Speech).
➢Alexa, Siri and others.
➢Computer Vision
➢Weather Predictions.
➢Price forecast for Commodities.
➢Self-driving cars.
Activity 1:
➢ Let us do a small exercise on how AI is perceived by our society.
➢ Do you have any idea what people think about AI? Do an online image search for
the term “AI”
➢ The results of your search would help you get an idea about how AI is perceived.
[Note - If you are using Google search, choose "Images" instead of “All” search]
➢ Based on the outcome of your web search, create a short story on “People’s
Perception of AI”
Activity 2:
➢ In this activity, let us explore what is being written on the web about AI.
➢ Do you think the articles on the net are anywhere close to reality or are they just
complete science fantasies?
Now based on your experience during the online search about ‘AI’, prepare your
summary report that answers the following questions:
1. What was the name of the article, name of the author and web link where the article
was published?
2. What was the main idea of the article? Mention in not more than 5 sentences
Chatbots
➢ A Chatbot is an application of AI which simulates conversation with
humans either through voice commands or through text chats or both.
➢ We can define Chatbot as an AI application that can imitate a real
conversation with a user in their natural language.
➢ They enable communication via text or audio on websites, messaging
applications, mobile apps, or telephones.
➢ Some interesting applications of Chatbot are Customer services, E-
commerce, Sales and Marketing, Schools and Universities etc.
➢ Natural Language Processing is what allows chatbots to understand your
messages and respond appropriately.
➢ When you send a message with “Hello”, it is the NLP that lets the
chatbot know that you’ve posted a standard greeting, which in turn allows the
chatbot to leverage its AI capabilities to come up with a fitting response. In this
case, the chatbot will likely respond with a return greeting.
➢ Without Natural Language Processing, a chatbot can’t meaningfully
differentiate between the responses “Hello” and “Goodbye”. To a chatbot without
NLP, “Hello” and “Goodbye” will both be nothing more than text-based user
inputs.
➢ Natural Language Processing (NLP) helps provide context and
meaning to text-based user inputs so that AI can come up with the best
response.
➢ Please visit the link to get a first-hand experience of banking related conversations
with EVA : https://v1.hdfcbank.com/htdocs/common/eva/index.html
➢ You can also visit https://watson-assistant-demo.ng.bluemix.net/ for a completely
new experience on the IBM Web based text Chatbot
➢ Apollo Hospitals URL: https://covid.apollo247.com/
Types of Chatbots
Chatbots can broadly be divided into Two Types:
1. Rule based Chatbot
2. Machine Learning (or AI) based Chatbot
1. Rule – Based Chatbot
This is the simpler form of Chatbot which follows a set of pre-defined rules
in responding to users' questions. For example, a Chatbot installed at a school reception
area can retrieve data from the school's archive to answer queries on school fee structure,
courses offered, pass percentage, etc.
Rule based Chatbot is used for simple conversations, and cannot be used for
complex conversations.
For example, let us create the below rule and train the Chatbot:
bot.train([
    'How are you?',
    'I am good.',
    'That is good to hear.',
    'Thank you',
    'You are welcome.',
])
After training, if you ask the Chatbot, "How are you?", you would get the
response, "I am good." But if you ask "What's going on?" or "What's up?", the
rule-based Chatbot won't be able to answer, since it has been trained to take only
a certain set of questions.
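The bot.train(...) call above resembles list-based training in chatbot libraries such as ChatterBot; stripped to its essence, a rule-based bot is just a lookup table. A minimal, purely illustrative sketch (all names and rules are made up):

```python
# Minimal sketch of a rule-based chatbot: each known question maps to a
# fixed, pre-defined response. All names and rules here are illustrative.
RULES = {
    "how are you?": "I am good.",
    "thank you": "You are welcome.",
}

def reply(message: str) -> str:
    """Return the canned response for a known question, else a fallback."""
    return RULES.get(message.strip().lower(), "Sorry, I don't understand.")

print(reply("How are you?"))  # -> I am good.
print(reply("What's up?"))    # -> Sorry, I don't understand.
```

Any input outside the pre-defined rules falls through to the fallback line, which is exactly the limitation described above.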
Quick Question?
Question: Are rule-based bots able to answer questions based on how good the rule is and
how extensive the database is? Answer: We can have a rule-based bot that understands the
value that we supply. However, the limitation is that it won’t understand the intent and
context of the user’s conversation with it.
For example: If you are booking a flight to Paris, you might say “Book a flight to Paris”
and someone else might say “I need a flight to Paris” while someone from another part
of the world may use his/her native language. AI-based or NLP-based bot identifies the
language, context and intent and then it reacts accordingly. A rule-based bot only
understands a pre-defined set of options.
Machine Learning (or AI) Based Chatbot
Such Chatbots are advanced forms of chatter-bots capable of holding
complex conversations in real-time. They process the questions (using neural network
layers) before responding to them. AI based Chatbots also learn from previous
experience and reinforced learning and it keeps on evolving.
AI Chatbots are developed and programmed to meet user requests by furnishing
suitable and relevant responses. The challenge, nevertheless, lies in aligning the requests to
the most intelligent and closest response that would satisfy the user.
Had the rule based Chatbot discussed earlier been an AI Chatbot and you had
posed ‘What’s going on? or What’s up?’ instead of ‘How are you?’, you would have
got a suitable response from the AI Chatbot.
The earlier mentioned IBM Watson Chatbot - https://watson-assistant-
demo.ng.bluemix.net/ is in fact an AI Chatbot.
How AI Chatbots Understand User Requests
source: https://dzone.com/articles/how-to-make-a-chatbot-with-artificial-intelligence
Natural Language Processing (NLP)
➢ Do you understand the technology behind Chatbots? It is called NLP,
short for Natural Language Processing. Every time you throw a question to Alexa,
Siri or Google assistant, they use NLP to answer your question. Now we are faced
with the question, what is Natural Language processing (NLP)?
➢ Natural language means the language of humans. It refers to the
different forms in which humans communicate with each other – verbal,
written or non-verbal or expressions (sentiments such as sad, happy, etc.).
➢ The Technology which enables the machines (software) to understand
and process the natural language (of humans), is called natural language
processing (NLP).
➢ A machine can read, write, speak and translate like a human through
natural language.
➢ Therefore, we can safely conclude that NLP essentially comprises natural
language understanding (human to machine) and natural language generation
(machine to human).
➢ NLP is a sub-area of Artificial Intelligence that deals with the capability of
software to process and analyze human language, both verbal and written.
***Example:
Email filters, Digital phone calls
Smart assistants, Spell check
Search results, Language translation
Social Media Monitoring.
Natural Language Processing & Neural Networks:
(Neural Networks (RankBrain))
We surf the internet for things on Google without realizing how
efficiently Google always responds to us with accurate answers. Not only does it
come up with results to our search in a matter of seconds, it also suggests and
auto-corrects our typed sentences.
1. Text Recognition (in an image or a video)
➢ You might have seen / heard about cameras that read a vehicle's number plate.
➢ [Figure: a camera image of a number plate, translated by the NLP application into the text KL 85 C 4780]
➢ The camera / machine captures the image of the number plate, the image is transferred to
the neural network layer of the NLP application, and NLP extracts the vehicle's number
from the image. However, correct extraction of data also depends on the quality of the
image.
Quick Question?
➢ Question: Where and how is NLP used? Answer: NLP is natural language
processing and can be used in scenarios where static or predefined answers, options
and questions may not work. In fact, if you want to understand the intent and
context of the user, then it is advisable to use NLP.
➢ Let’s take the example of pizza ordering bot. When it comes to pre-listed pizza
topping options, you can consider using the rule-based bot, whereas in case you
want to understand the intent of the user where one person is saying “I am hungry”
and another person is saying “I am starving”, it would make more sense to use
NLP, in which case the bot can understand the emotion of the user and what he/she
is trying to convey.
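The pizza-bot distinction above can be sketched in a few lines: a rule-based bot matches exact options, while even a crude intent matcher maps many phrasings to one intent. This is a toy illustration, not a real NLP pipeline:

```python
# Toy intent matcher: many different phrasings map to the same intent via
# keyword sets. Real NLP bots use trained models; these sets are made up.
INTENTS = {
    "hungry": {"hungry", "starving", "famished"},
    "order": {"order", "book", "buy"},
}

def detect_intent(message: str) -> str:
    """Return the first intent whose keywords appear in the message."""
    words = set(message.lower().replace(",", " ").split())
    for intent, keywords in INTENTS.items():
        if words & keywords:  # any keyword present in the message
            return intent
    return "unknown"

print(detect_intent("I am hungry"))    # -> hungry
print(detect_intent("I am starving"))  # -> hungry
```

Both "I am hungry" and "I am starving" land on the same intent, which a pre-defined options list could not do.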
2. Summarization by NLP
NLP can not only read and understand paragraphs or a full article, but
can also summarize the article into a shorter narrative without changing its meaning.
It can create an abstract of an entire article.
There are two ways in which summarization takes place – one in which key
phrases are extracted from the document and combined to form a summary
(extraction-based summarization) and the other in which the source document is
shortened (abstraction-based summarization).
https://techinsight.com.vn/language/en/text-summarization-in-machine-learning/
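The extraction-based approach described above can be sketched as: score each sentence by how frequent its words are in the whole text, then keep the top sentence(s). A deliberate simplification of real extractive summarizers, for illustration only:

```python
# Sketch of extraction-based summarization: each sentence is scored by
# the overall frequency of its words, and the top-scoring sentence(s)
# are extracted verbatim to form the summary.
import re
from collections import Counter

def summarize(text: str, n_sentences: int = 1) -> str:
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    scored = sorted(
        sentences,
        key=lambda s: sum(freq[w] for w in re.findall(r"[a-z']+", s.lower())),
        reverse=True,
    )
    return ". ".join(scored[:n_sentences]) + "."

article = ("AI is changing weather forecasting. "
           "AI models adjust to new data. Cats are nice.")
print(summarize(article))  # -> AI models adjust to new data.
```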
3.Information Extraction
➢ Information extraction is the technology of finding specific information in a
document or searching the document itself. It automatically extracts structured
information such as entities, relationships between entities, and attributes
describing entities from unstructured sources.
Example
➢ For example, a school’s Principal writes an email to all the teachers in his school -
➢ “I have decided to organize a teachers’ meet tomorrow. You all are requested to
please assemble in my office at 2.30 pm. I will share the agenda just before the
meeting.”
➢ NLP can extract the meaningful information for teachers:
➢ What: Meeting called by Principal
➢ When: Tomorrow at 2.30 pm
➢ Where: Principal’s office
➢ Agenda: Will be shared before the meeting
➢ Another very common example is Search Engines like Google retrieving results
using Information Extraction.
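The Principal's-email example above can be sketched with simple regular expressions standing in for a real information-extraction model (the patterns are illustrative, not a general solution):

```python
# Sketch of information extraction: regular expressions pull the meeting
# day and time out of the Principal's email. Real IE systems use trained
# models; these hand-written patterns are illustrative only.
import re

email = ("I have decided to organize a teachers' meet tomorrow. "
         "You all are requested to please assemble in my office at 2.30 pm.")

when_day = re.search(r"\b(today|tomorrow)\b", email, re.IGNORECASE)
when_time = re.search(r"\b\d{1,2}\.\d{2}\s*(?:am|pm)\b", email, re.IGNORECASE)

print("When:", when_day.group(0), "at", when_time.group(0))  # -> When: tomorrow at 2.30 pm
```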
4. Sentiment Analyzer:
➢ It is used to quickly detect emotions in text data, emojis, etc.
➢ Sentiment analysis is contextual mining of text which identifies and extracts
subjective information in source material, helping a business to understand the
social sentiment of its brand, product or service while monitoring online
conversations.
➢ Sentiment is the attitudes, opinions, and emotions of a person towards a
person, place, thing, or entire body of text in a document.
➢ Sentiment analysis studies the subjective information in an expression,
that is, the opinions, appraisals, emotions, or attitudes towards a topic, person
or entity. Expressions can be classified as positive, negative, or neutral.
➢ Sentiment analysis works by breaking a message down into topic chunks and
then assigning a sentiment score to each topic.
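The score-per-topic idea above can be sketched with a tiny word lexicon (real analyzers use far larger lexicons or trained models; this word list is made up):

```python
# Toy lexicon-based sentiment scorer: sum per-word scores and classify
# the message by the sign of the total. The lexicon is illustrative.
LEXICON = {"good": 1, "great": 2, "happy": 1, "bad": -1, "terrible": -2, "sad": -1}

def sentiment(text: str) -> str:
    score = sum(LEXICON.get(w.strip(".,!?").lower(), 0) for w in text.split())
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("The service was great and I am happy!"))  # -> positive
print(sentiment("Terrible food, sad experience."))         # -> negative
```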
5. Speech Processing
➢ The ability of a computer to hear human speech and analyze and understand
the content is called speech processing. When we talk to our devices like Alexa or
Siri, they recognize what we are saying to them. For example:
➢ You: Alexa, what is the date today?
➢ Alexa: It is 18 March 2020.
➢ What happens when we speak to our device? The microphones of the device hear
our audio and plot the graphs of our sound frequencies. Just as a light wave has a
standard frequency for each colour, so does sound. Every sound (phoneme) has a
unique frequency graph. This is how NLP recognizes each sound and
composes an individual’s words and sentences.
Below table shows the step-wise process of how speech recognition works:
***Speech Recognition System:
Nowadays we have pocket assistants that can do a lot of tasks at just one
command. Alexa and Google Assistant are some common examples of the voice assistants
which are a major part of our digital devices.
***6. Translate (Neural Machine Translation (NMT))
➢ Google Translate is a language processor used to convert one
language into another, and it uses NMT to increase the fluency and accuracy
of its translations.
➢ Translate can translate multiple forms of text and media, which includes
text, speech, and text within still or moving images.
➢ AI translators are digital tools that use advanced artificial intelligence to not
only translate the words that are written or spoken, but also to translate the meaning
(and sometimes sentiment) of the message.
➢ This results in greater accuracy and fewer misunderstandings than when
using simple machine translation.
Activity
➢ Have you noticed that virtual assistants like Alexa and Google Assistant work very
well when they are connected to the internet, and cannot function when they are
offline?
➢ Can you do a web search to find out the reason for the same? Please summarize
your findings/understanding below:
Natural Language Processing Tools and Libraries for advance learners
1. NLTK ( www.nltk.org ) Open source NLP Toolkit
2. CoreNLP (https://stanfordnlp.github.io/CoreNLP/)
3. Open NLP ( https://opennlp.apache.org/ ) Open Source NLP toolkit for Data
Analysis and sentiment Analysis
4. SpaCy ( https://spacy.io/ ) for Data Extraction, Data Analysis, Sentiment Analysis,
Text Summarization
5. Allen NLP ( https://allennlp.org/ ) for text analysis and Sentiment Analysis
Computer Vision (CV)
➢ CV is a field of study that enables the computers to “see”.
➢ Computer vision related projects translate digital visual data into
descriptions. This data is then turned into computer-readable language to aid the
decision-making process. The main objective of this domain of AI is to teach
machines to collect information from pixels.
➢ Computer Vision, CV for short, is a domain of AI that depicts the capability of a
machine to acquire and analyze visual information and afterwards make
decisions based on it. The entire process involves image acquisition, screening, analysis,
identification and information extraction.
➢ This extensive processing helps computers to understand visual content and
extract information from digital images. In computer vision, input to machines
can be photographs, videos and pictures from thermal or infrared sensors, indicators and
other sources.
➢ Image Processing Techniques and Pattern Recognition are used in CV; they are
subsets of Computer Vision.
➢ To help us navigate to places, apps like UBER and Google Maps come in handy.
Thus, one no longer needs to stop repeatedly to ask for directions.
CV has been most effective in the following:
➢ Object Detection
➢ Optical Character Recognition
➢ Fingerprint Recognition
Source:
https://towardsdatascience.com/everything-you-ever-wanted-to-know-about-computer-vision
heres-a-look-why-it-s-so-awesome-e8a58dfb641e & https://nordicapis.com/8-biometrics-apis-
at-your-fingertips/
Quick Question?
Q1: List three places/locations (like the library, playground etc.) in your
school where surveillance cameras have been installed.
Q2: In your opinion, who does the actual surveillance – the camera or the
person sitting behind the device? Justify your answer.
Q3: What features would you like to add to the surveillance camera to make
it a truly smart surveillance camera?
Before we dive deep into the interesting world of computer vision, let us perform
an online activity:
Activity: Open the URL in your web browser –
https://studio.code.org/s/oceans/stage/1/puzzle/1
Activity details
Level 1:
➢ The first level talks about a branch of AI called Machine Learning (ML). This
level explains examples of ML in our daily life like email filters, voice
recognition, auto-complete text and computer vision.
Level 2 – 4:
➢ Students can proceed through the first four levels on their own or with a partner. In
order to program AI, use the buttons to label an image as either "Fish" or "Not
Fish". Each image and its label become part of the data which is used to train the
Computer Vision Model. Once the AI model gets properly trained, it will classify
an image as 'Fish' or 'Not Fish'. Fish will be allowed to live in the ocean but
'Not Fish' will be grabbed and stored in the box, to be taken out of the ocean/river
later.
➢ At the end of Level-4, we should explain to students how AI identifies objects using
the training data; in this case, the labelled fish images are our training data. This ability of AI to see and
identify an object is called ‘Computer Vision’. Similar to the case of humans where
they can see an object and recognize it, an AI application (using camera) can also
see an object and recognize it.
➢ Advance learners may go ahead with the level 5 and beyond.
Quick Question?
Based on the activity you just did, answer the following questions:
Q1: Can we say 'computer vision' acts as the eye of an AI machine / robot?
Q2: In the above activity, why did we have to supply many examples of fish before
the AI model actually started recognizing the fish?
Q3: Can a regular camera see things?
Q4: Let me check your imagination – You have been tasked to design and develop a
prototype of a robot to clean your nearby river/ ponds. Can you use your imagination
and write 5 lines about features of this ‘future’ robot?
Computer Vision: How does a computer see an image
Consider the image below:
Looking at the picture, the human eye can easily tell that a train/engine has crashed
through the station wall (https://en.wikipedia.org/wiki/Montparnasse_derailment). Do
you think the computer also views the image the same way as humans? No, the
computer sees images as a 2-dimensional array of numbers (or a three-dimensional
array in the case of a colour image).
➢ The above image is a grayscale image, which means each value in the 2D matrix
represents the brightness of the pixels. The numbers in the matrix range from
0 to 255, wherein 0 represents black and 255 represents white, and the values
between them are shades of grey.
➢ For example, the above image has been represented by a 9x9 matrix. It shows 9
pixels horizontally and 9 pixels vertically, making it a total of 81 pixels (this is a
very low pixel count for an image captured by a modern camera, just treat this as an
example).
➢ In the grayscale image, each pixel represents the brightness or darkness of the pixel,
which means the grayscale image is composed of only one channel*. But a colour
image is a mix of the three primary colours (Red, Green and Blue), so a
colour image will have three channels*.
➢ Since colour images have three channels*, computers see the colour image as a
matrix of a 3-dimensional array. If we have to represent the above locomotive
image in colour, the 3D matrix will be 9x9x3. Each pixel in this colour image has
three numbers (ranging from 0 to 255) associated with it. These numbers represent
the intensity of red, green and blue colour in that particular pixel.
➢ * Channel refers to the number of colors in the digital image. For example, a
colored image has 3 channels – the red channel, the green channel and the blue
channel. An image is usually represented as height x width x channels, where channels
is 3 for a coloured image and 1 for a grayscale image.
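The matrix-and-channel idea above can be sketched directly (pure Python; all pixel values are made up):

```python
# A tiny 3x3 grayscale "image": each value is one pixel's brightness,
# from 0 (black) to 255 (white). One channel only.
gray = [
    [  0, 128, 255],
    [ 64, 200,  32],
    [255,   0, 128],
]

# A colour image adds a third dimension: each pixel is an (R, G, B)
# triple, so this 3x3 colour image is effectively a 3x3x3 structure.
colour = [
    [(255, 0, 0), (0, 255, 0), (0, 0, 255)],
    [(10, 20, 30), (40, 50, 60), (70, 80, 90)],
    [(0, 0, 0), (128, 128, 128), (255, 255, 255)],
]

height, width = len(gray), len(gray[0])
channels = len(colour[0][0])
print(f"grayscale: {height}x{width}x1, colour: {height}x{width}x{channels}")
```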
Computer Vision: Primary Tasks
There are primarily four tasks that Computer vision accomplishes:
1. Semantic Segmentation (Image Classification)
2. Classification + Localization
3. Object Detection
4. Instance Segmentation
1. Semantic Segmentation
➢ Semantic Segmentation is also called Image Classification. Semantic
segmentation is a process in Computer Vision where an image is classified
depending on its visual content. Basically, a set of classes (objects to identify in
images) are defined and a model is trained to recognize them with the help of
labelled example photos. In simple terms it takes an image as an input and outputs a
class i.e. a cat, dog etc. or a probability of classes from which one has the highest
chance of being correct.
➢ For humans, this ability comes naturally and effortlessly, but for machines it's a
fairly complicated process. For example, the cat image shown below has a size of
248x400x3 pixels (297,600 numbers)
(Source : http://cs231n.github.io/assets/classify.png)
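The "probability of classes" output described above can be sketched with a softmax over raw class scores (the scores here are invented for illustration):

```python
# Sketch of a classifier's output stage: raw class scores are turned
# into probabilities with softmax, and the highest-probability class
# wins. The scores are made up; a real model would compute them.
import math

def softmax(scores):
    """Turn raw class scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

classes = ["cat", "dog", "hat", "mug"]
probs = softmax([3.2, 1.3, 0.2, 0.8])  # made-up scores for one image
best = classes[probs.index(max(probs))]
print(best)  # -> cat
```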
2. Classification and Localization
Once the object is classified and labelled, the localization task is invoked,
which puts a bounding box around the object in the picture. The term
‘localization’ refers to where the object is in the image. Say we have a dog in an
image, the algorithm predicts the class and creates a bounding box around the object
in the image.
Source: https://medium.com/analytics-vidhya/image-classification-vs-object-detection-vs-
image-segmentation-f36db85fe81
3. Object Detection
When human beings see a video or an image, they immediately identify the objects
present in them. This intelligence can be duplicated using a computer. If we have multiple
objects in the image, the algorithm will identify all of them and localize (put a bounding
box around) each one of them. You will therefore, have multiple bounding boxes and labels
around the objects.
Source: https://pjreddie.com/darknet/yolov1/
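The multiple-boxes-and-labels output described above can be sketched as a list of (label, bounding box) pairs (a data-shape illustration only; labels and coordinates are made up):

```python
# Object detection output as data: each detected object gets a class
# label and a bounding box (x, y, width, height) in pixel coordinates.
detections = [
    ("dog", (48, 240, 195, 180)),
    ("bicycle", (120, 130, 400, 320)),
    ("truck", (470, 80, 220, 90)),
]

def box_area(box):
    """Area of an (x, y, width, height) bounding box, in pixels."""
    x, y, w, h = box
    return w * h

for label, box in detections:
    print(f"{label}: box={box}, area={box_area(box)} px")
```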
4. Instance Segmentation
Instance segmentation is the technique of CV which helps in distinctly
identifying and outlining each object of interest appearing in an image.
This process helps to create a pixel-wise mask for each object in the
image and provides us a far more granular understanding of the object(s) in the
image. As you can see in the image below, objects belonging to the same class are
shown in multiple colours.
Source: https://towardsdatascience.com/detection-and-segmentation-through-
convnets-47aa42de27ea
Activity : Let’s have some fun now!
➢ Given a grayscale image, one simple way to find edges is to look at two neighboring pixels
and take the difference between their values. If it’s big, this means the colours are very
different, so it’s an edge.
➢ The grids below are filled with numbers that represent a grayscale image. See if you can
detect edges the way a computer would do it.
➢ Try it yourself!
Grid 1: If the values of two neighboring squares on the grid differ by more than 50, draw a
thick line between them.
Grid 2: If the values of two neighboring squares on the grid differ by more than 40, draw a
thick line between them.
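The activity's rule can be written out as code: compare each pair of horizontally neighbouring pixels and mark an edge wherever the difference exceeds the threshold (grid values are illustrative):

```python
# Edge finding as in the activity: if two horizontally neighbouring
# pixel values differ by more than a threshold, there is an edge
# between them. The grid values below are made up.
grid = [
    [200, 195,  60,  55],
    [198, 190,  58,  50],
]
THRESHOLD = 50

edges = []
for r, row in enumerate(grid):
    for c in range(len(row) - 1):
        if abs(row[c] - row[c + 1]) > THRESHOLD:
            edges.append((r, c))  # edge between columns c and c+1 in row r

print(edges)  # -> [(0, 1), (1, 1)]
```

In both rows, the big jump between the bright (~200) and dark (~60) pixels is exactly where the edge is detected.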
Weather Predictions
➢ Weather forecasting deals with gathering the satellites data, identifying
patterns in the observations made, and then computing the results to get accurate
weather predictions. This is done in real-time to prevent disasters.
➢ Artificial Intelligence uses computer-generated mathematical programs
and computer vision technology to identify patterns so that relevant weather
predictions can be made. Scientists are now using AI for weather forecasting to
obtain refined and accurate results, fast!
➢ In the current model of weather forecasting, scientists gather satellite
data i.e. temperature, wind, humidity etc. and compare and analyze this data
against a mathematical model that is based on past weather patterns and
geography of the region.
➢ This is done in real time to prevent disasters. This model being primarily
human dependent (and the mathematical model cannot be adjusted real time) faces
many challenges in forecasting.
➢ This has resulted in scientists preferring AI for weather forecasting. One of
the key advantages of the AI based model is that it adjusts itself with the
dynamics of atmospheric changes.
***CNN:(Convolution Neural Network)
➢ A Convolutional Neural Network (CNN) is a type of artificial neural
network used in image recognition, processing, classification and
segmentation, and is specifically designed to process pixel data.
➢ CNNs have their "neurons" arranged more like those of the visual cortex,
the area responsible for processing visual stimuli in humans and other animals.
➢ Convolutional Neural Network (CNN) allows researchers to generate
accurate rainfall predictions six hours ahead of when the precipitation occurs.
Application of CNN:
✓ Understanding Climate, Visual Search, Analyzing Documents.
✓ Advertising, Image Tagging, Character recognition.
✓ Predictive Analytics - Health Risk Assessment
✓ Historic and Environmental Collections.
The AI based commodity forecasting system can produce transformative results,
such as:
1. The accuracy level would be much higher than with the classical forecasting model
2. AI can work on a broad range of data, due to which it can reveal new insights
3. The AI model is not rigid like the classical model, therefore its forecasting is always based
on the most recent input.
Self Driving Car: (Computer Vision & Neural Networks )
➢ A self-driving car, also known as an autonomous vehicle (AV), driverless
car, robot car, or robotic car, is a vehicle that is capable of sensing its
environment and moving safely with little or no human input.
➢ Self-driving cars combine a variety of sensors to perceive their
surroundings, such as radar, lidar, sonar, GPS, odometry and inertial
measurement units.
Question 1: What should the self-driving car do in the above scenario?
____________________________________________________________________
Question 2: In the age of self-driving cars, do we need zebra crossings? After all, self-
driving cars can potentially make it safe to cross a road anywhere.
_____________________________________________________________________
Self-driving cars or autonomous cars, work on a combination of technologies.
Here is a brief introduction to them:
1. Computer Vision: Computer vision allows the car to see / sense the surroundings. Basically,
it uses:
A 'Camera' captures pictures of the surroundings, which then go to a deep learning model
that processes the images. This helps the car to know when the light is red or where there is a
zebra crossing, etc.
'Radar' – It is a detection system to find out how far away or close the other vehicles on the
road are.
'Lidar' – It is a surveying method with which the distance to a target is measured. The
lidar (which emits laser rays) is usually placed in a spinning unit on top of the car so that it
can spin around very fast, looking at the environment around it. Here you can see a Lidar
placed on top of the Google car.
2. Deep Learning:
This is the brain of the car, which takes driving decisions based on the information
gathered through various sources like computer vision etc.
Robotics:
The self-driven cars have a brain and vision, but the brain still needs to
connect with other parts of the car to control and navigate effectively. Robotics helps
transmit the driving decisions (made by deep learning) to the steering, brakes, throttle etc.
Navigation:
Using GPS, stored maps etc. the car navigates busy roads and hurdles to
reach its destination.
***Price Forecast for Commodities:
➢ Forecasting Energy Commodity Prices Using Neural Networks.
➢ The use of neural networks as an advanced signal processing tool may be
successfully used to model and forecast energy commodity prices, such as crude oil, coal,
natural gas, copper and electricity prices.
➢ A commodity's price is determined primarily by the forces of supply and demand
for the commodity in the market. If the weather in a certain region is going to affect the
supply of a commodity, the price of that commodity will be affected directly.
Examples: corn, soybeans, and wheat.
➢ Copper is one of the oldest and most commonly used commodities. ChAI came
together as a team with the goal of leveraging the latest AI techniques in order to provide the
most accurate forecasts over a range of commodities.
❑ Prediction Accuracy.
❑ Confidence Bound Improvement.
❑ Explainability of our models.
Google Duplex (Speech Recognition System)
What Is Google Duplex?
Google Duplex technology is created to sound natural, to make the conversation
experience more comfortable. It is important that clients and businesses have a good
experience with this service, and transparency is a key part of that: Duplex is designed to be
clear about the purpose of the call so organizations understand the context.
What will I use Google Duplex for?
Google Duplex won’t say whatever you instruct it to — it only works for specific
kinds of over-the-phone requests. The two illustrations Google introduced at I/O involved
setting up a hair salon appointment and a reservation at a restaurant. Another illustration
is getting some information about business hours. In other words, it only works for those
scenarios because it’s been trained for them, and cannot speak freely in a general context.
How does Google Duplex work?
Duplex relies upon a machine learning model trained on real-world data - in this
case, phone conversations. It combines the company's most recent breakthroughs
in speech recognition and text-to-speech synthesis with a range of contextual
details, including the purpose and history of the conversation.
To make Duplex sound natural, Google has even attempted to reproduce the flaws
of everyday human speech. Duplex incorporates a short delay before issuing certain
responses and says "uh" and "well" as it artificially reviews data. The outcome is
phenomenally accurate with amazingly short latency.
https://www.youtube.com/watch?v=D5VN56jQMWM
Characteristics and Types of AI
➢ Artificial Intelligence is autonomous and can make independent decisions — it
does not require human inputs, interference or intervention and works silently in the
background without the user’s knowledge.
➢ These systems do not depend on human programming; instead, they learn on their
own by experiencing data.
➢ Has the capacity to predict and adapt – Its ability to understand data patterns is
being used for future predictions and decision-making.
➢ It is continuously learning – It learns from data patterns.
➢ AI is reactive – It perceives a problem and acts on perception.
➢ AI is futuristic – Its cutting-edge technology is expected to be used in many more
fields in future.
There are many applications and tools, backed by AI, which have a direct impact
on our daily life. So, it is important for us to understand what kinds of systems can be
developed, in a broad sense, using AI.
***Characteristics and Types of AI
The ideal characteristic of artificial intelligence is its ability to
rationalize and take actions that have the best chance of achieving a
specific goal.
Eliminate Dull Tasks
➢ We all have days where we have to do busy work: dull, boring tasks that may be
necessary but are neither important nor valuable. Fortunately, machine learning
and AI are beginning to address some of this through human-computer
interaction technologies.
➢ More robust use of this technology can be found in companies like Dialogflow,
a subsidiary of Google, formerly known as Api.ai. Such companies build
conversational interfaces that use machine learning to understand and
meet customer needs.
➢ Virtual assistants like Alexa, Siri, Cortana, and Google Assistant perform basic
tasks, conversing with the user in natural language.
➢ Want to send 5,000 calendar invites, or book a flight from San Francisco to
Paris? Need a reliable method to answer basic customer questions online,
instantaneously? Solutions like those offered by Dialogflow cut through the
dull work that would otherwise require hours of human time.
➢ SunExpress, a Turkey-based airline, has become the first such airline in
the world to allow users to book tickets using the Alexa voice assistant from
Amazon, according to Simple Flying.
***Focus Diffuse Problems
➢ The expanse of available data has gone beyond what human beings are capable
of synthesizing, making it a perfect job for machine learning and AI.
➢ For example, Elucify helps sales teams automatically update their contacts.
➢ By simply clicking a button, information is pulled from a multitude of public
and private data sources.
➢ Elucify takes all of that diffuse data, compares it, and makes changes
where necessary.
***Distributed Data
➢ Modern cybersecurity drives a need for comparing terabytes of inside data with
a comparable amount of outside data.
➢ This has been a very difficult problem to solve, but machine learning and AI are
perfect tools for the job.
➢ Vectra Networks - a security company that uses AI to fight cyber-attacks.
➢ By comparing outside network data to the log inside the enterprise, Vectra
Networks can automate the process of detecting attacks.
➢ As the problems of cyber security change and increase, organizations like these
will be vital in addressing the problems of distributed data.
***Solve Dynamic Problems
➢ Now, some forward-looking companies are adopting AI to solve the
dynamic problems of human behavior.
➢ A startup called GitPrime utilizes code data to determine the most
productive work patterns for software engineers.
➢ These complex patterns help organizations move faster and be more
responsive to changing needs.
➢ Finding that human influence in millions of lines of code would have
been impossible in the past, but machine learning and AI can help us
see them.
***Prevent Dangerous Issues
➢ Brand-new industrial systems combine AI-powered robots, 3D
printing, and human oversight. Not only do companies realize billions in
savings with these systems, but the systems will also save lives.
➢ This process not only reduces cost and improves efficiency for the
company, but creates much safer environments for the human
workers.
➢ The dangerous elements of manufacturing jobs are replaced by
machines, while the brains behind them remain safe and human.
➢ Machine learning and AI provide an excellent approach to solve many
of these dull, diffuse, distributed, dynamic, and dangerous problems.
Recommendation Systems
➢ A recommendation system recommends or suggests products, services, or
information to users by analysing data on a number of factors such as the
history, behavior, preferences, and interests of the user. It is a
data-driven AI which needs data for training.
➢ A recommender system, or a recommendation system (sometimes
replacing 'system' with a synonym such as platform or engine), is a subclass of
information filtering system that seeks to predict the "rating" or "preference" a
user would give to an item.
➢ They are primarily used in commercial applications.
➢ For instance, when you watch a video on YouTube, it suggests many
videos that match or suit the videos that you generally search for, prefer, or
have been watching in the past.
➢ So, in short it gathers data from the history of your activities and behavior.
Let us now take another example. Flipkart recommends the types of products that
you normally prefer or buy.
➢ All these are classic examples of recommendation, but you must know that
Machine Learning (AI) is working behind all this to give you a good user
experience.
➢ Recommender engines help users get personalized recommendations and
take correct decisions in their online transactions; they increase sales,
redefine the user's web browsing experience, retain customers, and
enhance their shopping experience.
➢ There are majorly six types of recommender systems which work
primarily in the Media and Entertainment industry:
➢ Collaborative recommender system
➢ Content-based recommender system
➢ Demographic based recommender system
➢ Utility based recommender system
➢ Knowledge based recommender system
➢ Hybrid recommender system.
***Collaborative Recommender System:
➢ Collaborative recommender systems aggregate ratings or recommendations.
➢ If people had similar interests in the past, they will have similar interests in the
future. Collaborative filtering uses algorithms to filter data from user reviews to
make personalized recommendations for users with similar preferences.
➢ Collaborative filtering is based on the assumption that people who agreed in the
past will agree in the future, and that they will like similar kinds of objects to
those they liked in the past.
Content Based Recommender System:
➢ A content-based recommender learns a profile of the new user’s interests based
on the features present in the objects the user has rated.
➢ It is basically a keyword-specific recommender system, where keywords are used
to describe the items.
➢ Thus, in a content-based recommender system, the algorithm recommends items
similar to those the user has liked in the past or is examining currently.
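A keyword-based content recommender can be sketched as follows; the catalogue, keywords, and the Jaccard-overlap scoring are purely illustrative assumptions. The user profile is simply the set of keywords from items the user liked, and unseen items are ranked by how much they overlap with it:

```python
# Hypothetical item catalogue, each item described by keywords
items = {
    "Article 1": {"cricket", "sports", "india"},
    "Article 2": {"ai", "machine-learning", "python"},
    "Article 3": {"sports", "football"},
    "Article 4": {"ai", "robotics"},
}

def build_profile(liked):
    """User profile = all keywords of items the user liked in the past."""
    profile = set()
    for name in liked:
        profile |= items[name]
    return profile

def recommend(liked, top_n=2):
    """Rank unseen items by keyword overlap (Jaccard) with the profile."""
    profile = build_profile(liked)
    scores = {}
    for name, keywords in items.items():
        if name in liked:
            continue
        scores[name] = len(keywords & profile) / len(keywords | profile)
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# A reader of "Article 2" (AI keywords) is shown "Article 4" first,
# because it shares the "ai" keyword.
print(recommend({"Article 2"}))
```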
***Demographic Based Recommender System:
➢ In a demographic-based recommender system, the algorithm first needs
proper market research in the specified region, accompanied by a
short survey to gather data for categorization.
➢ Demographic techniques form “people-to-people” correlations like
collaborative ones, but use different data.
➢ The benefit of a demographic approach is that it does not require a
history of user ratings like that in collaborative and content based
recommender systems.
***Utility Based Recommender System:
➢ Utility based recommender system makes suggestions based on
computation of the utility of each object for the user. Of course, the
central problem for this type of system is how to create a utility for
individual users.
➢ In utility based system, every industry will have a different
technique for arriving at a user specific utility function and applying
it to the objects under consideration.
➢ The main advantage of using a utility based recommender system is
that it can factor non-product attributes, such as vendor reliability
and product availability, into the utility computation.
➢ This makes it possible to check real time inventory of the object and
display it to the user.
***Knowledge Based Recommender System:
➢ This type of recommender system attempts to suggest objects based
on inferences about a user’s needs and preferences.
➢ Knowledge-based recommenders work on functional knowledge: they
have knowledge about how a particular item meets a particular user
need, and can therefore reason about the relationship between a
need and a possible recommendation.
Hybrid Recommender System:
➢ Combining any two of the systems in a manner that suits a
particular industry is known as a hybrid recommender system.
➢ This is the most sought-after recommender system, as it combines the
strengths of two or more recommender systems and also eliminates the
weaknesses that exist when only one recommender system is used.
➢ There are several ways in which the systems can be combined, such as:
Weighted Hybrid Recommender:
➢ The P-Tango system combines collaborative and content-based recommendation
systems, giving them equal weight at the start but gradually adjusting the
weighting as predictions about the user's ratings are confirmed or
disconfirmed.
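The weighted-hybrid idea can be sketched like this. The item scores, the feedback rule, and the step size are made-up assumptions for illustration; the point is only that the two component scores are blended, and the blend shifts toward whichever component predicted user feedback better:

```python
# Hypothetical scores (0 to 1) from two component recommenders
content_scores = {"Item 1": 0.9, "Item 2": 0.4, "Item 3": 0.6}
collab_scores  = {"Item 1": 0.5, "Item 2": 0.8, "Item 3": 0.7}

w_content, w_collab = 0.5, 0.5  # equal weights to start, as in P-Tango

def hybrid_score(item):
    """Blend the two component scores using the current weights."""
    return w_content * content_scores[item] + w_collab * collab_scores[item]

def adjust_weights(item, user_liked, step=0.1):
    """Shift weight toward whichever component predicted the feedback better."""
    global w_content, w_collab
    target = 1.0 if user_liked else 0.0
    err_content = abs(content_scores[item] - target)
    err_collab = abs(collab_scores[item] - target)
    if err_content < err_collab:
        w_content += step
    elif err_collab < err_content:
        w_collab += step
    total = w_content + w_collab
    w_content, w_collab = w_content / total, w_collab / total  # renormalize

print(hybrid_score("Item 1"))
# The user liked Item 1; the content recommender scored it higher,
# so the content weight grows.
adjust_weights("Item 1", user_liked=True)
print(w_content, w_collab)
```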
Switching Hybrid Recommender:
➢ Suppose we combine the content-based and collaborative recommender
systems; the switching hybrid can first deploy the content-based
recommender system and, if that does not work, deploy the
collaborative recommender system instead.
Mixed Hybrid Recommender:
➢ Where it is possible to make a large number of recommendations
simultaneously, we should go for a mixed recommender system. Here,
recommendations from more than one technique are presented together, so the
user can choose from a wide range of recommendations.
Data Driven AI
➢ Recent developments in cheap data storage (hard disks etc.), fast processors
(CPUs, GPUs and TPUs) and sophisticated deep learning algorithms have made it
possible to extract huge value from data, and that has led to the rise of
data-centric AI systems.
➢ Such AI systems can predict what will happen next based on what they’ve
experienced so far, very efficiently. At times these systems have managed to
outperform humans.
➢ “Data-Driven” approach makes strategic decisions based on Data Analysis and
Interpretation.
➢ Data-driven AI systems get trained with large datasets before they make
predictions, forecasts or decisions.
➢ However, the success of such systems depends largely on the availability of
correctly labelled large datasets.
➢ Almost every AI system we see around us is a data-driven AI system.
➢ Data-driven learning is a learning approach driven by research-like
access to data.
➢ Data-driven science is an interdisciplinary field that applies scientific
methods to extract knowledge from data.
➢ Data-driven testing is the creation of test scripts to run together with
their related data sets in a framework; the framework provides re-usable
test logic to reduce maintenance and improve test coverage.
Autonomous System
➢ Autonomous system is a technology which understands the environment
and reacts without human intervention.
➢ The autonomous systems are often based on Artificial intelligence.
➢ For example, self-driving car, space probe rover like Mars rover or a
floor cleaning robot etc. are autonomous system.
➢ An IoT device like a smart home system is another example of an
autonomous system.
➢ Because it does things like gather information about the temperature of
a room, then uses that information to determine whether the AC needs to be
switched on or not, then executes a task to achieve the desired result.
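The smart-AC behaviour described above is a sense-decide-act loop, and it can be sketched in a few lines of Python. The threshold, readings, and action names are illustrative assumptions, not any real device's API:

```python
# A minimal sense-decide-act loop for a hypothetical smart AC controller
THRESHOLD_C = 26.0  # switch the AC on above this room temperature

def decide(temperature_c, ac_on):
    """Decide what action, if any, the controller should take."""
    if temperature_c > THRESHOLD_C and not ac_on:
        return "switch_on"
    if temperature_c <= THRESHOLD_C and ac_on:
        return "switch_off"
    return "do_nothing"

# Simulated sensor readings gathered over time
readings = [24.5, 27.2, 28.0, 25.1]
ac_on = False
for temp in readings:
    action = decide(temp, ac_on)   # decide from the sensed state
    if action == "switch_on":      # act on the environment
        ac_on = True
    elif action == "switch_off":
        ac_on = False
    print(temp, action, "AC on" if ac_on else "AC off")
```

No human intervenes anywhere in the loop: the system senses, decides, and acts on its own, which is exactly what makes it autonomous.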
➢ Autonomous systems are defined as systems that are able to accomplish a
task, achieve a goal, or interact with their surroundings with minimal to no
human involvement.
➢ It is also essential that these systems be able to predict, plan, and be aware of
the world around them.
➢ The Intelligent Autonomous Systems group investigates methods for
cognition-enabled robot control.
➢ The research is at the intersection of robotics and Artificial
Intelligence and includes methods for intelligent perception, expert object
manipulation, plan-based robot control, and knowledge representation for
robots.
➢ Robots performing complex tasks in open domains, such as assisting
humans in a household or collaboratively assembling products in a factory, need
to have cognitive capabilities for interpreting their sensor data,
understanding scenes, selecting and parametrizing their actions, recognizing
and handling failures and interacting with humans.
➢ In our research, we are developing solutions for these kinds of issues and
implementing and testing them on the robots in our laboratory.
***Types of Autonomous Systems:
***Autonomous System (in Networking)
In computer networking, an autonomous system (AS) is a collection of
connected Internet Protocol (IP) routing prefixes under the control of one or
more network operators on behalf of a single administrative entity or domain
that presents a common, clearly defined routing policy to the Internet.
An autonomous system number (ASN) is a unique number that's
available globally to identify an autonomous system and which enables that
system to exchange exterior routing information with other neighboring
autonomous systems. A unique ASN is allocated to each AS for use in
Border Gateway Protocol (BGP) routing.
The three basic types of automation are:
➢ Fixed automation,
➢ Programmable automation, and
➢ Flexible automation.
***FIXED AUTOMATION
It is a system in which the sequence of processing (or assembly)
operations is fixed by the equipment configuration. The operations in
the sequence are usually simple.
It is the integration and coordination of many such operations into
one piece of equipment that makes the system complex.
The typical features of fixed automation are:
➢ High initial investment for custom–engineered equipment;
➢ High production rates;
➢ The economic justification for fixed automation is found in products
with very high demand rates and volumes.
➢ The high initial cost of the equipment can be spread over a very large
number of units, thus making the unit cost attractive compared to
alternative methods of production.
➢ Examples of fixed automation include mechanized assembly and
machining transfer lines.
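The economic argument above — high initial cost spread over a very large number of units — can be made concrete with a small calculation. The cost figures here are invented purely for illustration:

```python
# Hypothetical cost figures for a custom-engineered fixed-automation line
fixed_equipment_cost = 5_000_000   # one-time investment
variable_cost_per_unit = 2.0       # material + energy per unit produced

def unit_cost(volume):
    """Total cost per unit at a given production volume."""
    return fixed_equipment_cost / volume + variable_cost_per_unit

# As volume rises, the fixed cost per unit shrinks dramatically
for volume in (10_000, 100_000, 1_000_000):
    print(volume, "units ->", round(unit_cost(volume), 2), "per unit")
```

At low volumes the equipment cost dominates; at a million units the unit cost approaches the variable cost alone, which is why fixed automation is justified only for products with very high demand.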
***PROGRAMMABLE AUTOMATION
In this the production equipment is designed with the capability to change the
sequence of operations to accommodate different product configurations.
The operation sequence is controlled by a program, which is a set of
instructions coded so that the system can read and interpret them. New programs can be
prepared and entered into the equipment to produce new products. Some of the features that
characterize programmable automation are:
➢ High investment in general-purpose equipment;
➢ Low production rates relative to fixed automation;
➢ Flexibility to deal with changes in product configuration; and
➢ Most suitable for batch production.
➢ To produce each new batch of a different product, the system must be
reprogrammed with the set of machine instructions that correspond to the new
product. The physical setup of the machine must also be changed over: Tools must
be loaded, fixtures must be attached to the machine table, and machine
settings must be entered. This changeover procedure takes time. Consequently, the
typical cycle for a given product includes a period during which the setup and
reprogramming takes place, followed by a period in which the batch is produced.
Examples of programmed automation include numerically controlled machine tools
and industrial robots.
***FLEXIBLE AUTOMATION
It is an extension of programmable automation. A flexible automated system
is one that is capable of producing a variety of products (or parts) with virtually
no time lost for changeovers from one product to the next. There is no production
time lost while reprogramming the system and altering the physical setup (tooling,
fixtures, and machine setting). Consequently, the system can produce various
combinations and schedules of products instead of requiring that they be made in
separate batches. The features of flexible automation can be summarized as follows:
➢ High investment for a custom-engineered system.
➢ Continuous production of variable mixtures of products.
➢ Medium production rates.
➢ Flexibility to deal with product design variations.
➢ The essential features that distinguish flexible automation from programmable
automation are:
➢ The capacity to change part programs with no lost production time; and
➢ The capability to changeover the physical setup, again with no lost production time.
➢ Advances in computer systems technology are largely responsible for this
programming capability in flexible automation.
Human Like
➢ Human-Level Intelligence has come to be known as Strong AI or
Artificial General Intelligence (AGI).
➢ Humans have the ability to gain and apply general knowledge to
problem solving for a wide range of subject domains, some of which is
acquired by all human beings, such as walking and talking.
➢ Most of the time, when developers claim they've created a “human-like”
AI, what they mean is that they've automated a task that humans are often
employed for.
➢ Artificial General Intelligence (AGI) refers to a machine that can apply
knowledge and skills within different contexts – in short, one that can
learn by itself and work out problems like a human.
➢ Sophia: A humanoid robot developed by Hanson Robotics, is one of the
most human-like robots.
➢ Sophia is a realistic humanoid robot capable of human-like conversation,
displaying many human-like facial expressions, and interacting with
people.
➢ It's designed for research, education, and entertainment, and helps promote
public discussion about AI ethics and the future of robotics.
Activity
➢ Does this picture remind you of something? Can you please share your
thoughts about this image in the context of AI?
--------------------------------------------------------------------------------------------------------
➢ The dream of making machines that think and act like humans, is not new.
We have long tried to create intelligent machines to ease our work.
➢ These machines perform at greater speed, have higher operational ability and
accuracy, and are also more capable of undertaking highly tedious and monotonous
jobs compared to humans.
➢ Humans do not always depend on pre-fed data, as AI does. Human
memory, its computing power, and the human body as an entity may seem
insignificant compared to a machine’s hardware and software infrastructure.
➢ But the depth and layers present in our brains are far more complex and
sophisticated, and machines still cannot match them, nor are they likely to
in the near future.
➢ These days, the AI machines we see are not true AI. These machines are
very good at performing specific types of jobs.
Think of Jarvis in “Iron Man” and you’ll get a sneak peek of Artificial
General Intelligence (AGI), which is nothing but human-like AI, although we
still have a long way to go in attaining it.
Non-Technical Explanation of Deep Learning
➢ The excitement behind artificial intelligence and deep learning is at its peak. At the
same time there is growing perception that these terms are meant for techies to
understand, which is a misconception.
➢ Here is a brief overview of deep learning in simple non-technical terms!
➢ Imagine the below scenario.
➢ A courier delivery person (X) has to deliver a box to a destination, which is right
across the road from the courier company. In order to carry out the task, X will pick
up the box, cross the road and deliver the same to the concerned person.
➢ In neural network terminology, the activity of ‘crossing the road’ is termed a
neuron. So, input ‘X’ goes into a single neuron, ‘crossing the road’, to
produce the output/goal, which in this case is the ‘Final Destination’.
➢ In this example the starting and the ending are connected with a straight line. This
is an example of a simple neural network.
➢ The life of a delivery man is not so simple and straightforward in reality. They
start from a particular location in the city, and go to different locations across
the city. For instance, the delivery man could choose multiple
paths to fulfil his/her deliveries, as shown in the image below:
1. Option 1: ‘Address 1’ to ‘Destination 1’ and then to the ‘Final Destination’.
2. Option 2: ‘Address 2’ to ‘Destination 2’ and then to the ‘Final Destination’.
3. Option 3: ‘Address 3’ to ‘Destination 3’ and then to the ‘Final Destination’.
➢ All of the above (and more combinations) are valid paths that take the delivery man
from ‘start’ to ‘finish’.
However, some routes (or “paths”) are better than the rest. Let us assume that
all paths take the same time, but some are really bumpy while some are smooth.
Maybe the path chosen in ‘Option 3’ is bumpy and the delivery man has to burn
(lose) a lot more fuel on the way! Whereas ‘Option 2’ is perfectly smooth, so
the delivery man does not lose anything!
In this case, the delivery man would definitely prefer ‘Option 2’ (Address 2 –
Destination 2 – Final Destination), as he/she will not want to burn extra fuel!
So, the goal of Deep Learning is to assign ‘weights’ to each path from address
to destination so that the delivery man can find the most optimal path.
How does this work?
➢ As we discussed, the goal of the deep learning exercise is to assign weights to each path
from start to finish (start – address – destination – Final Destination). To do this for the
example discussed above, each time a delivery man goes from start to finish, the fuel
consumption for every path is computed. Based on this parameter (in reality the number of
parameters can go up to 100 or more), cost for each path is calculated and this cost is
called the ‘Loss Function’ in deep learning.
➢ As we saw above in our example, ‘Option 3’ (Address 3 – Destination 3 – Final
Destination) lost a lot of fuel, so that path had a large loss function. However, ‘Option 2’
(Address 2 – Destination 2 – Final Destination) cost the delivery man the least fuel,
thereby having a small loss function as well, making it the most efficient route!
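The "compute a loss for each path, then prefer the path with the smallest loss" idea can be sketched directly. The route names follow the example above, while the fuel figures are invented for illustration:

```python
# Candidate routes from the example, and a hypothetical fuel cost
# (the "loss") measured for each one
routes = {
    "Option 1": ["Address 1", "Destination 1", "Final Destination"],
    "Option 2": ["Address 2", "Destination 2", "Final Destination"],
    "Option 3": ["Address 3", "Destination 3", "Final Destination"],
}
fuel_loss = {"Option 1": 3.0, "Option 2": 1.0, "Option 3": 7.0}  # litres burnt

def best_route():
    """Pick the route with the smallest loss, as training effectively does."""
    return min(fuel_loss, key=fuel_loss.get)

# The smooth, low-loss route wins
print(best_route(), "->", routes[best_route()])
```

Real deep learning does not enumerate paths like this; it adjusts weights gradually so that low-loss "paths" through the network are favoured. But the objective is the same: minimize the loss function.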
➢ The above picture is a representation of a neural network of the most efficient path to be
taken by the deliveryman to reach the goal i.e. to go from starting point ‘X’ to ‘Final
Destination’ using the best possible route (which is most fuel efficient). This is a very
small neural network consisting of 3 neurons and 2 layers (address and destination).
➢ In reality neural networks are not as simple as the above discussed example. They may
look like the one shown below!
(Source - http://www.rsipvision.com/exploring-deep-learning/ )
➢ The term ‘deep’ in Deep Learning refers to the various layers you will find in a neural
network. This closely relates to how our brains work.
➢ The neural network shown above is a network of 3 layers (hidden layer 1,
hidden layer 2 and hidden layer 3), with 9 neurons in each layer.
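A layered network like the one described can be sketched in plain Python. The weights here are made-up numbers purely for illustration, and the network is far smaller than the 3-layer, 9-neuron-per-layer example; the point is only to show inputs flowing through stacked layers of neurons:

```python
import math

def sigmoid(x):
    """Squash a neuron's weighted sum into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One dense layer: each neuron weights its inputs, adds a bias, squashes."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# A tiny network: 2 inputs -> 3 hidden neurons -> 1 output.
# All weight and bias values are illustrative assumptions.
hidden_w = [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]]
hidden_b = [0.0, 0.1, -0.1]
out_w = [[1.0, -1.0, 0.5]]
out_b = [0.2]

x = [0.7, 0.3]                    # the input "X"
h = layer(x, hidden_w, hidden_b)  # hidden layer activations
y = layer(h, out_w, out_b)        # final output
print(h)
print(y)
```

A "deep" network simply stacks more such layers; training then adjusts every weight and bias so the final output matches the desired one with the smallest loss.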
***Four Capabilities of Intelligence:
➢ These four capabilities: classifying what we perceive, predicting
things in a sequence, reasoning, and planning, are some of the key
aspects of intelligence.
➢ As children, we were shown just a few pictures of cats and dogs, and from that we
could easily spot a new picture of a cat. We learnt to classify things we
perceive.
➢ Also, we learnt to predict what would happen next in a sequence of
events. For instance, in a peekaboo game when someone hid their face
with their hands and took their hands away we expected to see their face.
➢ As we aged, we learnt not to just go by what we see or perceive, but to
reason. For instance, even if we saw a cat with only three legs, we
would reason that it perhaps lost a leg in an unfortunate accident or was born
with a congenital deformity. We would not conclude that it is a new species
of cat.
➢ Lastly, we learnt very early to start planning to get what we want from
our parents/guardians.
The Future with AI, and AI in Action
➢ Transportation: Although it could take a decade or more to perfect them,
autonomous cars will one day ferry us from place to place. The place where AI may
have the biggest impact in the near future is self-driving cars. Unlike humans, AI
drivers never look down at the radio, put on mascara or argue with their kids in
the backseat. Thanks to Google, autonomous cars are already here, but watch for
them to be universal by 2030. Driverless trains already rule the rails in European
cities, and Boeing is building an autonomous jetliner (pilots are still required to put
info into the system).
➢ Manufacturing: AI powered robots work alongside humans to perform a limited
range of tasks like assembly and stacking, and predictive analysis sensors keep
equipment running smoothly.
➢ Medicine: AI algorithms will enable doctors and hospitals to better analyze data
and customize their health care to the genes, environment and lifestyle of each
patient. From diagnosing brain tumors to deciding which cancer treatment will work
best for an individual, AI will drive the personalized medicine revolution.
➢ Education: Textbooks are digitized with the help of AI, early-stage virtual tutors
assist human instructors and facial analysis gauges the emotions of students to help
determine who’s struggling or bored and better tailor the experience to their
individual needs.
Entertainment:
➢ In the future, you could sit on the couch and order up a custom movie featuring
virtual actors of your choice. Meanwhile, film studios may have a future without
flops: Sophisticated predictive programs will analyze a film script’s storyline and
forecast its box office potential.
➢ The film industry has given us a fair share of artificial intelligence movies
with diverse character roles — big to small, anthropomorphic to robotic, and evil to
good. And over the years, this technology has advanced to make its way into the media
and entertainment industry. From video games to movies and more, AI technologies have
been playing a crucial role in improving efficiencies and contributing to growth therein.
➢ Immersive visual experience is also another popular application of Artificial
intelligence. Virtual Reality (VR) and Augmented Reality (AR) are all the rage
nowadays. Development of VR content for games, live events, reality shows and other
entertainment have been gaining immense popularity and attention
Vital Tasks:
➢ AI will give assistants the tools to help older people stay independent and live in their
own homes longer. AI tools will keep nutritious food available, safely reach objects
on high shelves, and monitor movement in a senior’s home.
➢ AI tools could mow lawns, keep windows washed and even help with bathing and
hygiene. Many other jobs that are repetitive and physical are perfect for AI-based
tools. But AI-assisted work may be even more critical in dangerous fields like
mining, firefighting, clearing mines and handling radioactive materials.
Chatbots:
➢ AI-enabled chatbots have become a crucial part of sales and customer service.
Helping brands serve their customers and improve the entire support experience,
chatbots are bringing a new way for businesses to communicate with the world.
Earlier versions of chatbots were configured with a limited number of inputs and
weren’t able to process any queries outside of those parameters. As a result, the
communication experience was far less appealing and often frustrating for customers.
➢ However, artificial intelligence and its subset of technologies have transformed the
overall functionality and intelligence of chatbots. AI-Powered Machine Learning
(ML) Algorithms and Natural Language Processing (NLP) provided the ability
to Learn and Mimic Human Conversation. Deploying such advanced level of
processing makes chatbots extremely helpful in a wide variety of applications
ranging from Digital Commerce and Banking to Research, Sales and Brand
Building.
➢ Time-consuming and repetitive manual business tasks can be handled by AI-enabled
chatbots, thereby improving efficiency, reducing personnel costs, eliminating human
error and ensuring quality services.
Cybersecurity:
➢ There were about 707 million cybersecurity breaches in 2015, and 554 million
in the first half of 2016 alone. Companies are struggling to stay one step ahead of
hackers. USC experts say the self-learning and automation capabilities enabled
by AI can protect data more systematically and affordably, keeping people safer
from terrorism or even smaller-scale identity theft.
➢ AI-based tools look for patterns associated with malicious computer viruses
and programs before they can steal massive amounts of information or cause
disaster.
Save Our Planet:
➢ Artificial Intelligence can help us create a better world by addressing complex
environmental issues such as climate change, pollution, water scarcity,
extinction of species, and more. Transforming the traditional sectors and
systems, AI can prevent the adverse effects that threaten the existence of
biodiversity.
➢ According to a report published by Intel in 2018, 74% of the 200
professionals in this field surveyed agreed that AI can help solve
environmental challenges.
➢ AI’s ability to gather and process vast amounts of unstructured data and
derive patterns from them proves beneficial in resolving many
environmental issues.
➢ An autonomous floating garbage truck powered by AI is another case in
point. This autonomous truck is designed to remove large amounts of plastic
pollution from oceans and safeguard sea life.
Customer Service:
➢ Google is working on an AI assistant that can place human-like calls
to make appointments at, say, your neighborhood hair salon. In addition to
words, the system understands context and nuance.
Predictive Analytics
➢ Do you know how some of the largest companies in the world make so much
money? They use many different strategies, but AI-powered predictive analytics is
one of the most important. Take a look at Netflix for example.
➢ The popular media service provider saves more than $1 billion every year thanks
to its powerful content recommendation system.
➢ Instead of giving irrelevant suggestions, Netflix analyzes users’ real interests to
provide them with highly personalized content suggestions.
➢ Amazon does pretty much the same thing – it utilizes AI to display tailored product
ads to each website visitor individually. The system understands users’ earlier
interactions with the company and hence designs perfect cross-sales offers.
Automotive Industry
➢ Tesla is not only a leader in the electric vehicle market but also a pioneer of AI
deployment in the automotive industry. The company is famous for its
autonomous car program that relies on AI to design independent vehicles that can
operate on their own.
➢ Their cars have already amassed more than a billion miles of autopilot data in the US, thus
helping the developers to improve the system and add a wide range of other
functions. That includes features such as ABS braking, smart airbags, lane
guidance, cruise control, and many more.
Healthcare
➢ The entire healthcare industry is getting better due to the influence of AI. Watson for
Oncology is a true gem in the crown as it combines years of cancer treatment studies
and practice to come up with individualized therapies for patients.
There are many other use-cases we should discuss, but let’s just mention a few:
➢ AI is used in biopharmaceutical research.
➢ AI is used for earlier cancer and blood disease detection.
➢ AI is used in radiology for scan analysis and diagnostics.
➢ AI is used for rare disease treatment.
Music Industry:
➢ If you are into music, you must know Pandora. This music streaming service is
known for its machine learning algorithms, which allow the provider to analyze
audio content and make incredibly accurate song suggestions.
➢ Namely, Pandora analyzes a database of music to look at over 450 attributes.
AI helps it to analyze everything from vocals and percussions to grungy guitars and
synthesizer echoes. Such accurate data analytics enables Pandora to create song
lists and recommendations that perfectly match listeners’ preferences.
➢ AI helps musical composers by giving appropriate suggestions for
tunes.
Creating Powerful Websites:
➢ Creating a website may sound difficult, but today there are so many
different website builders that use AI. It has never been easier to create a
website with sophisticated AI design tools.
➢ Most require little to no coding skills, offer nearly endless options of
well-designed templates, and AI features to optimize your site seamlessly.
➢ Artificial intelligence has made web design easier than ever. However, you
still need to choose the right tools. With AI website builders, designing and
optimizing your site is no longer hard or expensive.
Listed below are some of the challenges posed by AI
1. Integrity of AI
➢ AI systems learn by analysing huge volumes of data. What are the consequences
of using biased data (via the training data set) in favor of a particular
class/section of customers or users?
➢ In 2016, the professional networking site LinkedIn was discovered to have a
gender bias in its system. When a search was made for the female name ‘Andrea’,
the platform would show recommendations/results of male users with the name
‘Andrew’ and its variations. However, the site did not show similar
recommendations/results for male names. i.e. A search/query for the name
‘Andrew’ did not result in a prompt asking the users if he/she meant to find
‘Andrea’. The company said this was due to a gender bias in their training data
which they fixed later.
2. Technological Unemployment
➢ Due to heavy automation, (with the advent of AI and robotics) some sets of
people will lose their jobs. These jobs will be replaced by intelligent machines.
There will be significant changes in the workforce and the market - there will be
creation of some high skilled jobs however some roles and jobs will become
obsolete.
3. Disproportionate control over data
➢ Data is the fuel to AI; the more data you have, the more intelligent machine you
would be able to develop. Technology giants are investing heavily in AI and data
acquisition projects. This gives them an unfair advantage over their smaller
competitors.
4. Privacy
➢ In this digitally connected world, privacy will become next to impossible.
Numerous consumer products, from smart home appliances to computer
applications, have features that make them vulnerable to data exploitation by AI.
AI can be utilized to identify, track and monitor individuals across multiple
devices, whether they are at work, home, or at a public location. To complicate
things further, AI does not forget anything. Once AI knows you, it knows you
forever!
Cognitive Computing (Perception, Learning, Reasoning)
➢ In summary, Cognitive Computing can be defined as a technology platform that is
built on AI and signal processing, to mimic the functioning of a human brain
(speech, vision, reasoning etc.) and help humans in decision making.
(For additional reference please see: https://www.coursera.org/lecture/introduction-to-
ai/cognitive-computing-perception-learning-reasoning-UBtrp)
Now that we have understood what Cognitive Computing is, let us explore the
need for the same
➢ Enormous amounts of unstructured data and information (Facebook pages,
Twitter posts, WhatsApp data, sensor data, traffic data, traffic signal data, medical
reports and so on) are available to us in this digital age.
➢ We need an advanced technology (traditional computing cannot process this
amount of data) to make sense of the data in order to help humans take better
decisions. Because Cognitive Computing generates new knowledge using existing
knowledge, it is viewed as a platform holding the potential of future computing.
Applications of Cognitive Computing
➢ In mainstream usage, Cognitive Computing is used to aid humans in their
decision-making process. Some examples of Cognitive Computing and its
applications include the treatment of disease/illness by supporting medical
doctors.
➢ For example, ‘The IBM Watson for Oncology’ has been used at the
‘Memorial Sloan Kettering Cancer Centre’ to provide oncologists with evidence-
based treatment alternatives for patients having cancer. When medical staff pose
questions, Watson generates a list of hypotheses and offers treatment possibilities
for doctors.
Cognitive Computing (Perception, Learning, Reasoning)
➢ Cognitive Computing refers to individual technologies that
perform specific tasks to facilitate human intelligence.
➢ Basically, these are smart decision support systems that we have been
working with since the beginning of the internet boom.
➢ With recent breakthroughs in technology, these support systems
simply use better data, better algorithms in order to get a better analysis
of a huge amount of information.
Also, you can refer to Cognitive Computing as:
➢ Understanding and simulating reasoning
➢ Understanding and simulating human behavior
Using cognitive computing systems helps humans make better
decisions at work. Some of the applications of cognitive computing include
speech recognition, sentiment analysis, face detection, risk assessment, and
fraud detection.
➢ The term cognitive computing is typically used to describe AI systems that
aim to simulate human thought.
➢ A number of AI technologies are required for a computer system to
build cognitive models that mimic human thought processes,
including machine learning, deep learning, neural networks, NLP and
sentiment analysis.
➢ In the light of the above notions we define cognitive reasoning as modeling
the human ability to draw meaningful conclusions despite incomplete and
inconsistent knowledge.
How Does Cognitive Computing Work?
➢ Cognitive computing systems synthesize data from various information
sources while weighing context and conflicting evidence to suggest suitable
answers.
➢ To achieve this, cognitive systems include self-learning technologies
using data mining, pattern recognition, and natural language processing
(NLP) to understand the way the human brain works.
Perception:
➢ Cognitive perception includes the senses (listening, seeing, tasting, smelling
and feeling) through which we deal with information.
➢ Perception: obtaining information from our environment.
➢ Perception is the ability to capture, process and actively make sense of the information
that our senses receive. It is a cognitive process that makes it possible to interpret our
surroundings using the stimuli we receive through our sensory organs.
➢ Perception is the process of acquiring, interpreting, selecting, and organizing
sensory information. In AI, the study on perception is mostly focused on the
reproduction of human perception, especially on the perception of aural and visual
signals.
➢ Perception often results in learning information that is directly relevant to the
goals. Perception becomes more skillful with practice and experience, and perceptual
learning can be thought of as the education of attention.
Learning:
Through machine learning, AI is able to learn from a data
set to solve problems and give relevant recommendations.
Cognitive computing systems learn at scale, reason
with purpose and interact with humans naturally.
(e.g.)
We learn to show proper respect for the National Anthem by standing up
when it is played.
Reasoning:
Reasoning is the mental (cognitive) process of looking for reasons for
beliefs, conclusions, actions or feelings. Humans have the ability to engage in
reasoning about their own reasoning using introspection.
(e.g.)
If we saw a cat with three legs we would reason it perhaps lost a leg in
an unfortunate accident or was born with a congenital deformity. We would not
conclude it is a new species of cat.
To achieve these capabilities, cognitive computing systems must have some key attributes.
Key Attributes:
Adaptive:
Cognitive systems must be flexible enough to understand the changes in the
information. Also, the systems must be able to digest dynamic data in real-time and make
adjustments as the data and environment change.
Interactive:
Human Computer Interaction (HCI) is a critical component in cognitive systems. Users
must be able to interact with cognitive machines and define their needs as those needs change.
The technologies must also be able to interact with other processors, devices and cloud
platforms.
Iterative and Stateful:
These systems must be able to identify problems by asking questions or pulling in
additional data if the stated problem is incomplete. They do this by maintaining information
about similar situations that have previously occurred.
Contextual:
Cognitive systems must understand, identify and mine contextual data, such as
syntax, time, location, domain, requirements, a specific user’s profile, tasks or goals. They
may draw on multiple sources of information, including structured and unstructured data and
visual, auditory or sensor data.
Cognitive Computing vs Artificial Intelligence
➢ Cognitive Computing focuses on mimicking human behavior and reasoning
to solve complex problems; AI augments human thinking to solve complex
problems and focuses on providing accurate results.
➢ Cognitive Computing simulates human thought processes to find solutions
to complex problems; AI finds patterns to learn or reveal hidden
information and find solutions.
➢ Cognitive Computing simply supplements information for humans to make
decisions; AI makes decisions on its own, minimizing the role of humans.
➢ Cognitive Computing is mostly used in sectors like customer service,
healthcare and industry; AI is mostly used in finance, security,
healthcare, retail, manufacturing, etc.
Applications of Cognitive AI
➢ Smart IoT: This includes connecting and optimizing devices, data and the
IoT. But as we get more sensors and devices, the real key is what is going
to connect them.
➢ AI-Enabled Cybersecurity: We can fight the cyber-attacks with the use of data
security encryption and enhanced situational awareness powered by AI. This
will provide a document, data, and network locking using smart distributed data
secured by an AI key.
➢ Content AI: A solution powered by cognitive intelligence continuously learns
and reasons and can simultaneously integrate location, time of day, user habits,
semantic intensity, intent, sentiment, social media, contextual awareness, and
other personal attributes.
➢ Cognitive Analytics in Healthcare: The technology implements human-like
reasoning software functions that perform deductive, inductive and abductive
analysis for life sciences applications.
➢ Intent-Based NLP: Cognitive intelligence can help a business become more
analytical in their approach to management and decision making. This will work
as the next step from machine learning and the future applications of AI will
incline towards using this for performing logical reasoning and analysis.
AI and Society
➢ Business consultant Ronald van Loon asserts, “The role of Artificial Intelligence devices in
augmenting humans and in achieving tasks that were previously considered unachievable is just
amazing.”
➢ With the world progressing towards an age of unlimited innovations and unhindered
progress, we can expect that AI will have a greater role in actually serving us for the better.
➢ Below are a number of ways pundits expect AI to benefit society.
In the Workplace:
➢ Many futurists fear AI will make humans redundant dramatically increasing unemployment and
causing social unrest.
➢ Many other futurists believe AI and other Information Age technologies will create more jobs than
it eliminates. Regardless of which viewpoint is correct, there will be job displacements requiring
retraining and reskilling.
➢ Beyond AI’s impact on employment, Peter Daisyme co-founder of Hostt, asserts AI will make the
workplace safer. He explains, “Many jobs, despite a focus on health and safety, still involve a
considerable amount of risk.
➢ Risks involved for employees of U.S. factories can be alleviated by the use of the Industrial
Internet of Things and predictive maintenance.
➢ The role of AI and cognitive computing [involves] improving the link between maintenance and
workplace safety.
➢ These technologies address issues before they put factory workers at risk. In addition, AI can take
on more roles within the factory, eliminating the need for humans to work in potentially dangerous areas.
➢ AI can also help relieve workers from having to perform tedious jobs freeing them to perform
more interesting and satisfying work.
In Healthcare:
➢ Ilan Moscovitz asserts, “Healthcare is ground zero for AI. In fact, AI has been
quietly helping doctors treat diseases for almost its entire existence.” AI is being used to
assist medical professionals in their diagnoses and to help develop effective drugs more
quickly.
➢ AI can help monitor a patient’s condition 24/7 regardless of their location. It can
help predict which patients are likely to make repeat visits to emergency rooms and can
even predict when patients are likely to die.
In Agriculture:
➢ Artificial intelligence is only one of the Information Age technologies that will
help us feed a growing world population using fewer resources.
➢ Precision farming and other modern techniques rely heavily on these technologies.
In Transportation:
➢ Driverless cars are being pursued by every major automobile manufacturer.
➢ These vehicles are predicted to dramatically reduce deaths due to accidents by
reducing human errors and constantly monitoring the physical condition of the car.
➢ In addition, autonomous trucks are being tested in platoons in North Carolina and
autonomous ships are predicted to one day fill sea lanes delivering supplies.
➢ Those are only a few of the ways cognitive technologies are predicted to affect our
lives in the years ahead.
In Education:
➢ Artificial intelligence can be used to personalize education so students can advance at the pace
best-suited to their skills, temperament, and mental abilities.
➢ These technologies won’t eliminate the need for talented teachers; rather, they will help free
teachers to deal with special needs.
In Our Cities:
➢ There is a growing smart cities movement aimed at helping urban residents use scarce resources
more wisely.
➢ As the urbanization trend continues to see cities gain population, smart technologies will
help make urban life more pleasurable and affordable.
➢ AI systems will help manage traffic, improve public transportation, monitor water systems
and electrical grids, and assist law enforcement.
➢ Daisyme notes, “Smart cities, powered by AI platforms, are changing the very fiber of urban
areas. AI technology is addressing old social ills in new ways.”
In Our Environment:
➢ Despite the doubters, scientists have used big data and sophisticated models to demonstrate how
our environment is changing.
➢ Daisyme writes, “Up to now, there has been no way to fully understand the impact that human
beings and economic development have on the environment, including their adverse effects on plant and
animal life.
➢ But that may soon change, as AI provides the necessary tools to collect, shift and organize
extraordinary amounts of data.
➢ AI can help us prevent further damage and better understand how to address development
needs while focusing on sustainability.”
***Recommended Deep-Dive in NLP
Deep Dive:
➢ A deep dive analysis has been described as ‘a strategy of immersing a team
rapidly into a situation, to provide solutions or create ideas’.
➢ Deep dive analysis will normally focus on areas such as process,
organization, leadership and culture.
Natural Language Processing:
➢ NLP stands for Natural Language Processing, which is a part of Computer
Science, Human Language, and Artificial Intelligence.
➢ It is the technology that is used by machines to understand, analyse,
manipulate, and interpret human languages.
➢ It helps developers to organize knowledge for performing tasks such as
translation, automatic summarization, Named Entity Recognition (NER), speech
recognition, relationship extraction, and topic segmentation.
➢ Natural Language Processing (NLP) is one of the most important
Technologies in Data Science, one of the top branches of machine learning and
artificial intelligence (AI), and has abundant job prospects.
➢ Applications of NLP are found everywhere that communication is handled
mostly through language: web searches, AI-based applications, and so on.
➢ Work on Natural Language Processing started in the 1940s.
➢ In 1948, the first recognizable NLP application was introduced at
Birkbeck College, London.
➢ In 1965, the MIT AI Laboratory created ELIZA, the first chatbot based on
Natural Language Processing (NLP).
Advantages of NLP
➢ NLP helps users to ask questions about any subject and get a direct
response within seconds.
➢ NLP offers exact answers to questions, meaning it does not return
unnecessary or unwanted information.
➢ NLP helps computers to communicate with humans in their languages.
➢ It is very time efficient.
➢ Many companies use NLP to improve the efficiency and accuracy of their
documentation processes and to identify information in large databases.
Components of NLP
1. Natural Language Understanding (NLU)
➢ Natural Language Understanding (NLU) helps the machine to understand and
analyse human language by extracting the metadata from content such as concepts, entities,
keywords, emotion, relations, and semantic roles.
➢ NLU is mainly used in business applications to understand the customer's problem in
both spoken and written language.
NLU involves the following tasks:
➢ It is used to map the given input into useful representation.
➢ It is used to analyze different aspects of the language.
2. Natural Language Generation (NLG)
➢ Natural Language Generation (NLG) acts as a translator that converts the
computerized data into natural language representation.
➢ It mainly involves Text planning, Sentence planning, and Text Realization.
Applications and Future of NLP:
1. Question Answering
➢ Question Answering focuses on building systems that automatically answer
the questions asked by humans in a natural language.
2. Spam Detection
➢ Spam detection is used to detect unwanted e-mails getting to a user's inbox.
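The idea behind such a detector can be sketched with a tiny Naive Bayes classifier. Everything below (the training messages and the word statistics) is invented purely for illustration; real filters are trained on thousands of labelled emails.

```python
import math
from collections import Counter

# Toy Naive Bayes spam filter. The training messages are invented for
# illustration only; real filters learn from large labelled corpora.
SPAM = ["win money now", "free prize claim now", "free money offer"]
HAM = ["meeting at noon", "see you at the project review", "lunch tomorrow"]

def word_counts(messages):
    # Count how often each word occurs across all messages of one class.
    return Counter(w for m in messages for w in m.split())

spam_counts, ham_counts = word_counts(SPAM), word_counts(HAM)
spam_total, ham_total = sum(spam_counts.values()), sum(ham_counts.values())
vocab_size = len(set(spam_counts) | set(ham_counts))

def log_likelihood(message, counts, total):
    # Laplace-smoothed log-probability of the message's words under one class.
    return sum(math.log((counts[w] + 1) / (total + vocab_size))
               for w in message.split())

def is_spam(message):
    # Classify by comparing the two class likelihoods (equal priors assumed).
    return (log_likelihood(message, spam_counts, spam_total)
            > log_likelihood(message, ham_counts, ham_total))

print(is_spam("claim your free money"))     # True
print(is_spam("project meeting tomorrow"))  # False
```

Words never seen in training still get a small smoothed probability, which is why the comparison stays well defined for new messages.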
3. Sentiment Analysis
➢ Sentiment Analysis is also known as opinion mining. It is used on the web
to analyse the attitude, behavior, and emotional state of the sender.
➢ This application is implemented through a combination of NLP (Natural
Language Processing) and statistics: values are assigned to the text (positive,
negative, or neutral) to identify the mood of the context (happy, sad, angry, etc.).
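A toy lexicon-based scorer illustrates the assign-values-to-text idea. The word lists here are invented for illustration; real sentiment systems use trained models and much larger lexicons.

```python
import string

# Hand-made sentiment lexicons (illustrative only).
POSITIVE = {"good", "great", "happy", "love", "excellent"}
NEGATIVE = {"bad", "terrible", "sad", "hate", "poor"}

def sentiment(text):
    # Lower-case the text, strip punctuation, then count lexicon hits.
    words = text.lower().translate(
        str.maketrans("", "", string.punctuation)).split()
    score = (sum(w in POSITIVE for w in words)
             - sum(w in NEGATIVE for w in words))
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I love this great product"))  # positive
print(sentiment("What a terrible, bad day"))   # negative
```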
4. Machine Translation
➢ Machine translation is used to translate text or speech from one natural
language to another natural language.
5. Spelling Correction
➢ Microsoft provides word-processing software such as MS Word and
PowerPoint that performs spelling correction.
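A common building block of spelling correction is edit distance. The sketch below, with a made-up three-word vocabulary, picks the dictionary word closest to a misspelling; real spell checkers combine this with word-frequency and context models.

```python
def edit_distance(a, b):
    # Classic dynamic-programming Levenshtein distance, one row at a time.
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # delete from a
                                     dp[j - 1] + 1,    # insert into a
                                     prev + (ca != cb))  # substitute
    return dp[-1]

def correct(word, vocabulary):
    # Suggest the vocabulary word with the smallest edit distance.
    return min(vocabulary, key=lambda v: edit_distance(word, v))

print(correct("speling", ["spelling", "speaking", "sailing"]))  # spelling
```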
6. Speech Recognition
➢ Speech Recognition is used for converting spoken words into text.
➢ It is used in applications, such as mobile, home automation, video
recovery, dictating to Microsoft Word, voice biometrics, voice user interface, and
so on.
7. Chatbot
➢ Implementing a Chatbot is one of the important applications of NLP. It is
used by many companies to provide chat-based customer services.
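At the simplest end, a chatbot can be a set of hand-written patterns in the spirit of ELIZA. The patterns and replies below are invented for illustration; production chatbots use trained NLP models instead.

```python
import re

# Hand-written pattern/response rules (illustrative only).
RULES = [
    (r"\b(hi|hello|hey)\b", "Hello! How can I help you today?"),
    (r"\border\b.*\bstatus\b", "Please share your order number and I will check."),
    (r"\bthank(s| you)\b", "You're welcome!"),
]

def reply(message):
    # Return the response of the first rule whose pattern matches.
    for pattern, response in RULES:
        if re.search(pattern, message.lower()):
            return response
    return "Sorry, I did not understand. Could you rephrase?"

print(reply("Hello there"))      # Hello! How can I help you today?
print(reply("thanks a lot"))     # You're welcome!
```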
8. Information Extraction
➢ Information extraction is one of the most important applications of NLP.
➢ It is used for extracting structured information from unstructured or semi-
structured machine-readable documents.
9. Natural Language Understanding (NLU)
➢ It converts a large set of text into more formal representations, such as
first-order logic structures, that are easier for computer programs to
manipulate.
Phases of NLP
1. Lexical and Morphological Analysis
➢ The first phase of NLP is lexical analysis. This phase scans the input text
as a stream of characters and converts it into meaningful lexemes. It divides the whole
text into paragraphs, sentences, and words.
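This phase can be sketched with regular expressions that split text into sentences and then into word tokens, a deliberately simplified stand-in for a real tokenizer.

```python
import re

def lexical_analysis(text):
    # Split into sentences at terminal punctuation, then extract word tokens.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return [re.findall(r"[A-Za-z']+", s) for s in sentences]

tokens = lexical_analysis("NLP is fun. It has five phases!")
print(tokens)  # [['NLP', 'is', 'fun'], ['It', 'has', 'five', 'phases']]
```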
2. Syntactic Analysis (Parsing)
➢ Syntactic Analysis is used to check grammar, word arrangements, and shows
the relationship among the words.
➢ Example: Agra goes to the USA
➢ In the real world, Agra goes to the USA, does not make any sense, so this
sentence is rejected by the Syntactic analyzer.
3. Semantic Analysis
➢ Semantic analysis is concerned with meaningful representation. It mainly
focuses on the literal meaning of words, phrases, and sentences.
4. Discourse Integration
➢ Discourse Integration depends upon the sentences that precede it and also
invokes the meaning of the sentences that follow it.
5. Pragmatic Analysis
➢ Pragmatic analysis is the fifth and last phase of NLP. It helps you to discover the
intended effect by applying a set of rules that characterize cooperative dialogues.
➢ For Example: "Open the Door" is interpreted as a request instead of an order.
NLP Libraries:
➢ Scikit-learn: It provides a wide range of algorithms for building machine
learning models in Python.
➢ Natural language Toolkit (NLTK): NLTK is a complete toolkit for all NLP
techniques.
➢ Pattern: It is a web mining module for NLP and machine learning.
➢ TextBlob: It provides an easy interface to learn basic NLP tasks like
sentiment analysis, noun phrase extraction, or POS tagging.
➢ Quepy: Quepy is used to transform natural language questions into queries
in a database query language.
➢ SpaCy: SpaCy is an open-source NLP library which is used for Data
Extraction, Data Analysis, Sentiment Analysis, and Text Summarization.
➢ Gensim: Gensim works with large datasets and processes data streams.
***Recommended Deep-Dive in Computer Vision
Computer Vision:
➢ It is an interdisciplinary scientific field that deals with how computers can
gain high-level understanding from digital images or videos.
➢ From the perspective of engineering, it seeks to understand and automate
tasks that the human visual system can do.
Object Classification: What broad category of object is in this photograph?
Object Identification: Which type of a given object is in this photograph?
Object Verification: Is the object in the photograph?
Object Detection: Where are the objects in the photograph (location of the objects)?
Object Landmark Detection: What are the key points for the object in the
photograph?
Object Segmentation: What pixels belong to the object in the image?
Object Recognition: What objects are in this photograph and where are they?
Applications and Future
Smartphones and Web:
➢ Google Lens, QR Codes, Snapchat filters (face tracking), Night Sight, Face and
Expression Detection, Lens Blur, Portrait mode, Google Photos (Face, Object and scene
recognition), Google Maps (Image Stitching).
Medical Imaging:
➢ Computer vision helps in MRI reconstruction, automatic pathology, diagnosis, and
computer aided surgeries and more.
Insurance:
➢ Property Inspection and Damage analysis
Optical Character Recognition (OCR):
➢ Optical character recognition or optical character reader is the electronic or mechanical
conversion of images of typed, handwritten or printed text into machine-encoded text, whether
from a scanned document, a photo of a document, a scene-photo or from subtitle text
superimposed on an image.
3D Model Building (Photogrammetry):
➢ 3D modeling refers to the process of creating a mathematical representation of a 3-
dimensional object or shape.
➢ Motion pictures, video games, architecture, construction, product development, medical,
all these industries are using 3D models for visualizing, simulating and rendering graphic
designs.
Merging CGI with Live Actors in Movies:
➢ Films that are both live-action and computer-animated tend to have fictional characters or
figures represented and characterized by cast members through motion capture and then
animated and modeled by animators, while films that are live action and traditionally animated
use hand-drawn, computer-generated imagery (CGI).
Industry:
➢ In manufacturing or assembly line, computer vision is being used for
automated inspections, identifying defective products on the production line, and
for remote inspections of machinery.
➢ The technology is also used to increase the efficiency of the production line.
Automotive:
➢ Self-driving AI analyzes data from a camera mounted on the vehicle to
automate lane finding, detect obstacles, and recognize traffic signs and signals.
Automated Lip Reading:
➢ This is one of the practical implementations of computer vision to help
people with disabilities or who cannot speak, it reads the movement of lips and
compares it to already known movements that were recorded and used to create
the model.
AR/VR:
➢ Object occlusion, outside-in tracking, and inside-out tracking for virtual and
augmented reality.
Internet:
➢ Image search, mapping, photo captioning, aerial imaging for maps, video
categorization and more.
Oil and Natural Gas:
➢ The oil and natural gas companies produce millions of barrels of oil and billions of
cubic feet of gas every day but for this to happen, first, the geologists have to find a feasible
location from where oil and gas can be extracted.
➢ To find these locations they have to analyze thousands of different locations using
images taken on the spot.
➢ If geologists had to analyze each image manually, how long would it take to find
the best location? Perhaps months or even a year. With computer vision, the analysis
period can be brought down to a few days or even a few hours.
➢ You just need to feed in the images taken to the pre-trained model and it will get the
work done.
Hiring process:
➢ In the HR world, computer vision is changing how candidates get hired in the interview
process. By using computer vision, machine learning, and data science, recruiters are able to
quantify soft skills and conduct early candidate assessments to help large companies shortlist
candidates.
Video surveillance:
➢ The Concept of video tagging is used to tag videos with keywords based on the objects
that appear in each scene.
➢ Now imagine being a security company asked to look for a suspect in a blue
van among hours and hours of footage. You just have to feed the video to the algorithm.
➢ With computer vision and object recognition, manually searching through videos has
become a thing of the past.
Construction:
➢ Flying a drone with wires around an electric tower does not sound particularly
safe either.
➢ So how could you apply computer vision here? Imagine that if a person on the
ground took high-resolution images from different angles.
➢ Then the computer vision specialist could create a custom classifier and use it to
detect the flaws and amount of rust or cracks present.
Healthcare:
➢ From the past few years, the healthcare industry has adopted many next-generation
technologies that include artificial intelligence and machine learning concept.
➢ One of them is computer vision which helps determine or diagnose disease in
humans or any living creatures.
Agriculture:
➢ The agricultural farms have started using computer vision technologies in various
forms such as smart tractors, smart farming equipment, and drones to help monitor and
maintain the fields efficiently and easily.
➢ It also helps improve yield and the quality of crops.
Military:
➢ For modern armies, Computer Vision is an important technology that helps them to
detect enemy troops and it also enhances the targeting capabilities of guided missile
systems.
➢ It uses image sensors to deliver battlefield intelligence used for tactical decision-
making.
➢ Another important Computer Vision application is in autonomous vehicles
such as UAVs and remote-controlled semi-autonomous vehicles, which need to
navigate challenging terrain.
Phases of Computer Vision
1. Digitization
➢ It is the process of converting information into a digital format.
➢ The result is the representation of an object, image, sound, document or
signal by generating a series of numbers that describe a discrete set of
points or samples.
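Digitization can be sketched by sampling and quantizing a continuous signal, here a sine wave; the sample rate and number of quantization levels below are arbitrary choices for illustration.

```python
import math

SAMPLE_RATE = 8   # samples per second (illustrative choice)
LEVELS = 256      # 8-bit quantization

def digitize(duration=1.0):
    # Sample a 1 Hz sine wave and map each amplitude to an integer code.
    samples = []
    for n in range(int(SAMPLE_RATE * duration)):
        t = n / SAMPLE_RATE
        value = math.sin(2 * math.pi * t)             # continuous, in [-1, 1]
        code = round((value + 1) / 2 * (LEVELS - 1))  # discrete, in 0..255
        samples.append(code)
    return samples

print(digitize())  # eight integer codes representing one sine cycle
```

The list of integers is the "series of numbers" the definition refers to: a discrete stand-in for the continuous signal.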
2. Signal Processing
➢ It is an electrical engineering subfield that focuses on analysing,
modifying, and synthesizing signals such as sound, images, and scientific
measurements.
➢ Digital Filters, Linear Filters, Gaussian Filters and Smoothing.
3. Edge Detection
➢ Edge detection includes a variety of mathematical methods that aim at
identifying points in a digital image at which the image brightness changes
sharply or, more formally, has discontinuities.
➢ The points at which image brightness changes sharply are typically
organized into a set of curved line segments termed edges.
➢ Identifying Edge Pixels, Gradient Operators, Roberts Operator, Sobel
Operator, Laplacian Operator, Successive Refinement and Marr's Primal Sketch.
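A minimal NumPy sketch of the Sobel operator listed above: the two 3x3 kernels approximate the horizontal and vertical intensity gradients, and their magnitude peaks at edges. Production systems use optimized convolution routines, for example in OpenCV.

```python
import numpy as np

def sobel_edges(img):
    # Convolve with the Sobel x- and y-kernels and return the
    # gradient magnitude (valid region only, no padding).
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(kx * patch)
            gy[i, j] = np.sum(ky * patch)
    return np.hypot(gx, gy)

# Synthetic image: dark left half, bright right half -> one vertical edge.
img = np.zeros((8, 8))
img[:, 4:] = 255.0
edges = sobel_edges(img)   # large values only near column 4
```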
4. Region Detection
➢ Region Based Convolutional Neural Networks (R-CNN) are a family of
machine learning models for computer vision and specifically object detection.
➢ Region Growing, the Problem of Texture, Representing Regions (Quad-Trees)
and Computational Problems.
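Region growing, named above, can be sketched as a flood fill from a seed pixel that absorbs neighbours whose intensity stays within a threshold of the seed. The tiny image and threshold below are invented for illustration.

```python
from collections import deque

def grow_region(img, seed, threshold=10):
    # Breadth-first region growing over 4-connected neighbours.
    h, w = len(img), len(img[0])
    sy, sx = seed
    base = img[sy][sx]
    region, queue = {seed}, deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w
                    and (ny, nx) not in region
                    and abs(img[ny][nx] - base) <= threshold):
                region.add((ny, nx))
                queue.append((ny, nx))
    return region

# A 3x3 image with a dark region (top-left) and bright pixels elsewhere.
img = [[10, 12, 200],
       [11, 13, 210],
       [205, 15, 220]]
print(len(grow_region(img, (0, 0))))  # 5 dark pixels form one region
```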
5. Reconstructing Objects
➢ In computer vision and computer graphics, 3D reconstruction is the process
of capturing the shape and appearance of real objects.
➢ Waltz’s Algorithm & Problems with Labelling.
6. Identifying Objects
➢ Object detection is a computer technology related to computer vision and
image processing that deals with detecting instances of semantic objects of a
certain class (such as humans, buildings, or cars) in digital images and videos.
➢ Using Bitmaps, Summary Statistics, Outlines, Handwritten Recognition
and Properties of Regions.
7. Multiple Images
➢ Some vision tasks combine information from several images of the same
scene, for example to recover depth or motion.
➢ Stereo Vision & Moving Pictures.
https://teachablemachine.withgoogle.com/
Thank You
Natural Language Processing for development
 

Recently uploaded

Streamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupStreamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupFlorian Wilhelm
 
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks..."LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...Fwdays
 
costume and set research powerpoint presentation
costume and set research powerpoint presentationcostume and set research powerpoint presentation
costume and set research powerpoint presentationphoebematthew05
 
Pigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping ElbowsPigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping ElbowsPigging Solutions
 
Install Stable Diffusion in windows machine
Install Stable Diffusion in windows machineInstall Stable Diffusion in windows machine
Install Stable Diffusion in windows machinePadma Pradeep
 
WordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your BrandWordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your Brandgvaughan
 
Unblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesUnblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesSinan KOZAK
 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsRizwan Syed
 
CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):comworks
 
Benefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other FrameworksBenefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other FrameworksSoftradix Technologies
 
Key Features Of Token Development (1).pptx
Key  Features Of Token  Development (1).pptxKey  Features Of Token  Development (1).pptx
Key Features Of Token Development (1).pptxLBM Solutions
 
Science&tech:THE INFORMATION AGE STS.pdf
Science&tech:THE INFORMATION AGE STS.pdfScience&tech:THE INFORMATION AGE STS.pdf
Science&tech:THE INFORMATION AGE STS.pdfjimielynbastida
 
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...shyamraj55
 
APIForce Zurich 5 April Automation LPDG
APIForce Zurich 5 April  Automation LPDGAPIForce Zurich 5 April  Automation LPDG
APIForce Zurich 5 April Automation LPDGMarianaLemus7
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsMark Billinghurst
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubKalema Edgar
 
Unraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfUnraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfAlex Barbosa Coqueiro
 
Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Mattias Andersson
 

Recently uploaded (20)

Streamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupStreamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project Setup
 
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks..."LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
 
costume and set research powerpoint presentation
costume and set research powerpoint presentationcostume and set research powerpoint presentation
costume and set research powerpoint presentation
 
Pigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping ElbowsPigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping Elbows
 
Vulnerability_Management_GRC_by Sohang Sengupta.pptx
Vulnerability_Management_GRC_by Sohang Sengupta.pptxVulnerability_Management_GRC_by Sohang Sengupta.pptx
Vulnerability_Management_GRC_by Sohang Sengupta.pptx
 
Install Stable Diffusion in windows machine
Install Stable Diffusion in windows machineInstall Stable Diffusion in windows machine
Install Stable Diffusion in windows machine
 
WordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your BrandWordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your Brand
 
Unblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesUnblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen Frames
 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL Certs
 
CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):
 
Benefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other FrameworksBenefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other Frameworks
 
Key Features Of Token Development (1).pptx
Key  Features Of Token  Development (1).pptxKey  Features Of Token  Development (1).pptx
Key Features Of Token Development (1).pptx
 
Science&tech:THE INFORMATION AGE STS.pdf
Science&tech:THE INFORMATION AGE STS.pdfScience&tech:THE INFORMATION AGE STS.pdf
Science&tech:THE INFORMATION AGE STS.pdf
 
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
 
APIForce Zurich 5 April Automation LPDG
APIForce Zurich 5 April  Automation LPDGAPIForce Zurich 5 April  Automation LPDG
APIForce Zurich 5 April Automation LPDG
 
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptxE-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR Systems
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding Club
 
Unraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfUnraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdf
 
Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?
 

Artificial Intelligence (Unit - 2).pdf

chatbot know that you’ve posted a standard greeting, which in turn allows the chatbot to leverage its AI capabilities to come up with a fitting response. In this case, the chatbot will likely respond with a return greeting.
➢ Without Natural Language Processing, a chatbot cannot meaningfully differentiate between the responses “Hello” and “Goodbye”. To a chatbot without NLP, “Hello” and “Goodbye” are both nothing more than text-based user inputs.
➢ Natural Language Processing (NLP) provides context and meaning to text-based user inputs so that the AI can come up with the best response.
➢ Please visit this link to get a first-hand experience of banking-related conversations with EVA: https://v1.hdfcbank.com/htdocs/common/eva/index.html
➢ You can also visit https://watson-assistant-demo.ng.bluemix.net/ for a completely new experience with the IBM web-based text Chatbot.
➢ Apollo Hospitals URL: https://covid.apollo247.com/
Types of Chatbots
Chatbots can broadly be divided into two types:
1. Rule-based Chatbot
2. Machine Learning (or AI) based Chatbot
1. Rule-Based Chatbot
This is the simpler form of Chatbot, which follows a set of pre-defined rules when responding to a user’s questions. For example, a Chatbot installed at a school reception area can retrieve data from the school’s archive to answer queries on the school fee structure, courses offered, pass percentage, etc.
A rule-based Chatbot is used for simple conversations and cannot be used for complex conversations. For example, let us create the rule below and train the Chatbot:
bot.train(['How are you?', 'I am good.', 'That is good to hear.', 'Thank you', 'You are welcome.'])
After training, if you ask the Chatbot, “How are you?”, you would get the response, “I am good.” But if you ask “What’s going on?” or “What’s up?”, the rule-based Chatbot won’t be able to answer, since it has been trained on only a certain set of questions.
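The `bot.train` call above follows the list-trainer style of chatbot libraries; the same rule-based idea can also be sketched from scratch as a plain lookup table (the rule dictionary and function names here are illustrative, not from any library):

```python
# Minimal rule-based chatbot: exact-match lookup with a fixed fallback.
RULES = {
    "how are you?": "I am good.",
    "i am good.": "That is good to hear.",
    "thank you": "You are welcome.",
}

def respond(message: str) -> str:
    """Return the canned reply for a known message, else a fallback."""
    return RULES.get(message.strip().lower(), "Sorry, I don't understand.")

print(respond("How are you?"))  # I am good.
print(respond("What's up?"))    # Sorry, I don't understand.
```

Because the bot only matches messages it was given in advance, “What’s up?” falls straight through to the fallback, which is exactly the limitation described above.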
Quick Question?
Question: Are rule-based bots able to answer questions based on how good the rule is and how extensive the database is?
Answer: We can have a rule-based bot that understands the value that we supply. However, the limitation is that it won’t understand the intent and context of the user’s conversation with it. For example, if you are booking a flight to Paris, you might say “Book a flight to Paris”, someone else might say “I need a flight to Paris”, while someone from another part of the world may use his/her native language. An AI-based or NLP-based bot identifies the language, context and intent and then reacts accordingly. A rule-based bot only understands a pre-defined set of options.
Machine Learning (or AI) Based Chatbot
Such Chatbots are advanced forms of chatter-bots capable of holding complex conversations in real time. They process the questions (using neural network layers) before responding to them. AI-based Chatbots also learn from previous experience and reinforcement learning, so they keep on evolving. AI Chatbots are developed and programmed to meet user requests by furnishing suitable and relevant responses. The challenge, nevertheless, lies in matching each request to the most intelligent and closest response that would satisfy the user. Had the rule-based Chatbot discussed earlier been an AI Chatbot and you had asked “What’s going on?” or “What’s up?” instead of “How are you?”, you would have got a suitable response from the AI Chatbot. The earlier mentioned IBM Watson Chatbot (https://watson-assistant-demo.ng.bluemix.net/) is in fact an AI Chatbot.
How AI Chatbots Understand User Requests
Source: https://dzone.com/articles/how-to-make-a-chatbot-with-artificial-intelligence
Natural Language Processing (NLP)
➢ Do you know the technology behind Chatbots? It is called NLP, short for Natural Language Processing. Every time you throw a question at Alexa, Siri or Google Assistant, they use NLP to answer it. Now we are faced with the question: what is Natural Language Processing (NLP)?
➢ Natural language means the language of humans. It refers to the different forms in which humans communicate with each other: verbal, written, or non-verbal expressions (sentiments such as sad, happy, etc.).
➢ The technology which enables machines (software) to understand and process the natural language of humans is called Natural Language Processing (NLP).
➢ Through natural language, a machine is able to read, write, speak and translate like a human.
➢ Therefore, we can safely conclude that NLP essentially comprises natural language understanding (human to machine) and natural language generation (machine to human).
➢ NLP is a sub-area of Artificial Intelligence that deals with the capability of software to process and analyze human language, both verbal and written.
***Examples: email filters, digital phone calls, smart assistants, spell check, search results, language translation, social media monitoring.
Natural Language Processing & Neural Networks (RankBrain):
We surf the internet for things on Google without realizing how efficiently Google always responds to us with accurate answers. Not only does it come up with results to our search in a matter of seconds, it also suggests and auto-corrects our typed sentences.
1. Text Recognition (in an image or a video)
➢ You might have seen or heard about cameras that read a vehicle’s number plate.
➢ The camera/machine captures the image of the number plate, the image is passed to the neural network layer of the NLP application, and the application extracts the vehicle’s number from the image, e.g. KL 85 C 4780. However, correct extraction of data also depends on the quality of the image.
Quick Question?
➢ Question: Where and how is NLP used?
Answer: NLP is natural language processing and can be used in scenarios where static or predefined answers, options and questions may not work. In fact, if you want to understand the intent and context of the user, then it is advisable to use NLP.
➢ Let’s take the example of a pizza-ordering bot. When it comes to pre-listed pizza topping options, you can consider using the rule-based bot, whereas if you want to understand the intent of the user, where one person says “I am hungry” and another says “I am starving”, it makes more sense to use NLP, in which case the bot can understand the emotion of the user and what he/she is trying to convey.
2. Summarization by NLP
NLP can not only read and understand paragraphs or a full article, but can also summarize the article into a shorter narrative without changing its meaning. It can create the abstract of an entire article. There are two ways in which summarization takes place: one in which key phrases are extracted from the document and combined to form a summary (extraction-based summarization), and the other in which the source document is shortened (abstraction-based summarization).
https://techinsight.com.vn/language/en/text-summarization-in-machine-learning/
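A toy version of extraction-based summarization can be sketched in plain Python: score each sentence by how frequent its words are across the whole text and keep the top-scoring sentences. This is a teaching sketch, not a production summarizer:

```python
import re
from collections import Counter

def summarize(text, k=1):
    """Extraction-based summary: keep the k sentences whose words
    are most frequent across the whole document."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence):
        # A sentence's score is the total corpus frequency of its words.
        return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower()))

    return sorted(sentences, key=score, reverse=True)[:k]

doc = "Text is everywhere. NLP reads text and NLP summarizes text. Dogs bark."
print(summarize(doc, k=1))  # → ['NLP reads text and NLP summarizes text.']
```

The sentence with the most frequent words survives; everything else is dropped, which is the "extract and combine key phrases" idea from the paragraph above in its simplest form.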
3. Information Extraction
➢ Information extraction is the technology of finding specific information in a document or searching for the document itself. It automatically extracts structured information such as entities, relationships between entities, and attributes describing entities from unstructured sources.
Example
➢ For example, a school’s Principal writes an email to all the teachers in his school:
➢ “I have decided to organize a teachers’ meet tomorrow. You all are requested to please assemble in my office at 2.30 pm. I will share the agenda just before the meeting.”
➢ NLP can extract the meaningful information for the teachers:
➢ What: Meeting called by Principal
➢ When: Tomorrow at 2.30 pm
➢ Where: Principal’s office
➢ Agenda: Will be shared before the meeting
➢ Another very common example is search engines like Google retrieving results using Information Extraction.
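As a sketch of how the "When" and "Where" fields above might be pulled out of the Principal's email, here is a naive pattern-based extractor. Real IE systems use trained entity and relation models; the regular expressions here are invented for illustration only:

```python
import re

email = ("I have decided to organize a teachers' meet tomorrow. "
         "You all are requested to please assemble in my office at 2.30 pm. "
         "I will share the agenda just before the meeting.")

# Naive pattern-based extraction of the 'when' and 'where' fields.
when = re.search(r"\b(tomorrow|today)\b.*?(\d{1,2}\.\d{2}\s*[ap]m)", email, re.S)
where = re.search(r"in (my office|the \w+)", email)

print("When :", when.group(1), "at", when.group(2))  # When : tomorrow at 2.30 pm
print("Where:", where.group(1))                      # Where: my office
```

Hand-written patterns like these break as soon as the wording changes, which is why modern information extraction relies on learned models rather than fixed rules.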
4. Sentiment Analyzer
➢ It is used to quickly detect emotions in text data, emojis, etc.
➢ Sentiment analysis is contextual mining of text which identifies and extracts subjective information in source material, helping a business to understand the social sentiment of its brand, product or service while monitoring online conversations.
➢ Sentiment covers the attitudes, opinions, and emotions of a person towards a person, place, thing, or entire body of text in a document.
➢ Sentiment analysis studies the subjective information in an expression, that is, the opinions, appraisals, emotions, or attitudes towards a topic, person or entity. Expressions can be classified as positive, negative, or neutral.
➢ Sentiment analysis works by breaking a message down into topic chunks and then assigning a sentiment score to each topic.
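The scoring idea can be sketched with a tiny word lexicon: count positive and negative words and compare. The word lists below are invented for illustration; real analyzers use much larger lexicons or trained models:

```python
POSITIVE = {"good", "great", "love", "happy", "excellent"}
NEGATIVE = {"bad", "terrible", "hate", "sad", "poor"}

def sentiment(text):
    """Classify text as positive/negative/neutral by counting lexicon hits."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this great product"))  # positive
print(sentiment("the service was terrible"))   # negative
```

This toy version ignores context entirely ("not good" would score as positive), which is exactly the gap that the contextual mining described above is meant to close.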
5. Speech Processing
➢ The ability of a computer to hear human speech and to analyze and understand its content is called speech processing. When we talk to devices like Alexa or Siri, they recognize what we are saying to them. For example:
➢ You: Alexa, what is the date today?
➢ Alexa: It is 18 March 2020.
➢ What happens when we speak to our device? The device’s microphones hear our audio and plot graphs of the sound frequencies. Just as light waves have a standard frequency for each colour, so does sound. Every sound (phonetic) has a unique frequency graph. This is how NLP recognizes each sound and composes an individual’s words and sentences. The table below shows the step-wise process of how speech recognition works:
***Speech Recognition System: We nowadays have pocket assistants that can do a lot of tasks at just one command. Alexa and Google Assistant are some common examples of the voice assistants which are a major part of our digital devices.
***6. Translate (Neural Machine Translation (NMT))
➢ Google Translate is a language processor used to convert text from one language into another; it uses NMT to increase fluency and accuracy.
➢ Translate can translate multiple forms of text and media, including text, speech, and text within still or moving images.
➢ AI translators are digital tools that use advanced artificial intelligence not only to translate the words that are written or spoken, but also to translate the meaning (and sometimes sentiment) of the message.
➢ This results in greater accuracy and fewer misunderstandings than when using simple machine translation.
Activity
➢ Have you noticed that virtual assistants like Alexa and Google Assistant work very well when they are connected to the internet, and cannot function when they are offline?
➢ Can you do a web search to find out the reason? Please summarize your findings/understanding below:
Natural Language Processing Tools and Libraries for advanced learners
1. NLTK (www.nltk.org): open-source NLP toolkit
2. Stanford CoreNLP (https://stanfordnlp.github.io/CoreNLP/)
3. Apache OpenNLP (https://opennlp.apache.org/): open-source NLP toolkit for data analysis and sentiment analysis
4. spaCy (https://spacy.io/): for data extraction, data analysis, sentiment analysis, text summarization
5. AllenNLP (https://allennlp.org/): for text analysis and sentiment analysis
Computer Vision (CV)
➢ CV is a field of study that enables computers to “see”.
➢ Computer-vision projects translate digital visual data into descriptions. This data is then turned into computer-readable language to aid the decision-making process. The main objective of this domain of AI is to teach machines to collect information from pixels.
➢ Computer Vision, abbreviated as CV, is a domain of AI that describes the capability of a machine to acquire and analyze visual information and afterwards make predictions and decisions about it. The entire process involves acquiring, screening, analyzing and identifying images, and extracting information from them.
➢ This extensive processing helps computers to understand visual content and extract information from digital images. In computer vision, input to machines can be photographs, videos and pictures from thermal or infrared sensors, indicators and other sources.
➢ Image processing techniques and pattern recognition are used in CV; they are subsets of Computer Vision.
➢ To help us navigate to places, apps like UBER and Google Maps come in handy. Thus, one no longer needs to stop repeatedly to ask for directions. CV has been most effective in the following:
➢ Object Detection
➢ Optical Character Recognition
➢ Fingerprint Recognition
Quick Question?
Q1: List three places/locations (like the library, playground, etc.) in your school where surveillance cameras have been installed.
Q2: In your opinion, who does the actual surveillance: the camera or the person sitting behind the device? Justify your answer.
Q3: What features would you like to add to the surveillance camera to make it a truly smart surveillance camera?
Before we dive deep into the interesting world of computer vision, let us perform an online activity:
Activity: Open the URL in your web browser: https://studio.code.org/s/oceans/stage/1/puzzle/1
Activity details
Level 1:
➢ The first level talks about a branch of AI called Machine Learning (ML). It explains examples of ML in our daily life, like email filters, voice recognition, auto-complete text and computer vision.
Levels 2 – 4:
➢ Students can proceed through the first four levels on their own or with a partner. In order to program the AI, use the buttons to label an image as either "Fish" or "Not Fish". Each image and its label become part of the data which is used to train the Computer Vision model. Once the AI model is properly trained, it will classify an image as ‘Fish’ or ‘Not Fish’. Fish will be allowed to live in the ocean, but ‘Not Fish’ will be grabbed and stored in the box, to be taken out of the ocean/river later.
➢ At the end of Level 4, we should explain to students how AI identifies objects using the training data; in this case the fish images are our training data. This ability of AI to see and identify an object is called ‘Computer Vision’. Just as humans can see an object and recognize it, an AI application (using a camera) can also see an object and recognize it.
➢ Advanced learners may go ahead with Level 5 and beyond.
Quick Question?
Based on the activity you just did, answer the following questions:
Q1: Can we say ‘computer vision’ acts as the eye of an AI machine/robot?
Q2: In the above activity, why did we have to supply many examples of fish before the AI model actually started recognizing fish?
Q3: Can a regular camera see things?
Q4: Let me check your imagination. You have been tasked to design and develop a prototype of a robot to clean your nearby rivers/ponds. Can you use your imagination and write 5 lines about the features of this ‘future’ robot?
Computer Vision: How does a computer see an image?
Consider the image below. Looking at the picture, the human eye can easily tell that a train/engine has crashed through the station wall (https://en.wikipedia.org/wiki/Montparnasse_derailment). Do you think the computer also views the image the same way as humans? No, the computer sees an image as a 2-dimensional matrix (or a three-dimensional array in the case of a colour image).
➢ The above image is a grayscale image, which means each value in the 2D matrix represents the brightness of a pixel. The numbers in the matrix range between 0 and 255, wherein 0 represents black, 255 represents white, and the values between them are shades of grey.
➢ For example, the above image has been represented by a 9x9 matrix. It shows 9 pixels horizontally and 9 pixels vertically, making a total of 81 pixels (this is a very low pixel count for an image captured by a modern camera; just treat this as an example).
➢ In the grayscale image, each pixel represents the brightness or darkness of the pixel, which means the grayscale image is composed of only one channel*. But a colour image is a mix of the three primary colours (Red, Green and Blue), so a colour image has three channels*.
➢ Since colour images have three channels*, computers see a colour image as a 3-dimensional array. If we had to represent the above locomotive image in colour, the 3D matrix would be 9x9x3. Each pixel in this colour image has three numbers (ranging from 0 to 255) associated with it. These numbers represent the intensity of red, green and blue in that particular pixel.
➢ * Channel refers to the number of colours in the digital image. For example, a coloured image has 3 channels: the red channel, the green channel and the blue channel. An image is usually represented as height x width x channels, where channels is 3 for a coloured image and 1 for a grayscale image.
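The "image as a matrix of numbers" idea can be shown directly in Python with nested lists. The pixel values below are invented; 0 is black and 255 is white:

```python
# A tiny 3x3 grayscale "image": one brightness value (0-255) per pixel.
gray = [
    [  0, 128, 255],
    [ 64, 192,  32],
    [255,   0, 100],
]

# The colour version of the same image: each pixel holds [R, G, B]
# intensities, so the structure is 3 x 3 x 3 (height x width x channels).
# Here the grey values are simply repeated across all three channels.
colour = [[[v, v, v] for v in row] for row in gray]

print(len(gray), "x", len(gray[0]))  # 3 x 3
print(colour[0][2])                  # [255, 255, 255] -> a white pixel
```

A real photo works the same way, just with far more pixels: the 9x9 example in the slide is an 81-number matrix, and the 9x9x3 colour version holds 243 numbers.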
Computer Vision: Primary Tasks
There are primarily four tasks that Computer Vision accomplishes:
1. Image Classification
2. Classification + Localization
3. Object Detection
4. Instance Segmentation
1. Image Classification
➢ Image classification is a process in Computer Vision where an image is classified depending on its visual content. Basically, a set of classes (objects to identify in images) is defined and a model is trained to recognize them with the help of labelled example photos. In simple terms, it takes an image as input and outputs a class (cat, dog, etc.), or a probability over classes from which one has the highest chance of being correct.
➢ For humans, this ability comes naturally and effortlessly, but for machines it is a fairly complicated process. For example, the cat image shown below has a size of 248x400x3 pixels (297,600 numbers). (Source: http://cs231n.github.io/assets/classify.png)
2. Classification and Localization
Once the object is classified and labelled, the localization task is invoked, which puts a bounding box around the object in the picture. The term ‘localization’ refers to where the object is in the image. Say we have a dog in an image: the algorithm predicts the class and creates a bounding box around the object in the image.
Source: https://medium.com/analytics-vidhya/image-classification-vs-object-detection-vs-image-segmentation-f36db85fe81
3. Object Detection
When human beings see a video or an image, they immediately identify the objects present in them. This intelligence can be duplicated using a computer. If we have multiple objects in the image, the algorithm will identify all of them and localize (put a bounding box around) each one of them. You will therefore have multiple bounding boxes and labels around the objects.
Source: https://pjreddie.com/darknet/yolov1/
4. Instance Segmentation
Instance segmentation is the technique of CV which helps in identifying and distinctly outlining each object of interest appearing in an image. This process creates a pixel-wise mask for each object in the image and provides a far more granular understanding of the object(s) in the image. As you can see in the image below, objects belonging to the same class are shown in multiple colours.
Source: https://towardsdatascience.com/detection-and-segmentation-through-convnets-47aa42de27ea
  • 34. Activity : Let’s have some fun now! ➢ Given a grayscale image, one simple way to find edges is to look at two neighboring pixels and take the difference between their values. If it’s big, this means the colours are very different, so it’s an edge. ➢ The grids below are filled with numbers that represent a grayscale image. See if you can detect edges the way a computer would do it. ➢ Try it yourself! Grid 1: If the values of two neighboring squares on the grid differ by more than 50, draw a thick line between them. Grid 2: If the values of two neighboring squares on the grid differ by more than 40, draw a thick line between them.
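The grid activity above can also be done by a short program. The sketch below applies the same rule as the exercise: compare each pair of horizontally neighbouring cells and record an edge wherever their values differ by more than the threshold. The grid values are made up for illustration.

```python
# Toy edge detector for the grid activity: mark an edge between a cell
# and its right-hand neighbour when their grayscale values differ by
# more than `threshold`.
def find_edges(grid, threshold):
    """Return (row, col) pairs where cell (row, col) and the cell to
    its right differ by more than `threshold`."""
    edges = []
    for r, row in enumerate(grid):
        for c in range(len(row) - 1):
            if abs(row[c] - row[c + 1]) > threshold:
                edges.append((r, c))
    return edges

# An invented grid: bright on the left, dark on the right.
grid1 = [
    [200, 190, 40],
    [210, 205, 35],
]
print(find_edges(grid1, 50))   # edges where brightness jumps sharply
```

Running this marks an edge between the second and third columns in both rows, exactly where a thick line would be drawn by hand.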
  • 35. Weather Predictions ➢ Weather forecasting deals with gathering satellite data, identifying patterns in the observations made, and then computing the results to get accurate weather predictions. This is done in real time to prevent disasters. ➢ Artificial Intelligence uses computer-generated mathematical programs and computer vision technology to identify patterns so that relevant weather predictions can be made. Scientists are now using AI for weather forecasting to obtain refined and accurate results, fast! ➢ In the current model of weather forecasting, scientists gather satellite data (temperature, wind, humidity etc.) and compare and analyze this data against a mathematical model that is based on past weather patterns and the geography of the region. ➢ This model, being primarily human dependent (the mathematical model cannot be adjusted in real time), faces many challenges in forecasting. ➢ This has resulted in scientists preferring AI for weather forecasting. One of the key advantages of the AI based model is that it adjusts itself with the dynamics of atmospheric changes.
  • 36. ***CNN (Convolutional Neural Network) ➢ A Convolutional Neural Network (CNN) is a type of artificial neural network used in image recognition, processing, classification and segmentation that is specifically designed to process pixel data. ➢ CNNs have their “neurons” arranged more like those of the visual cortex, the area responsible for processing visual stimuli in humans and other animals. ➢ Convolutional Neural Networks (CNNs) allow researchers to generate accurate rainfall predictions six hours ahead of when the precipitation occurs. Applications of CNN: ✓ Understanding Climate, Visual Search, Analyzing Documents. ✓ Advertising, Image Tagging, Character Recognition. ✓ Predictive Analytics - Health Risk Assessment. ✓ Historic and Environmental Collections.
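The building block that gives a CNN its name is the convolution: a small grid of weights (a "kernel") is slid over the image, and at each position a weighted sum of the pixels underneath is taken. The sketch below shows this operation in plain Python on an invented 4x3 image; real CNNs learn many such kernels automatically rather than having them written by hand.

```python
# A minimal sketch of the convolution at the heart of a CNN: slide a
# small kernel over the image and take a weighted sum at each position.
def convolve2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            out[i][j] = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw)
            )
    return out

# An invented image that is dark on the left and bright on the right,
# and a kernel that responds where brightness jumps left-to-right.
image = [
    [0, 0, 255, 255],
    [0, 0, 255, 255],
    [0, 0, 255, 255],
]
kernel = [[-1, 1]]
print(convolve2d(image, kernel))
```

The output is large exactly at the dark-to-bright boundary, which is why such kernels act as edge detectors, the same idea as in the earlier grid activity.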
  • 37. The AI based commodity forecasting system can produce transformative results, such as: 1. The accuracy level would be much higher than the classical forecasting model. 2. AI can work on a broad range of data, due to which it can reveal new insights. 3. The AI model is not rigid like the classical model, therefore its forecasting is always based on the most recent input. Self Driving Car: (Computer Vision & Neural Networks) ➢ A self-driving car, also known as an autonomous vehicle (AV), driverless car, robot car, or robotic car, is a vehicle that is capable of sensing its environment and moving safely with little or no human input. ➢ Self-driving cars combine a variety of sensors to perceive their surroundings, such as radar, lidar, sonar, GPS, odometry and inertial measurement units.
  • 38.
  • 39. Question 1: What should the self-driving car do in the above scenario? ____________________________________________________________________ Question 2: In the age of self-driving cars, do we need zebra crossings? After all, self- driving cars can potentially make it safe to cross a road anywhere. _____________________________________________________________________
  • 40. Self-driving cars, or autonomous cars, work on a combination of technologies. Here is a brief introduction to them: 1. Computer Vision: Computer vision allows the car to see / sense its surroundings. Basically, it uses: ‘Camera’ – captures pictures of the surroundings, which then go to a deep learning model that processes the images. This helps the car know when the light is red or where there is a zebra crossing, etc. ‘Radar’ – a detection system used to find out how far or close the other vehicles on the road are. ‘Lidar’ – a sensing method with which the distance to a target is measured. The lidar (which emits laser rays) is usually placed in a spinning unit on top of the car so that it can spin around very fast, looking at the environment all around. Here you can see a lidar placed on top of the Google car.
  • 41. 2. Deep Learning: This is the brain of the car, which takes driving decisions based on the information gathered through various sources like computer vision. Robotics: The self-driven car has a brain and vision, but its brain still needs to connect with the other parts of the car to control and navigate effectively. Robotics helps transmit the driving decisions (made by deep learning) to the steering, brakes, throttle etc. Navigation: Using GPS, stored maps etc., the car navigates busy roads and hurdles to reach its destination.
  • 42. ***Price Forecast for Commodities: ➢ Forecasting Energy Commodity Prices Using Neural Networks. ➢ The use of neural networks as an advanced signal processing tool may be successfully used to model and forecast energy commodity prices, such as crude oil, coal, natural gas, copper and electricity prices. ➢ A commodity's price is determined primarily by the forces of supply and demand for the commodity in the market. If the weather in a certain region is going to affect the supply of a commodity, the price of that commodity will be affected directly. Examples: corn, soybeans, and wheat. ➢ Copper is one of the oldest and most commonly used commodities. ChAI came together as a team with the goal of leveraging the latest AI techniques in order to provide the most accurate forecasts over a range of commodities, focusing on: ❑ Prediction Accuracy. ❑ Confidence Bound Improvement. ❑ Explainability of our models.
  • 43. Google Duplex (Speech Recognition System) What Is Google Duplex? Google Duplex technology is created to sound natural, to make the conversation experience more friendly. It is important that clients and businesses have a good experience with this service, and transparency is a key part of that. The service needs to be clear about the purpose of the call so organizations understand the context. What will I use Google Duplex for? Google Duplex won’t say whatever you instruct it to — it only works for specific kinds of over-the-phone requests. The two illustrations Google introduced at I/O involved setting up a hair salon appointment and a reservation at a restaurant. Another illustration is getting information about business hours. In other words, it only works for those scenarios because it has been trained for them, and cannot speak freely in a general context. How does Google Duplex work? Duplex relies upon a machine learning model drawn from real-world data - in this case, phone conversations. It combines the company’s most recent breakthroughs in speech recognition and text-to-speech synthesis with a range of contextual details, including the purpose and history of the conversation. To make Duplex sound natural, Google has even attempted to reproduce the flaws of common human speech. Duplex inserts a short delay before issuing certain responses and says “uh” and “well” as it artificially reviews data. The outcome is phenomenally accurate with amazingly short latency. https://www.youtube.com/watch?v=D5VN56jQMWM
  • 44.
  • 45. Characteristics and Types of AI ➢ Artificial Intelligence is autonomous and can make independent decisions — it does not require human input, interference or intervention, and works silently in the background without the user’s knowledge. ➢ These systems do not depend on human programming; instead they learn on their own from the data they experience. ➢ It has the capacity to predict and adapt – its ability to understand data patterns is used for future predictions and decision-making. ➢ It is continuously learning – it learns from data patterns. ➢ AI is reactive – it perceives a problem and acts on that perception. ➢ AI is futuristic – its cutting-edge technology is expected to be used in many more fields in the future. There are many applications and tools, backed by AI, which have a direct impact on our daily life. So, it is important for us to understand, in a broad sense, what kinds of systems can be developed using AI.
  • 46. ***Characteristics and Types of AI The ideal characteristic of artificial intelligence is its ability to rationalize and take actions that have the best chance of achieving a specific goal. Eliminate Dull Tasks ➢ We all have days where we have to do busy work: dull, boring tasks that may be necessary but are neither important nor valuable. Fortunately, machine learning and AI are beginning to address some of this through human-computer interaction technologies. ➢ More robust use of this technology can be found in companies like Dialogflow, a subsidiary of Google, formerly known as Api.ai. Such companies build conversational interfaces that use machine learning to understand and meet customer needs. ➢ Virtual assistants like Alexa, Siri, Cortana, and Google Assistant perform basic tasks, conversing with the user in natural language. ➢ Want to send 5,000 calendar invites, or book a flight from San Francisco to Paris? Need a reliable method to answer basic customer questions online, instantaneously? Solutions like those offered by Dialogflow cut through the dull work that would otherwise require hours of human time. ➢ SunExpress, a Turkey-based airline, has become the first such airline in the world to allow users to book tickets using the Alexa voice assistant from Amazon, according to Simple Flying.
  • 47. ***Focus Diffuse Problems ➢ The expanse of available data has gone beyond what human beings are capable of synthesizing, making it a perfect job for machine learning and AI. ➢ For example, Elucify helps sales teams automatically update their contacts. ➢ By simply clicking a button, information is pulled from a multitude of public and private data sources. ➢ Elucify takes all of that diffuse data, compares it, and makes changes where necessary. Distribute Data ➢ Modern cybersecurity drives a need for comparing terabytes of inside data with a comparable amount of outside data. ➢ This has been a very difficult problem to solve, but machine learning and AI are perfect tools for the job. ➢ Vectra Networks is a security company that uses AI to fight cyber-attacks. ➢ By comparing outside network data with the logs inside the enterprise, Vectra Networks can automate the process of detecting attacks. ➢ As the problems of cyber security change and increase, organizations like these will be vital in addressing the problems of distributed data.
  • 48. ***Solve Dynamic Data ➢ Now, some forward-looking companies are adopting AI to solve the dynamic problems of human behavior. ➢ A startup called GitPrime utilizes code data to determine the most productive work patterns for software engineers. ➢ These complex patterns help organizations move faster and be more responsive to changing needs. ➢ Finding that human influence in millions of lines of code would have been impossible in the past, but machine learning and AI can help us see it.
  • 49. ***Prevent Dangerous Issues ➢ Brand-new industrial systems combine AI-powered robots, 3D printing, and human oversight. Not only do companies realize billions in savings with these systems, but they will also save lives. ➢ This process not only reduces cost and improves efficiency for the company, but also creates much safer environments for the human workers. ➢ The dangerous elements of manufacturing jobs are taken over by machines, while the brains behind them remain safe and human. ➢ Machine learning and AI provide an excellent approach to solving many of these dull, diffuse, distributed, dynamic, and dangerous problems.
  • 50. Recommendation Systems ➢ A recommendation system recommends or suggests products, services, or information to users based on analysis of data, taking into account a number of factors such as the history, behavior, preferences, and interests of the user. It is a data-driven AI which needs data for training. ➢ A recommender system, or a recommendation system (sometimes replacing 'system' with a synonym such as platform or engine), is a subclass of information filtering system that seeks to predict the "rating" or "preference" a user would give to an item. ➢ They are primarily used in commercial applications. ➢ For instance, when you watch a video on YouTube, it suggests many videos that match or suit the videos that you generally search for, prefer, or have been watching in the past. ➢ So, in short, it gathers data from the history of your activities and behavior. Let us now take another example: Flipkart recommends the types of products that you normally prefer or buy. ➢ All these are classic examples of recommendation, but you must know that Machine Learning (AI) is working behind all this to give you a good user experience.
  • 51. ➢ Recommender systems engines help the users to get personalized recommendations, helps users to take correct decisions in their online transactions, increase sales and redefine the users web browsing experience, retain the customers, enhance their shopping experience. ➢ There are majorly six types of recommender systems which work primarily in the Media and Entertainment industry: ➢ Collaborative recommender system ➢ Content-based recommender system ➢ Demographic based recommender system ➢ Utility based recommender system ➢ Knowledge based recommender system ➢ Hybrid recommender system.
  • 52.
  • 53. ***Collaborative Recommender System: ➢ Collaborative recommender systems aggregate ratings or recommendations. ➢ If people had similar interests in the past, they will have similar interests in the future. Collaborative filtering uses algorithms to filter data from user reviews to make personalized recommendations for users with similar preferences. ➢ Collaborative filtering is based on the assumption that people who agreed in the past will agree in the future, and that they will like the same kind of objects as they liked in the past. Content Based Recommender System: ➢ A content-based recommender learns a profile of the new user’s interests based on the features present in objects the user has rated. ➢ It is basically a keyword-specific recommender system; here keywords are used to describe the items. ➢ Thus, in a content-based recommender system, the algorithms used are such that they recommend to users items similar to those the user has liked in the past or is examining currently.
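The collaborative filtering idea described above can be sketched in a few lines of Python: find the user whose past ratings most resemble the target user's (here using cosine similarity), then recommend items that neighbour liked but the target has not yet rated. The user names, items and ratings are all invented for illustration.

```python
# Hypothetical sketch of user-based collaborative filtering.
import math

ratings = {                       # made-up user -> {item: rating (1-5)}
    "asha":   {"film_a": 5, "film_b": 4, "film_c": 1},
    "bala":   {"film_a": 5, "film_b": 5, "film_d": 4},
    "chitra": {"film_c": 5, "film_d": 1},
}

def cosine(u, v):
    """Cosine similarity between two users' rating dictionaries."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv)

def recommend(user):
    """Suggest items the most similar user rated highly (>= 4)."""
    others = [(cosine(ratings[user], ratings[o]), o)
              for o in ratings if o != user]
    _, neighbour = max(others)          # most similar user
    return [item for item, r in ratings[neighbour].items()
            if r >= 4 and item not in ratings[user]]

print(recommend("asha"))
```

Because "asha" and "bala" agreed on the films they both rated, "asha" is recommended the film only "bala" has seen and liked, which is exactly the "people who agreed in the past will agree in the future" assumption in action.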
  • 54. ***Demographic Based Recommender System: ➢ In Demographic-based recommender system the algorithms first need a proper market research in the specified region accompanied with a short survey to gather data for categorization. ➢ Demographic techniques form “people-to-people” correlations like collaborative ones, but use different data. ➢ The benefit of a demographic approach is that it does not require a history of user ratings like that in collaborative and content based recommender systems.
  • 55. ***Utility Based Recommender System: ➢ Utility based recommender system makes suggestions based on computation of the utility of each object for the user. Of course, the central problem for this type of system is how to create a utility for individual users. ➢ In utility based system, every industry will have a different technique for arriving at a user specific utility function and applying it to the objects under consideration. ➢ The main advantage of using a utility based recommender system is that it can factor non-product attributes, such as vendor reliability and product availability, into the utility computation. ➢ This makes it possible to check real time inventory of the object and display it to the user.
  • 56. ***Knowledge Based Recommender System: ➢ This type of recommender system attempts to suggest objects based on inferences about a user’s needs and preferences. ➢ Knowledge based recommendation works on functional knowledge: the system has knowledge about how a particular item meets a particular user need, and can therefore reason about the relationship between a need and a possible recommendation. Hybrid Recommender System: ➢ Combining any two of the systems in a manner that suits a particular industry is known as a Hybrid Recommender system. ➢ This is the most sought-after recommender system that many companies look for, as it combines the strengths of two or more recommender systems and also eliminates the weaknesses which exist when only one recommender system is used. ➢ There are several ways in which the systems can be combined, such as:
  • 57. Weighted Hybrid Recommender: ➢ The P-Tango system combines collaborative and content-based recommendation systems, giving them equal weight at the start but gradually adjusting the weighting as predictions about the user ratings are confirmed or disconfirmed. Switching Hybrid Recommender: ➢ Suppose we combine the content-based and collaborative recommender systems; the switching hybrid recommender can first deploy the content-based recommender system, and if that doesn’t work, it will deploy the collaborative one. Mixed Hybrid Recommender: ➢ Where it is possible to make a large number of recommendations simultaneously, we should go for a Mixed recommender system. Here, recommendations from more than one technique are presented together, so the user can choose from a wide range of recommendations.
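The weighted hybrid idea above amounts to simple arithmetic: each model scores every item, and the final score is a weighted blend. The sketch below shows this with invented scores; the weights could start equal (as in the P-Tango description) and later shift toward whichever model proves more accurate.

```python
# Sketch of a weighted hybrid recommender: blend scores from a
# collaborative model and a content-based model. All numbers invented.
def hybrid_scores(collab, content, w_collab=0.5, w_content=0.5):
    """Weighted combination of two {item: score} dictionaries."""
    items = set(collab) | set(content)
    return {i: w_collab * collab.get(i, 0) + w_content * content.get(i, 0)
            for i in items}

collab_scores  = {"item1": 0.9, "item2": 0.2}   # from collaborative model
content_scores = {"item1": 0.4, "item2": 0.8}   # from content-based model

# Equal weights to start with...
print(hybrid_scores(collab_scores, content_scores))
# ...then shifted toward the collaborative model if it proves better.
print(hybrid_scores(collab_scores, content_scores, 0.8, 0.2))
```

Changing the weights changes which items rise to the top, which is exactly the adjustment a weighted hybrid performs over time.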
  • 58. Data Driven AI ➢ Recent developments in cheap data storage (hard disks etc.), fast processors (CPU, GPU or TPU) and sophisticated deep learning algorithms have made it possible to extract huge value from data, and that has led to the rise of data-centric AI systems. ➢ Such AI systems can predict what will happen next based on what they have experienced so far, very efficiently. At times these systems have managed to outperform humans. ➢ The “data-driven” approach makes strategic decisions based on data analysis and interpretation. ➢ Data-driven AI systems are trained with large datasets before they make predictions, forecasts or decisions. ➢ However, the success of such systems depends largely on the availability of correctly labelled large datasets. ➢ Almost every AI system we see around us is a data-driven AI system.
  • 59. ➢ Data-driven learning is a learning approach driven by research-like access to data. ➢ Data-driven science is an interdisciplinary field that applies scientific methods to extract knowledge from data. ➢ Data-driven testing is the creation of test scripts to run together with their related data sets in a framework. The framework provides re-usable test logic to reduce maintenance and improve test coverage.
  • 60. Autonomous System ➢ Autonomous system is a technology which understands the environment and reacts without human intervention. ➢ The autonomous systems are often based on Artificial intelligence. ➢ For example, self-driving car, space probe rover like Mars rover or a floor cleaning robot etc. are autonomous system. ➢ An IoT device like a smart home system is another example of an autonomous system. ➢ Because it does things like gather information about the temperature of a room, then uses that information to determine whether the AC needs to be switched on or not, then executes a task to achieve the desired result. ➢ Autonomous systems are defined as systems that are able to accomplish a task, achieve a goal, or interact with its surroundings with minimal to no human involvement. ➢ It is also essential that these systems be able to predict, plan, and be aware of the world around them.
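The smart home example above is a tiny "sense, decide, act" loop. The sketch below shows that loop in Python; the temperature readings and the 28 °C switch-on threshold are assumptions made purely for illustration.

```python
# The smart-home example as a minimal sense-decide-act loop: read the
# room temperature, decide, then "act" by switching the AC on or off.
def decide_ac(temperature_c, threshold_c=28):
    """Return True if the AC should be switched on.
    The 28 degree C threshold is an illustrative assumption."""
    return temperature_c > threshold_c

# Invented sensor readings gathered over time.
readings = [24, 26, 30, 33]
actions = ["AC ON" if decide_ac(t) else "AC OFF" for t in readings]
print(actions)
```

A real autonomous system replaces the fixed rule with learned behaviour and adds prediction and planning, but the overall loop of perceiving the environment and acting without human intervention is the same.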
  • 61. ➢ The Intelligent Autonomous Systems group investigates methods for cognition-enabled robot control. ➢ The research is at the intersection of robotics and Artificial Intelligence and includes methods for intelligent perception, expert object manipulation, plan-based robot control, and knowledge representation for robots. ➢ Robots performing complex tasks in open domains, such as assisting humans in a household or collaboratively assembling products in a factory, need to have cognitive capabilities for interpreting their sensor data, understanding scenes, selecting and parametrizing their actions, recognizing and handling failures and interacting with humans. ➢ In our research, we are developing solutions for these kinds of issues and implement and test them on the robots in our laboratory.
  • 63.
  • 64. ***Autonomous System An autonomous system (AS) is a collection of connected Internet Protocol (IP) routing prefixes under the control of one or more network operators on behalf of a single administrative entity or domain that presents a common, clearly defined routing policy to the internet. An autonomous system number (ASN) is a unique number that's available globally to identify an autonomous system and which enables that system to exchange exterior routing information with other neighboring autonomous systems. A unique ASN is allocated to each AS for use in Border Gateway Protocol (BGP) routing. Types of automation: ➢ Fixed automation ➢ Programmable automation ➢ Flexible automation
  • 65. ***FIXED AUTOMATION It is a system in which the sequence of processing (or assembly) operations is fixed by the equipment configuration. The operations in the sequence are usually simple. It is the integration and coordination of many such operations into one piece of equipment that makes the system complex. The typical features of fixed automation are: ➢ High initial investment for custom–engineered equipment; ➢ High production rates; ➢ The economic justification for fixed automation is found in products with very high demand rates and volumes. ➢ The high initial cost of the equipment can be spread over a very large number of units, thus making the unit cost attractive compared to alternative methods of production. ➢ Examples of fixed automation include mechanized assembly and machining transfer lines.
  • 66. ***PROGRAMMABLE AUTOMATION In this, the production equipment is designed with the capability to change the sequence of operations to accommodate different product configurations. The operation sequence is controlled by a program, which is a set of instructions coded so that the system can read and interpret them. New programs can be prepared and entered into the equipment to produce new products. Some of the features that characterize programmable automation are: ➢ High investment in general-purpose equipment; ➢ Low production rates relative to fixed automation; ➢ Flexibility to deal with changes in product configuration; and ➢ Most suitable for batch production. ➢ To produce each new batch of a different product, the system must be reprogrammed with the set of machine instructions that correspond to the new product. The physical setup of the machine must also be changed over: tools must be loaded, fixtures must be attached to the machine table, and machine settings must be entered. This changeover procedure takes time. Consequently, the typical cycle for a given product includes a period during which the setup and reprogramming take place, followed by a period in which the batch is produced. Examples of programmable automation include numerically controlled machine tools and industrial robots.
  • 67. ***FLEXIBLE AUTOMATION It is an extension of programmable automation. A flexible automated system is one that is capable of producing a variety of products (or parts) with virtually no time lost for changeovers from one product to the next. There is no production time lost while reprogramming the system and altering the physical setup (tooling, fixtures, and machine settings). Consequently, the system can produce various combinations and schedules of products instead of requiring that they be made in separate batches. The features of flexible automation can be summarized as follows: ➢ High investment for a custom-engineered system. ➢ Continuous production of variable mixtures of products. ➢ Medium production rates. ➢ Flexibility to deal with product design variations. The essential features that distinguish flexible automation from programmable automation are: ➢ The capacity to change part programs with no lost production time; and ➢ The capability to change over the physical setup, again with no lost production time. ➢ Advances in computer systems technology are largely responsible for this programming capability in flexible automation.
  • 69. ➢ Human-Level Intelligence has come to be known as Strong AI or Artificial General Intelligence (AGI). ➢ Humans have the ability to gain and apply general knowledge to problem solving across a wide range of subject domains. ➢ Some of this knowledge is acquired by all human beings, such as walking and talking, etc. ➢ Most of the time, when developers claim they've created a “human-like” AI, what they mean is that they've automated a task that humans are often employed for. ➢ Artificial General Intelligence (AGI) refers to a machine that can apply knowledge and skills within different contexts – in short, one that can learn by itself and work out problems like a human. ➢ Sophia: A humanoid robot developed by Hanson Robotics, one of the most human-like robots. ➢ Sophia is a realistic humanoid robot capable of displaying many human-like facial expressions, holding human-like conversations and interacting with people. ➢ It's designed for research, education, and entertainment, and helps promote public discussion about AI ethics and the future of robotics.
  • 70. Activity ➢ Does this picture remind you of something? Can you please share your thoughts about this image in the context of AI? -------------------------------------------------------------------------------------------------------- ➢ The dream of making machines that think and act like humans is not new. We have long tried to create intelligent machines to ease our work. ➢ These machines perform at greater speed, have higher operational ability and accuracy, and are also more capable of undertaking highly tedious and monotonous jobs compared to humans. ➢ Humans do not always depend on pre-fed data as required for AI. Human memory, its computing power, and the human body as an entity may seem insignificant compared to the machine’s hardware and software infrastructure. ➢ But the depth and layers present in our brains are far more complex and sophisticated, and machines still cannot beat them, at least in the near future. ➢ The AI machines we see these days are not true AI. These machines are super good at delivering a specific type of job.
  • 71. Think of Jarvis in “Iron Man” and you’ll get a sneak peek of Artificial General Intelligence (AGI) and that is nothing but human-like AI. Although we still have a long way to go in attaining human-like AI.
  • 72. Non-Technical Explanation of Deep Learning ➢ The excitement behind artificial intelligence and deep learning is at its peak. At the same time there is growing perception that these terms are meant for techies to understand, which is a misconception. ➢ Here is a brief overview of deep learning in simple non-technical terms! ➢ Imagine the below scenario. ➢ A courier delivery person (X) has to deliver a box to a destination, which is right across the road from the courier company. In order to carry out the task, X will pick up the box, cross the road and deliver the same to the concerned person. ➢ In neural network terminology, the activity of ‘crossing the road’ is termed as neuron. So, input ‘X’ goes into a single neuron which is ‘Crossing the road’ to produce the output/goal which in this case is the ‘Final Destination’. ➢ In this example the starting and the ending are connected with a straight line. This is an example of a simple neural network.
  • 73. ➢ The life of a delivery man is not so simple and straightforward in reality. They start from a particular location in the city, and go to different locations across the city. For instance, as shown below, the delivery man could choose multiple paths to fulfil his/her deliveries: 1. Option 1: ‘Address 1’ to ‘Destination 1’ and then to the ‘Final Destination’. 2. Option 2: ‘Address 2’ to ‘Destination 2’ and then to the ‘Final Destination’. 3. Option 3: ‘Address 3’ to ‘Destination 3’ and then to the ‘Final Destination’. ➢ All of the above (and more combinations) are valid paths that take the delivery man from ‘start’ to ‘finish’.
  • 74. However, some routes (or “paths”) are better than the rest. Let us assume that all paths take the same time, but some are really bumpy while some are smooth. Maybe the path chosen in ‘Option 3’ is bumpy and the delivery man has to burn a lot more fuel (a loss) on the way! Whereas ‘Option 2’ is perfectly smooth, so the delivery man does not lose anything! In this case, the delivery man would definitely prefer ‘Option 2’ (Address 2 – Destination 2 – Final Destination), as he/she will not want to burn extra fuel! So, the goal of Deep Learning, then, is to assign ‘weights’ to each path from address to destination so that the delivery man will find the most optimum path.
  • 75. How does this work? ➢ As we discussed, the goal of the deep learning exercise is to assign weights to each path from start to finish (start – address – destination – Final Destination). To do this for the example discussed above, each time a delivery man goes from start to finish, the fuel consumption for every path is computed. Based on this parameter (in reality the number of parameters can go up to 100 or more), a cost for each path is calculated, and this cost is called the ‘Loss Function’ in deep learning. ➢ As we saw above in our example, ‘Option 3’ (Address 3 – Destination 3 – Final Destination) lost a lot of fuel, so that path had a large loss function. However, ‘Option 2’ (Address 2 – Destination 2 – Final Destination) cost the delivery man the least fuel, giving it a small loss function as well, and making it the most efficient route! ➢ The above picture is a representation of a neural network of the most efficient path to be taken by the delivery man to reach the goal, i.e. to go from starting point ‘X’ to the ‘Final Destination’ using the best possible route (which is most fuel efficient). This is a very small neural network consisting of 3 neurons and 2 layers (address and destination).
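The delivery analogy can be written out as code: each path has a "loss" (the fuel burnt), and picking the route amounts to finding the path with the smallest loss, just as training a network adjusts weights to minimise the loss function. The fuel figures below are invented for illustration.

```python
# The delivery analogy as code: each path carries a loss (fuel burnt);
# the best route is simply the one with the smallest loss.
paths = {
    "Address 1 -> Destination 1": 7.0,
    "Address 2 -> Destination 2": 2.0,   # smooth road, little fuel lost
    "Address 3 -> Destination 3": 9.5,   # bumpy road, lots of fuel lost
}

best_path = min(paths, key=paths.get)    # smallest loss wins
print(best_path)
```

Real deep learning does this over millions of weighted connections at once, nudging all the weights a little in the direction that reduces the loss, rather than comparing a handful of fixed routes.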
  • 76. ➢ In reality neural networks are not as simple as the above discussed example. They may look like the one shown below! (Source - http://www.rsipvision.com/exploring-deep-learning/ ) ➢ The term ‘deep’ in Deep Learning refers to the various layers you will find in a neural network. This closely relates to how our brains work. ➢ The neural network shown above, is a network of 3 layers (hidden layer1, hidden layer 2 and hidden layer 3) with each layer having 9 neurons each.
  • 77. ***Non-Technical Explanation of Deep Learning: ➢ These four capabilities: classifying what we perceive, predicting things in a sequence, reasoning, and planning, are some of the key aspects of intelligence. ➢ As children, we were shown just a few pictures of cats and dogs, and from that we could easily spot a new picture of a cat. We learnt to classify the things we perceive. ➢ Also, we learnt to predict what would happen next in a sequence of events. For instance, in a peekaboo game, when someone hid their face with their hands and took their hands away, we expected to see their face. ➢ As we aged, we learnt not to just go by what we see or perceive, but to reason. For instance, even if we saw a cat with only three legs, we would reason that it perhaps lost a leg in an unfortunate accident or was born with a congenital deformity. We would not conclude it is a new species of cat. ➢ Lastly, we learnt very early to start planning to get what we want from our parents/guardians.
• 78. The Future with AI, and AI in Action ➢ Transportation: Although it could take a decade or more to perfect them, autonomous cars will one day ferry us from place to place. The area where AI may have the biggest impact in the near future is self-driving cars. Unlike humans, AI drivers never look down at the radio, put on mascara or argue with their kids in the backseat. Thanks to Google, autonomous cars are already here, but watch for them to be universal by 2030. Driverless trains already rule the rails in European cities, and Boeing is building an autonomous jetliner (pilots are still required to put info into the system). ➢ Manufacturing: AI-powered robots work alongside humans to perform a limited range of tasks like assembly and stacking, and predictive-analysis sensors keep equipment running smoothly. ➢ Medicine: AI algorithms will enable doctors and hospitals to better analyze data and customize their health care to the genes, environment and lifestyle of each patient. From diagnosing brain tumors to deciding which cancer treatment will work best for an individual, AI will drive the personalized medicine revolution. ➢ Education: Textbooks are digitized with the help of AI, early-stage virtual tutors assist human instructors, and facial analysis gauges the emotions of students to help determine who’s struggling or bored and better tailor the experience to their individual needs.
• 79. Entertainment: ➢ In the future, you could sit on the couch and order up a custom movie featuring virtual actors of your choice. Meanwhile, film studios may have a future without flops: sophisticated predictive programs will analyze a film script’s storyline and forecast its box office potential. ➢ The film industry has given us a fair share of artificial intelligence movies with diverse character roles — big to small, anthropomorphic to robotic, and evil to good. And over the years, this technology has advanced to make its way into the media and entertainment industry. From video games to movies and more, AI technologies have been playing a crucial role in improving efficiencies and contributing to growth. ➢ Immersive visual experience is another popular application of artificial intelligence. Virtual Reality (VR) and Augmented Reality (AR) are all the rage nowadays. Development of VR content for games, live events, reality shows and other entertainment has been gaining immense popularity and attention. Vital Tasks: ➢ AI will give assistants the tools to help older people stay independent and live in their own homes longer. AI tools will keep nutritious food available, safely reach objects on high shelves, and monitor movement in a senior’s home. ➢ These tools could mow lawns, keep windows washed and even help with bathing and hygiene. Many other jobs that are repetitive and physical are perfect for AI-based tools. But AI-assisted work may be even more critical in dangerous fields like mining, firefighting, clearing mines and handling radioactive materials.
• 80. Chatbots: ➢ AI-enabled chatbots have become a crucial part of sales and customer service. Helping brands serve their customers and improve the entire support experience, chatbots are bringing a new way for businesses to communicate with the world. Earlier versions of chatbots were configured with a limited number of inputs and weren’t able to process any queries outside of those parameters. As a result, the communication experience was far less appealing and often frustrating for customers. ➢ However, artificial intelligence and its subset of technologies have transformed the overall functionality and intelligence of chatbots. AI-powered Machine Learning (ML) algorithms and Natural Language Processing (NLP) gave chatbots the ability to learn and mimic human conversation. This advanced level of processing makes chatbots extremely helpful in a wide variety of applications, ranging from digital commerce and banking to research, sales and brand building. ➢ Time-consuming and repetitive manual business tasks can be handled by AI-enabled chatbots, thereby improving efficiency, reducing personnel costs, eliminating human error and ensuring quality services. Cybersecurity: ➢ There were about 707 million cybersecurity breaches in 2015, and 554 million in the first half of 2016 alone. Companies are struggling to stay one step ahead of hackers. USC experts say the self-learning and automation capabilities enabled by AI can protect data more systematically and affordably, keeping people safer from terrorism or even smaller-scale identity theft. ➢ AI-based tools look for patterns associated with malicious computer viruses and programs before they can steal massive amounts of information or cause disasters.
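The limitation of those early fixed-input chatbots can be illustrated with a minimal rule-based bot. The keywords and replies below are invented for illustration; real chatbots use NLP rather than exact keyword matching.

```python
# A tiny rule-based chatbot of the early, fixed-input kind described above.
# The patterns and replies here are invented for illustration.
RULES = {
    "hello": "Hello! How can I help you today?",
    "price": "Our products start at Rs. 499.",
    "bye":   "Thank you for chatting with us!",
}

def reply(message):
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    # Anything outside the configured inputs cannot be handled --
    # the limitation that NLP-based chatbots later removed.
    return "Sorry, I did not understand that."

print(reply("Hello there"))   # matched rule
print(reply("What is NLP?"))  # falls outside the configured inputs
```

NLP-based chatbots replace the keyword lookup with models that interpret meaning, so unseen phrasings still get a sensible response.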
• 81. Save Our Planet: ➢ Artificial Intelligence can help us create a better world by addressing complex environmental issues such as climate change, pollution, water scarcity, extinction of species, and more. By transforming traditional sectors and systems, AI can prevent the adverse effects that threaten the existence of biodiversity. ➢ According to a report published by Intel in 2018, 74% of the 200 professionals in this field surveyed agreed that AI can help solve environmental challenges. ➢ AI’s ability to gather and process vast amounts of unstructured data and derive patterns from them proves beneficial in resolving many environmental issues. ➢ An autonomous floating garbage truck powered by AI is another case in point. This autonomous truck is designed to remove large amounts of plastic pollution from oceans and safeguard sea life. Customer Service: ➢ Google is working on an AI assistant that can place human-like calls to make appointments at, say, your neighborhood hair salon. In addition to words, the system understands context and nuance.
• 82. Predictive Analytics ➢ Do you know how some of the largest companies in the world make so much money? They use many different strategies, but AI-powered predictive analytics is one of the most important. Take a look at Netflix, for example. ➢ The popular media service provider saves more than $1 billion every year thanks to its powerful content recommendation system. ➢ Instead of giving irrelevant suggestions, Netflix analyzes users’ real interests to provide them with highly personalized content suggestions. ➢ Amazon does pretty much the same thing – it utilizes AI to display tailored product ads to each website visitor individually. The system understands users’ earlier interactions with the company and hence designs perfect cross-sales offers. Automotive Industry ➢ Tesla is not only a leader in the electric vehicle market but also a pioneer of AI deployment in the automotive industry. The company is famous for its autonomous car program, which relies on AI to design independent vehicles that can operate on their own. ➢ Its cars have already amassed more than a billion miles of autopilot data in the US, helping the developers to improve the system and add a wide range of other functions. That includes features such as ABS braking, smart airbags, lane guidance, cruise control, and many more.
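A toy version of such a recommendation system can rank items by similarity to a user's tastes. The titles and genre scores below are invented, and this is only a sketch of the content-based idea, not Netflix's or Amazon's actual method.

```python
import math

# A toy content-based recommender. Titles and feature scores are invented.
# Features: [action, comedy, romance]
catalogue = {
    "Space Battle":  [0.9, 0.1, 0.0],
    "Laugh Riot":    [0.1, 0.9, 0.2],
    "Heart Strings": [0.0, 0.3, 0.9],
}

def cosine(a, b):
    # Cosine similarity: 1.0 means identical taste direction.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def recommend(user_profile):
    # Rank titles by similarity to what the user already likes.
    return max(catalogue, key=lambda t: cosine(user_profile, catalogue[t]))

print(recommend([0.8, 0.2, 0.1]))  # an action-leaning viewer
```

Real systems add far more signals (viewing history, context, collaborative data), but the ranking-by-similarity idea is the same.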
• 83. Healthcare ➢ The entire healthcare industry is getting better due to the influence of AI. Watson for Oncology is a true gem in the crown, as it combines years of cancer treatment studies and practice to come up with individualized therapies for patients. There are many other use cases we could discuss, but let’s just mention a few: ➢ AI is used in biopharmaceutical research. ➢ AI is used for earlier cancer and blood disease detection. ➢ AI is used in radiology for scan analysis and diagnostics. ➢ AI is used for rare disease treatment. Music Industry: ➢ If you are into music, you must know Pandora. This music streaming service is known for its machine learning algorithms, which allow the provider to analyze audio content and make incredibly accurate song suggestions. ➢ Namely, Pandora analyzes a database of music to look at over 450 attributes. AI helps it to analyze everything from vocals and percussion to grungy guitars and synthesizer echoes. Such accurate data analytics enables Pandora to create song lists and recommendations that perfectly match listeners’ preferences. ➢ AI also helps music composers by suggesting appropriate tunes.
• 84. Creating Powerful Websites: ➢ Creating a website may sound difficult, but today there are many different website builders that use AI. It has never been easier to create a website with sophisticated AI design tools. ➢ Most require little to no coding skills, offer nearly endless options of well-designed templates, and include AI features to optimize your site seamlessly. ➢ Artificial intelligence has made web design easier than ever. However, you still need to choose the right tools. With AI website builders, designing and optimizing your site is no longer hard or expensive. Listed below are some of the challenges posed by AI 1. Integrity of AI ➢ AI systems learn by analysing huge volumes of data. What are the consequences of using biased data (via the training data set) in favor of a particular class/section of customers or users? ➢ In 2016, the professional networking site LinkedIn was discovered to have a gender bias in its system. When a search was made for the female name ‘Andrea’, the platform would show recommendations/results of male users with the name ‘Andrew’ and its variations. However, the site did not show similar recommendations/results for male names, i.e. a search/query for the name ‘Andrew’ did not result in a prompt asking users if they meant to find ‘Andrea’. The company said this was due to a gender bias in their training data, which they fixed later.
• 85. 2. Technological Unemployment ➢ Due to heavy automation (with the advent of AI and robotics), some people will lose their jobs, which will be taken over by intelligent machines. There will be significant changes in the workforce and the market: some high-skilled jobs will be created, while some roles and jobs will become obsolete. 3. Disproportionate control over data ➢ Data is the fuel of AI; the more data you have, the more intelligent a machine you can develop. Technology giants are investing heavily in AI and data acquisition projects. This gives them an unfair advantage over their smaller competitors. 4. Privacy ➢ In this digitally connected world, privacy will become next to impossible. Numerous consumer products, from smart home appliances to computer applications, have features that make them vulnerable to data exploitation by AI. AI can be utilized to identify, track and monitor individuals across multiple devices, whether they are at work, at home, or in a public location. To complicate things further, AI does not forget anything. Once AI knows you, it knows you forever!
• 86. Cognitive Computing (Perception, Learning, Reasoning) ➢ In summary, Cognitive Computing can be defined as a technology platform, built on AI and signal processing, that mimics the functioning of a human brain (speech, vision, reasoning etc.) and helps humans in decision making. (For additional reference please see: https://www.coursera.org/lecture/introduction-to-ai/cognitive-computing-perception-learning-reasoning-UBtrp)
• 87. Now that we have understood what Cognitive Computing is, let us explore the need for it. ➢ Enormous amounts of unstructured data and information (Facebook pages, Twitter posts, WhatsApp data, sensor data, traffic data, traffic signal data, medical reports and so on) are available to us in this digital age. ➢ We need an advanced technology (traditional computing cannot process this amount of data) to make sense of the data in order to help humans take better decisions. Because Cognitive Computing generates new knowledge using existing knowledge, it is viewed as a platform holding the potential of future computing. Applications of Cognitive Computing ➢ In mainstream usage, Cognitive Computing is used to aid humans in their decision-making process. Some examples of Cognitive Computing and its applications include the treatment of disease/illness by supporting medical doctors. ➢ For example, ‘The IBM Watson for Oncology’ has been used at the ‘Memorial Sloan Kettering Cancer Center’ to provide oncologists with evidence-based treatment alternatives for patients having cancer. When medical staff pose questions, Watson generates a list of hypotheses and offers treatment possibilities for doctors. Watch this video: https://www.coursera.org/lecture/introduction-to-ai/cognitive-computing-perception-learning-reasoning-UBtrp
• 88. Cognitive Computing (Perception, Learning, Reasoning) ➢ Cognitive Computing refers to individual technologies that perform specific tasks to facilitate human intelligence. ➢ Basically, these are smart decision support systems that we have been working with since the beginning of the internet boom. ➢ With recent breakthroughs in technology, these support systems simply use better data and better algorithms to get a better analysis of a huge amount of information.
• 89. Also, you can refer to Cognitive Computing as: ➢ Understanding and simulating reasoning ➢ Understanding and simulating human behavior Cognitive computing systems help humans make better decisions at work. Some of the applications of cognitive computing include speech recognition, sentiment analysis, face detection, risk assessment, and fraud detection. ➢ The term cognitive computing is typically used to describe AI systems that aim to simulate human thought. ➢ A number of AI technologies are required for a computer system to build cognitive models that mimic human thought processes, including machine learning, deep learning, neural networks, NLP and sentiment analysis. ➢ In light of the above notions, we define cognitive reasoning as modeling the human ability to draw meaningful conclusions despite incomplete and inconsistent knowledge.
  • 90. How Cognitive Computing Works? ➢ Cognitive computing systems synthesize data from various information sources while weighing context and conflicting evidence to suggest suitable answers. ➢ To achieve this, cognitive systems include self-learning technologies using data mining, pattern recognition, and natural language processing (NLP) to understand the way the human brain works.
• 91. Perception: ➢ Cognitive perception involves the senses – listening, seeing, tasting, smelling and feeling – through which we take in information. ➢ Perception - obtaining information from our environment. ➢ Perception is the ability to capture, process and actively make sense of the information that our senses receive. It is a cognitive process that makes it possible to interpret our surroundings from the stimuli we receive through our sensory organs. ➢ Perception is the process of acquiring, interpreting, selecting, and organizing sensory information. In AI, the study of perception is mostly focused on reproducing human perception, especially the perception of aural and visual signals. ➢ Perception often results in learning information that is directly relevant to our goals. Perception becomes more skillful with practice and experience, and perceptual learning can be thought of as the education of attention.
• 92. Learning: Through machine learning, AI is able to learn from a data set in order to solve problems and give relevant recommendations. Cognitive computing systems learn at scale, reason with purpose and interact with humans naturally. (e.g.) We learn that proper respect is shown to the National Anthem by standing up when it is played.
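As a sketch of "learning from a data set", here is a tiny nearest-neighbour classifier in pure Python. The measurements and labels below are invented, and this is only one of many possible learning algorithms.

```python
# Learning from labelled examples, sketched as a 1-nearest-neighbour
# classifier. The (length, width) measurements and labels are invented.
training_data = [
    ((1.4, 0.2), "small"),
    ((1.5, 0.3), "small"),
    ((4.7, 1.4), "large"),
    ((4.5, 1.5), "large"),
]

def predict(sample):
    # The "learning" here is remembering labelled examples and
    # answering with the label of the closest one.
    def distance(point):
        return sum((a - b) ** 2 for a, b in zip(point, sample))
    closest = min(training_data, key=lambda item: distance(item[0]))
    return closest[1]

print(predict((1.3, 0.25)))  # resembles the "small" examples
print(predict((4.6, 1.3)))   # resembles the "large" examples
```

Given more examples, the same system makes better predictions without any change to its code, which is the essence of learning at scale.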
  • 93. Reasoning: Reasoning is the mental (cognitive) process of looking for reasons for beliefs, conclusions, actions or feelings. Humans have the ability to engage in reasoning about their own reasoning using introspection. (e.g.) If we saw a cat with three legs we would reason it perhaps lost a leg in an unfortunate accident or was born with a congenital deformity. We would not conclude it is a new species of cat.
• 94. To achieve these capabilities, cognitive computing systems must have some key attributes. Key Attributes: Adaptive: Cognitive systems must be flexible enough to understand changes in the information. The systems must also be able to digest dynamic data in real time and make adjustments as the data and environment change. Interactive: Human-Computer Interaction (HCI) is a critical component in cognitive systems. Users must be able to interact with cognitive machines and define their needs as those needs change. The technologies must also be able to interact with other processors, devices and cloud platforms. Iterative and Stateful: These systems must be able to identify problems by asking questions or pulling in additional data if a stated problem is vague or incomplete. The systems do this by maintaining information about similar situations that have previously occurred. Contextual: Cognitive systems must understand, identify and mine contextual data, such as syntax, time, location, domain, requirements, a specific user’s profile, tasks or goals. They may draw on multiple sources of information, including structured and unstructured data and visual, auditory or sensor data.
• 95. Cognitive Computing vs Artificial Intelligence ➢ Cognitive Computing focuses on mimicking human behavior and reasoning to solve complex problems; AI augments human thinking to solve complex problems and focuses on providing accurate results. ➢ Cognitive Computing simulates human thought processes to find solutions to complex problems; AI finds patterns to learn or reveal hidden information and find solutions. ➢ Cognitive systems simply supplement information for humans to make decisions; AI is responsible for making decisions on its own, minimizing the role of humans. ➢ Cognitive Computing is mostly used in sectors like customer service, health care, industry, etc.; AI is mostly used in finance, security, healthcare, retail, manufacturing, etc.
• 96. Applications of Cognitive AI ➢ Smart IoT: This includes connecting and optimizing devices, data and the IoT. But assuming we get more sensors and devices, the real key is what’s going to connect them. ➢ AI-Enabled Cybersecurity: We can fight cyber-attacks with the use of data security encryption and enhanced situational awareness powered by AI. This will provide document, data, and network locking using smart distributed data secured by an AI key. ➢ Content AI: A solution powered by cognitive intelligence continuously learns and reasons, and can simultaneously integrate location, time of day, user habits, semantic intensity, intent, sentiment, social media, contextual awareness, and other personal attributes. ➢ Cognitive Analytics in Healthcare: The technology implements human-like reasoning software functions that perform deductive, inductive and abductive analysis for life sciences applications. ➢ Intent-Based NLP: Cognitive intelligence can help a business become more analytical in its approach to management and decision making. This will work as the next step from machine learning, and future applications of AI will incline towards using it for performing logical reasoning and analysis.
• 97. AI and Society ➢ Business consultant Ronald van Loon asserts, “The role of Artificial Intelligence devices in augmenting humans and in achieving tasks that were previously considered unachievable is just amazing.” ➢ With the world progressing towards an age of unlimited innovations and unhindered progress, we can expect that AI will have a greater role in actually serving us for the better. ➢ Below are a number of ways pundits expect AI to benefit society. In the Workplace: ➢ Many futurists fear AI will make humans redundant, dramatically increasing unemployment and causing social unrest. ➢ Many other futurists believe AI and other Information Age technologies will create more jobs than they eliminate. Regardless of which viewpoint is correct, there will be job displacements requiring retraining and reskilling. ➢ Beyond AI’s impact on employment, Peter Daisyme, co-founder of Hostt, asserts AI will make the workplace safer. He explains, “Many jobs, despite a focus on health and safety, still involve a considerable amount of risk.” ➢ Risks involved for employees of U.S. factories can be alleviated by the use of the Industrial Internet of Things and predictive maintenance. ➢ The role of AI and cognitive computing [involves] improving the link between maintenance and workplace safety. ➢ These technologies address issues before they put factory workers at risk. In addition, AI can take on more roles within the factory, eliminating the need for humans to work in potentially dangerous areas. ➢ AI can also help relieve workers from having to perform tedious jobs, freeing them to perform more interesting and satisfying work.
• 98. In Healthcare: ➢ Ilan Moscovitz asserts, “Healthcare is ground zero for AI.” In fact, AI has been quietly helping doctors treat diseases for almost its entire existence. AI is being used to assist medical professionals in their diagnoses and to help develop effective drugs more quickly. ➢ AI can help monitor a patient’s condition 24/7, regardless of their location. It can help predict which patients are likely to make repeat visits to emergency rooms and can even predict when patients are likely to die. In Agriculture: ➢ Artificial intelligence is only one of the Information Age technologies that will help us feed a growing world population using fewer resources. ➢ Precision farming and other modern techniques rely heavily on these technologies. In Transportation: ➢ Driverless cars are being pursued by every major automobile manufacturer. ➢ These vehicles are predicted to dramatically reduce deaths due to accidents by reducing human errors and constantly monitoring the physical condition of the car. ➢ In addition, autonomous trucks are being tested in platoons in North Carolina, and autonomous ships are predicted to one day fill sea lanes delivering supplies. ➢ Those are only a few of the ways cognitive technologies are predicted to affect our lives in the years ahead.
• 99. In Education: ➢ Artificial intelligence can be used to personalize education so students can advance at the pace best suited to their skills, temperament, and mental abilities. ➢ These technologies won’t eliminate the need for talented teachers; rather, they will help free teachers to deal with special needs. In Our Cities: ➢ There is a growing smart cities movement aimed at helping urban residents use scarce resources more wisely. ➢ As the urbanization trend continues to see cities gain population, smart technologies will help make urban life more pleasurable and affordable. ➢ AI systems will help manage traffic, improve public transportation, monitor water systems and electrical grids, and assist law enforcement. ➢ Daisyme notes, “Smart cities, powered by AI platforms, are changing the very fiber of urban areas. AI technology is addressing old social ills in new ways.” In Our Environment: ➢ Despite the doubters, scientists have used big data and sophisticated models to demonstrate how our environment is changing. ➢ Daisyme writes, “Up to now, there has been no way to fully understand the impact that human beings and economic development have on the environment, including their adverse effects on plant and animal life. ➢ But that may soon change, as AI provides the necessary tools to collect, sift and organize extraordinary amounts of data. ➢ AI can help us prevent further damage and better understand how to address development needs while focusing on sustainability.”
• 100. ***Recommended Deep-Dive in NLP Deep Dive: ➢ A deep dive analysis has been described as ‘a strategy of immersing a team rapidly into a situation, to provide solutions or create ideas’. ➢ Deep dive analysis will normally focus on areas such as process, organization, leadership and culture. Natural Language Processing: ➢ NLP stands for Natural Language Processing, which is a part of Computer Science, Human Language, and Artificial Intelligence. ➢ It is the technology that is used by machines to understand, analyse, manipulate, and interpret human languages. ➢ It helps developers to organize knowledge for performing tasks such as translation, automatic summarization, Named Entity Recognition (NER), speech recognition, relationship extraction, and topic segmentation. ➢ Natural Language Processing (NLP) is one of the most important technologies in Data Science, one of the top branches of machine learning and artificial intelligence (AI), and has abundant job prospects. ➢ Applications of NLP are found everywhere communication is handled mostly by language: web searches and other AI-based applications.
• 101. ➢ Work on Natural Language Processing started in the 1940s. ➢ In 1948, the first recognizable NLP application was introduced at Birkbeck College, London. ➢ In 1965, the MIT AI laboratory created ELIZA, the first chatbot based on Natural Language Processing (NLP). Advantages of NLP ➢ NLP helps users to ask questions about any subject and get a direct response within seconds. ➢ NLP offers exact answers to questions, without unnecessary and unwanted information. ➢ NLP helps computers to communicate with humans in their own languages. ➢ It is very time efficient. ➢ Many companies use NLP to improve the efficiency and accuracy of documentation processes, and to identify information in large databases.
• 102. Components of NLP 1. Natural Language Understanding (NLU) ➢ Natural Language Understanding (NLU) helps the machine to understand and analyse human language by extracting metadata from content, such as concepts, entities, keywords, emotion, relations, and semantic roles. ➢ NLU is mainly used in business applications to understand the customer's problem in both spoken and written language. NLU involves the following tasks: ➢ It is used to map the given input into a useful representation. ➢ It is used to analyze different aspects of the language. 2. Natural Language Generation (NLG) ➢ Natural Language Generation (NLG) acts as a translator that converts computerized data into a natural language representation. ➢ It mainly involves text planning, sentence planning, and text realization.
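A minimal illustration of NLG's planning and realization steps: a structured record is turned into a natural-language sentence. The weather record and template below are invented for illustration; real NLG systems plan and vary their output far more richly.

```python
# Toy Natural Language Generation: computerized data -> sentence.
# The weather record is invented for illustration.
record = {"city": "Chennai", "temp_c": 31, "condition": "sunny"}

def generate(data):
    # Sentence planning: pick a template.
    # Text realization: fill in the data values.
    return (f"The weather in {data['city']} is {data['condition']} "
            f"at {data['temp_c']} degrees Celsius.")

print(generate(record))
```

The same record could be realized as many different sentences; choosing between them is the "text planning" part of NLG.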
• 103. Applications and Future of NLP: 1. Question Answering ➢ Question answering focuses on building systems that automatically answer questions asked by humans in a natural language. 2. Spam Detection ➢ Spam detection is used to detect unwanted e-mails before they reach a user's inbox. 3. Sentiment Analysis ➢ Sentiment analysis is also known as opinion mining. It is used on the web to analyse the attitude, behavior, and emotional state of the sender. ➢ This application is implemented through a combination of NLP (Natural Language Processing) and statistics: values are assigned to the text (positive, negative, or neutral) to identify the mood of the context (happy, sad, angry, etc.). 4. Machine Translation ➢ Machine translation is used to translate text or speech from one natural language to another. 5. Spelling Correction ➢ Microsoft Corporation provides word processor software like MS Word and PowerPoint with spelling correction.
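The sentiment-analysis idea of assigning values to text can be sketched with a toy word lexicon. The word scores below are invented; real systems use much larger lexicons or learned models.

```python
# A toy lexicon-based sentiment analyser: assign values to words,
# then sum them to judge the mood. The scores below are invented.
LEXICON = {"good": 1, "great": 2, "happy": 1,
           "bad": -1, "terrible": -2, "sad": -1}

def sentiment(text):
    words = (w.strip(".,!?") for w in text.lower().split())
    score = sum(LEXICON.get(word, 0) for word in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("The service was great and I am happy"))  # positive
print(sentiment("A terrible, sad experience"))            # negative
```

Words outside the lexicon score zero, which is why larger vocabularies (or statistical models) give more reliable results.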
• 104. 6. Speech Recognition ➢ Speech recognition is used for converting spoken words into text. ➢ It is used in applications such as mobile phones, home automation, video retrieval, dictating to Microsoft Word, voice biometrics, voice user interfaces, and so on. 7. Chatbot ➢ Implementing a chatbot is one of the important applications of NLP. Chatbots are used by many companies to provide chat services to their customers. 8. Information Extraction ➢ Information extraction is one of the most important applications of NLP. ➢ It is used for extracting structured information from unstructured or semi-structured machine-readable documents. 9. Natural Language Understanding (NLU) ➢ It converts a large set of text into more formal representations, such as first-order logic structures, that are easier for computer programs to manipulate.
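A small sketch of information extraction: pulling structured fields out of unstructured text with regular expressions. The message, names and numbers below are invented for illustration; production systems use trained models rather than hand-written patterns.

```python
import re

# Toy information extraction from unstructured text.
# The message below is invented for illustration.
text = "Contact Priya at priya@example.com or call 98765-43210 before 25/12/2023."

# Simple hand-written patterns for two kinds of structured fields.
emails = re.findall(r"[\w.]+@[\w.]+\.\w+", text)
dates = re.findall(r"\d{2}/\d{2}/\d{4}", text)

print(emails)  # ['priya@example.com']
print(dates)   # ['25/12/2023']
```

The extracted fields can then be stored in a database, which is exactly the unstructured-to-structured step the bullet above describes.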
• 107. 1. Lexical and Morphological Analysis ➢ The first phase of NLP is lexical analysis. This phase scans the source text as a stream of characters and converts it into meaningful lexemes. It divides the whole text into paragraphs, sentences, and words. 2. Syntactic Analysis (Parsing) ➢ Syntactic analysis is used to check grammar and word arrangements, and shows the relationships among the words. ➢ Example: Agra goes to the USA ➢ In the real world, "Agra goes to the USA" does not make any sense, so this sentence is rejected by the syntactic analyzer. 3. Semantic Analysis ➢ Semantic analysis is concerned with meaningful representation. It mainly focuses on the literal meaning of words, phrases, and sentences. 4. Discourse Integration ➢ Discourse integration means the meaning of a sentence depends upon the sentences that precede it, and it also invokes the meaning of the sentences that follow it. 5. Pragmatic Analysis ➢ Pragmatic analysis is the fifth and last phase of NLP. It helps you to discover the intended effect by applying a set of rules that characterize cooperative dialogues. ➢ For example: "Open the door" is interpreted as a request instead of an order.
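The first phase, lexical analysis, can be sketched as dividing raw text into sentences and words. This toy version uses simple patterns and omits morphological analysis; the example sentences are invented.

```python
import re

# Toy lexical analysis: divide text into sentences, then words.
# The example text is invented; real tokenizers handle far more cases.
text = "Agra is a city. NLP divides text into units."

# Split into sentences at a full stop followed by whitespace.
sentences = [s.strip() for s in re.split(r"(?<=\.)\s+", text) if s.strip()]
# Split each sentence into word tokens (lexemes, roughly).
words = [re.findall(r"\w+", s) for s in sentences]

print(len(sentences))  # 2 sentences
print(words[0])        # ['Agra', 'is', 'a', 'city']
```

The later phases (syntactic, semantic, discourse, pragmatic analysis) then operate on these tokens rather than on raw characters.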
• 108. NLP Libraries: ➢ Scikit-learn: It provides a wide range of algorithms for building machine learning models in Python. ➢ Natural Language Toolkit (NLTK): NLTK is a complete toolkit for all NLP techniques. ➢ Pattern: It is a web mining module for NLP and machine learning. ➢ TextBlob: It provides an easy interface for basic NLP tasks like sentiment analysis, noun phrase extraction, or POS tagging. ➢ Quepy: Quepy is used to transform natural language questions into queries in a database query language. ➢ SpaCy: SpaCy is an open-source NLP library which is used for data extraction, data analysis, sentiment analysis, and text summarization. ➢ Gensim: Gensim works with large datasets and processes data streams.
• 109. ***Recommended Deep-Dive in Computer Vision Computer Vision: ➢ It is an interdisciplinary scientific field that deals with how computers can gain high-level understanding from digital images or videos. ➢ From the perspective of engineering, it seeks to understand and automate tasks that the human visual system can do. Object Classification: What broad category of object is in this photograph? Object Identification: Which type of a given object is in this photograph? Object Verification: Is the object in the photograph? Object Detection: Where are the objects in the photograph (location of the objects)? Object Landmark Detection: What are the key points for the object in the photograph? Object Segmentation: What pixels belong to the object in the image? Object Recognition: What objects are in this photograph, and where are they?
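Object segmentation, the "which pixels belong to the object" question, can be illustrated on a tiny invented grayscale image with a fixed brightness threshold. Real segmentation systems learn far richer rules than this, so treat it purely as a sketch of the idea.

```python
# Toy object segmentation on a 4x4 grayscale "image".
# Pixel values (0-255) are invented; bright pixels are the "object".
image = [
    [ 10,  12, 200, 210],
    [ 11, 205, 220, 215],
    [  9, 198, 202,  13],
    [ 10,  12,  11,  10],
]

THRESHOLD = 128  # pixels brighter than this are treated as "object"

# The segmentation mask answers, pixel by pixel: object or background?
mask = [[1 if pixel > THRESHOLD else 0 for pixel in row] for row in image]
object_pixels = sum(sum(row) for row in mask)

print(object_pixels)  # 7 pixels belong to the object
```

Detection, classification, and recognition build on the same pixel-level reasoning, just with learned features instead of a fixed threshold.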
• 110. Applications and Future Smartphones and Web: ➢ Google Lens, QR codes, Snapchat filters (face tracking), Night Sight, face and expression detection, Lens Blur, Portrait mode, Google Photos (face, object and scene recognition), Google Maps (image stitching). Medical Imaging: ➢ Computer vision helps in MRI reconstruction, automatic pathology, diagnosis, computer-aided surgeries and more. Insurance: ➢ Property inspection and damage analysis. Optical Character Recognition (OCR): ➢ Optical character recognition (optical character reading) is the electronic or mechanical conversion of images of typed, handwritten or printed text into machine-encoded text, whether from a scanned document, a photo of a document, a scene photo or from subtitle text superimposed on an image. 3D Model Building (Photogrammetry): ➢ 3D modeling refers to the process of creating a mathematical representation of a 3-dimensional object or shape. ➢ Motion pictures, video games, architecture, construction, product development and medicine all use 3D models for visualizing, simulating and rendering graphic designs. Merging CGI with Live Actors in Movies: ➢ Films that are both live-action and computer-animated tend to have fictional characters or figures represented and characterized by cast members through motion capture and then animated and modeled by animators, while films that are live-action and traditionally animated use hand-drawn or computer-generated imagery (CGI).
  • 111. Industry: ➢ On manufacturing and assembly lines, computer vision is used for automated inspection, identifying defective products on the production line, and remote inspection of machinery. ➢ The technology is also used to increase the efficiency of the production line. Automotive: ➢ Self-driving AI analyzes data from a camera mounted on the vehicle to automate lane finding, detect obstacles, and recognize traffic signs and signals. Automated Lip Reading: ➢ This is a practical implementation of computer vision that helps people with disabilities who cannot speak: it reads the movement of the lips and compares it with known movements that were recorded and used to train the model. AR/VR: ➢ Object occlusion, outside-in tracking, and inside-out tracking for virtual and augmented reality. Internet: ➢ Image search, mapping, photo captioning, aerial imaging for maps, video categorization and more.
  • 112. Oil and Natural Gas: ➢ Oil and natural gas companies produce millions of barrels of oil and billions of cubic feet of gas every day, but before that can happen, geologists have to find feasible locations from which oil and gas can be extracted. ➢ To find these locations they have to analyze thousands of candidate sites using images taken on the spot. ➢ If geologists had to analyze each image manually, finding the best location could take months or even a year; with computer vision, the analysis can be brought down to a few days or even a few hours. ➢ You just feed the images to a pre-trained model and it gets the work done. Hiring process: ➢ In the HR world, computer vision is changing how candidates get hired. By using computer vision, machine learning and data science, recruiters are able to quantify soft skills and conduct early candidate assessments to help large companies shortlist candidates. Video surveillance: ➢ The concept of video tagging is used to tag videos with keywords based on the objects that appear in each scene. ➢ Imagine being the security company asked to look for a suspect in a blue van among hours and hours of footage: you just feed the video to the algorithm. ➢ With computer vision and object recognition, manually searching through videos has become a thing of the past.
  • 113. Construction: ➢ Flying a drone near the wires around an electric tower doesn't sound particularly safe either. ➢ So how could you apply computer vision here? Imagine a person on the ground taking high-resolution images from different angles. ➢ A computer vision specialist could then create a custom classifier and use it to detect flaws and the amount of rust or cracks present. Healthcare: ➢ Over the past few years, the healthcare industry has adopted many next-generation technologies, including artificial intelligence and machine learning. ➢ One of them is computer vision, which helps determine or diagnose disease in humans and other living creatures. Agriculture: ➢ Farms have started using computer vision technologies in various forms, such as smart tractors, smart farming equipment and drones, to help monitor and maintain fields efficiently and easily. ➢ It also helps improve the yield and quality of crops. Military: ➢ For modern armies, computer vision is an important technology that helps them detect enemy troops and enhances the targeting capabilities of guided missile systems. ➢ It uses image sensors to deliver battlefield intelligence for tactical decision-making. ➢ Another important computer vision application is in autonomous vehicles such as UAVs and remote-controlled semi-automatic vehicles, which need to navigate challenging terrain.
  • 115. 1. Digitization ➢ It is the process of converting information into a digital format. ➢ The result is the representation of an object, image, sound, document or signal by generating a series of numbers that describe a discrete set of points or samples. 2. Signal Processing ➢ It is an electrical engineering subfield that focuses on analyzing, modifying and synthesizing signals such as sound, images and scientific measurements. ➢ Digital Filters, Linear Filters, Gaussian Filters & Smoothing. 3. Edge Detection ➢ Edge detection includes a variety of mathematical methods that aim at identifying points in a digital image at which the image brightness changes sharply or, more formally, has discontinuities. ➢ The points at which image brightness changes sharply are typically organized into a set of curved line segments termed edges. ➢ Identifying Edge Pixels, Gradient Operators, Roberts' Operator, Sobel's Operator, Laplacian Operator & Successive Refinement and Marr's Primal Sketch.
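The gradient operators listed above can be illustrated with Sobel's operator: two small kernels that approximate the image gradient, so pixels where the gradient magnitude is large are candidate edge pixels. This is a minimal hand-rolled sketch (libraries such as OpenCV provide optimized versions); the 5x6 test image with a vertical step edge is an illustrative assumption.

```python
import numpy as np

# Sobel's operator: 3x3 kernels approximating the brightness
# gradient in the x and y directions.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])
SOBEL_Y = SOBEL_X.T

def sobel_magnitude(image):
    """Gradient magnitude of a 2-D grayscale image (valid region only)."""
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = image[i:i + 3, j:j + 3]
            gx = np.sum(SOBEL_X * patch)  # horizontal brightness change
            gy = np.sum(SOBEL_Y * patch)  # vertical brightness change
            out[i, j] = np.hypot(gx, gy)
    return out

# A vertical step edge: dark left half, bright right half.
img = np.zeros((5, 6))
img[:, 3:] = 255

mag = sobel_magnitude(img)
print(mag[0])  # response is strongest where the brightness jumps
```

Thresholding `mag` and linking strong pixels into curves is what turns this raw gradient response into the edge segments described on the slide.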
  • 116. 4. Region Detection ➢ Region-Based Convolutional Neural Networks (R-CNN) are a family of machine learning models for computer vision, specifically object detection. ➢ Region Growing, The Problem of Texture, Representing Regions (Quad-Trees) & Computational Problems. 5. Reconstructing Objects ➢ In computer vision and computer graphics, 3D reconstruction is the process of capturing the shape and appearance of real objects. ➢ Waltz's Algorithm & Problems with Labelling. 6. Identifying Objects ➢ Object detection is a computer technology related to computer vision and image processing that deals with detecting instances of semantic objects of a certain class (such as humans, buildings or cars) in digital images and videos. ➢ Using Bitmaps, Using Summary Statistics, Using Outlines, Handwritten Recognition & Using Properties of Regions. 7. Multiple Images ➢ Information from multiple images of the same scene can be combined to recover depth and motion. ➢ Stereo Vision & Moving Pictures.
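The region growing technique named under Region Detection can be sketched in a few lines: starting from a seed pixel, keep absorbing 4-connected neighbours whose intensity is close to the seed's. This is a classical hand-coded method, not the learned R-CNN approach; the toy image, seed and tolerance below are illustrative assumptions.

```python
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from `seed`: add 4-connected neighbours whose
    intensity is within `tol` of the seed pixel's value."""
    h, w = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    region = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w and (nr, nc) not in region
                    and abs(image[nr][nc] - seed_val) <= tol):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region

# Toy image: a dark region (values near 10) and a bright one (near 90).
img = [
    [10, 12, 90, 91],
    [11, 13, 92, 90],
    [10, 11, 12, 93],
]
print(len(region_grow(img, (0, 0), 5)))  # size of the dark region -> 7
```

Growing from a seed in the bright area instead would recover the complementary region, which is exactly how such methods partition an image into homogeneous patches.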