1. Course : B.TECH
Branch : CSE
Semester : 5th SEM/ 7th Semester
Subject Name : Artificial Intelligence
Subject Code :
Lecture No. /Topic : 01-08/Unit 5
Prepared by : Abhishek Singh Sengar
Digital Notes
[Department of Computer Science Engineering]
Maharana Pratap Group of Institutions, Mandhana, Kanpur
(Approved By AICTE, New Delhi And Affiliated To AKTU, Lucknow)
2. Unit 5 Syllabus
• Unit V APPLICATIONS: AI applications – Language Models – Information Retrieval – Information Extraction – Natural Language Processing – Machine Translation – Speech Recognition – Robot – Hardware – Perception – Planning – Moving
3. Language Models
• A language model is a core component of modern Natural Language Processing (NLP). It is a statistical tool that analyzes patterns in human language in order to predict words.
• NLP-based applications use language models for a variety of tasks, such as audio-to-text conversion, speech recognition, sentiment analysis, summarization, spell correction, etc.
4. Types of Language Models:
• There are primarily two types of language
models:
• Statistical Language Models
• Neural Language Models
5. 1. Statistical Language Models
• Statistical language models are probabilistic models that predict the next word in a sequence, given the words that precede it.
• A number of statistical language models are in
use already. Let’s take a look at some of those
popular models:
6. N-Gram:
• This is one of the simplest approaches to language modelling. Here, a probability distribution is created for a sequence of 'n' words, where 'n' can be any number and defines the size of the gram (the sequence of words being assigned a probability). If n = 4, a gram may look like: "can you help me".
• Basically, ‘n’ is the amount of context that the
model is trained to consider. There are different
types of N-Gram models such as unigrams,
bigrams, trigrams, etc.
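• A minimal counting sketch of a bigram (n = 2) model follows; the toy corpus and function names below are made up for these notes, not taken from any library:

from collections import defaultdict

# A toy corpus; in practice the counts come from a large text collection.
corpus = ["can you help me", "can you call me", "you help me please"]

bigram_counts = defaultdict(int)
unigram_counts = defaultdict(int)

for sentence in corpus:
    words = sentence.split()
    for w in words:
        unigram_counts[w] += 1
    for w1, w2 in zip(words, words[1:]):
        bigram_counts[(w1, w2)] += 1

def bigram_prob(w1, w2):
    """P(w2 | w1) estimated by maximum likelihood: count(w1 w2) / count(w1)."""
    if unigram_counts[w1] == 0:
        return 0.0
    return bigram_counts[(w1, w2)] / unigram_counts[w1]

print(bigram_prob("can", "you"))   # 1.0 -> "you" always follows "can" here
print(bigram_prob("you", "help"))  # 2/3 -> "help" follows "you" in 2 of 3 cases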
7. Unigram:
• The unigram is the simplest type of language
model. It doesn't look at any conditioning context
in its calculations. It evaluates each word or term
independently. Unigram models commonly
handle language processing tasks such as
information retrieval.
• The unigram is the foundation of a more specific model variant called the query likelihood model, which is used in information retrieval to examine a pool of documents and match the most relevant one to a specific query.
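• The sketch below shows the query likelihood idea on a toy scale; the two documents and the scoring function are hypothetical, and no smoothing is applied:

from collections import Counter

# Toy document collection (hypothetical); real systems index far more text.
docs = {
    "d1": "the cat sat on the mat",
    "d2": "the dog chased the cat",
}

def query_likelihood(query, doc_text):
    """Score a document by the product of unigram probabilities of the query terms."""
    counts = Counter(doc_text.split())
    total = sum(counts.values())
    score = 1.0
    for term in query.split():
        score *= counts[term] / total  # zero if a term is absent (no smoothing)
    return score

query = "the cat"
ranked = sorted(docs, key=lambda d: query_likelihood(query, docs[d]), reverse=True)
print(ranked)  # ['d2', 'd1'] -- the query terms form a larger share of d2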
8. Bidirectional:
• Unlike n-gram models, which analyze text in one direction (conditioning only on the words that come before), bidirectional models analyze text in both directions, backwards and forwards. These models can predict any word in a sentence or body of text by using every other word in the text.
• Examining text bidirectionally increases result
accuracy. This type is often utilized in machine
learning and speech generation applications. For
example, Google uses a bidirectional model to
process search queries.
9. Exponential:
• This type of statistical model evaluates text by
using an equation which is a combination of n-
grams and feature functions. Here the features
and parameters of the desired results are already
specified.
• The model is based on the principle of maximum entropy, which states that the probability distribution with the most entropy, subject to the specified feature constraints, is the best choice. Exponential models make fewer statistical assumptions, which means the chances of obtaining accurate results are higher.
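• As a rough sketch of the idea, the snippet below scores candidate words with hand-set feature functions and weights; in a real maximum-entropy model the weights are learned from data, and these features are invented for illustration:

import math

def f1(history, word):
    # Fires if the candidate word follows "the" and is a known noun (toy rule).
    return 1.0 if history and history[-1] == "the" and word in {"cat", "dog"} else 0.0

def f2(history, word):
    # Fires for any noun-like candidate (toy approximation of a feature).
    return 1.0 if word in {"cat", "dog", "mat"} else 0.0

features = [(f1, 1.5), (f2, 0.5)]  # (feature function, weight lambda_i)
vocab = ["cat", "dog", "mat", "the", "ran"]

def maxent_prob(word, history):
    """P(word | history) = exp(sum_i lambda_i * f_i(history, word)) / Z(history)."""
    def score(w):
        return math.exp(sum(lam * f(history, w) for f, lam in features))
    z = sum(score(w) for w in vocab)  # normalization constant Z(history)
    return score(word) / z

print(round(maxent_prob("cat", ["the"]), 3))  # boosted by both features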
10. Continuous Space:
• In this type of statistical model, words are represented as a non-linear combination of weights in a neural network. The process of assigning such a weight vector to a word is known as word embedding. This type of model proves helpful in scenarios where the data set of words continues to grow and includes unique words.
• In cases where the data set is large and consists of rarely used or unique words, linear models such as n-grams do not work well. This is because, as the number of words increases, the number of possible word sequences grows, and the patterns that predict the next word become weaker.
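• A minimal word-embedding sketch using Gensim's Word2Vec (Gensim is listed among the NLP libraries later in these notes). The corpus is a toy one and the parameter values are illustrative; parameter names follow Gensim 4.x:

from gensim.models import Word2Vec

sentences = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "sat", "on", "the", "rug"],
    ["a", "cat", "chased", "a", "dog"],
]

# vector_size: dimensionality of the embedding; window: context size.
model = Word2Vec(sentences=sentences, vector_size=50, window=2, min_count=1)

vector = model.wv["cat"]                     # continuous-space representation
print(vector.shape)                          # (50,)
print(model.wv.most_similar("cat", topn=2))  # nearest words in embedding space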
11. 2. Neural Language Models
• These language models are based on neural networks and are often considered an advanced approach to executing NLP tasks. Neural language models overcome the shortcomings of classical models such as n-grams and are used for complex tasks such as speech recognition or machine translation.
• Language is significantly complex and keeps on evolving. Therefore,
the more complex the language model is, the better it would be at
performing NLP tasks. Compared to the n-gram model, an
exponential or continuous space model proves to be a better option
for NLP tasks because they are designed to handle ambiguity and
language variation.
• Language models should also be able to manage dependencies; for example, a model should be able to understand words derived from different languages.
12. Some Common Examples of Language
Models
• 1. Speech Recognition
• Voice assistants such as Siri and Alexa are examples of how language models help
machines in processing speech audio.
• 2. Machine Translation
• Google Translate and Microsoft Translator are examples of how NLP models can help in translating one language to another.
• 3. Sentiment Analysis
• This helps in analyzing the sentiments behind a phrase. This use case of NLP models
is used in products that allow businesses to understand a customer’s intent behind
opinions or attitudes expressed in the text. Hubspot’s Service Hub is an example of
how language models can help in sentiment analysis.
• 4. Text Suggestions
• Google services such as Gmail or Google Docs use language models to help users get
text suggestions while they compose an email or create long text documents,
respectively.
• 5. Parsing Tools
• Parsing involves analyzing sentences or words that comply with syntax or grammar
rules. Spell checking tools are perfect examples of language modelling and parsing.
13. Natural Language Processing (NLP)
• NLP stands for Natural Language Processing, a field at the intersection of Computer Science, Human Language, and Artificial Intelligence. It is the technology used by machines to understand, analyse, manipulate, and interpret human languages.
• It helps developers to organize knowledge for performing tasks such as translation, automatic summarization, Named Entity Recognition (NER), speech recognition, relationship extraction, and topic segmentation.
14. Advantages of NLP
• NLP helps users to ask questions about any subject and
get a direct response within seconds.
• NLP offers exact answers to questions; it does not offer unnecessary or unwanted information.
• NLP helps computers to communicate with humans in
their languages.
• It is very time efficient.
• Most companies use NLP to improve the efficiency and accuracy of documentation processes and to identify information in large databases.
15. Disadvantages of NLP
• A list of disadvantages of NLP is given below:
• NLP may fail to capture context.
• NLP output can be unpredictable.
• NLP may require more keystrokes.
• Traditional NLP systems are unable to adapt to new domains and have limited functionality; this is why an NLP system is usually built for a single, specific task.
16. Components of NLP
• Natural Language Understanding (NLU)
• Natural Language Generation (NLG)
• 1. Natural Language Understanding (NLU)
• Natural Language Understanding (NLU) helps the machine to understand and
analyse human language by extracting the metadata from content such as
concepts, entities, keywords, emotion, relations, and semantic roles.
• NLU is mainly used in business applications to understand the customer's problem in both spoken and written language.
• NLU involves the following tasks −
• Mapping the given input into a useful representation.
• Analyzing different aspects of the language.
• 2. Natural Language Generation (NLG)
• Natural Language Generation (NLG) acts as a translator that converts computerized data into a natural language representation. It mainly involves text planning, sentence planning, and text realization.
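• A small NLU sketch using spaCy (listed among the NLP libraries later in these notes). It assumes the en_core_web_sm model has been installed separately; the sample sentence is made up:

import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple is opening a new office in Kanpur next year.")

# Entities and part-of-speech tags are examples of the metadata NLU extracts.
for ent in doc.ents:
    print(ent.text, ent.label_)    # e.g. Apple -> ORG, Kanpur -> GPE
for token in doc:
    print(token.text, token.pos_)  # word-level grammatical analysis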
17. Applications of NLP
• There are the following applications of NLP -
• 1. Question Answering
• Question Answering focuses on building
systems that automatically answer the
questions asked by humans in a natural
language.
18. 2. Spam Detection
• Spam detection is used to detect unwanted e-mails before they reach a user's inbox.
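• A minimal spam-detection sketch with Scikit-learn (mentioned later in these notes); the tiny training set and its labels are purely illustrative:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = [
    "win a free prize now",
    "limited offer click here",
    "meeting at 10 am tomorrow",
    "please review the attached report",
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(emails)   # bag-of-words features

clf = MultinomialNB().fit(X, labels)   # train a naive Bayes classifier
test = vectorizer.transform(["free offer, click now"])
print(clf.predict(test))               # [1] -> classified as spam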
19. Applications of NLP
• 3. Sentiment Analysis
• Sentiment Analysis is also known as opinion mining. It is used on the web to analyse the attitude, behaviour, and emotional state of the sender.
• This application is implemented through a combination of NLP and statistics by assigning values to the text (positive, negative, or neutral) and identifying the mood of the context (happy, sad, angry, etc.).
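• A one-line sentiment check is possible with TextBlob (listed among the NLP libraries later in these notes); polarity ranges from -1 (negative) to +1 (positive), and the sample sentences are made up:

from textblob import TextBlob

print(TextBlob("The service was excellent and very helpful.").sentiment.polarity)
print(TextBlob("This was a terrible experience.").sentiment.polarity)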
20. Applications of NLP
• 4. Machine Translation
• Machine translation is used to translate text
or speech from one natural language to
another natural language.
• Example: Google Translate
• 5. Spelling correction
• Microsoft Corporation provides word processor software such as MS Word and PowerPoint with built-in spelling correction.
21. Applications of NLP
• 6. Speech Recognition
• Speech recognition is used for converting spoken
words into text. It is used in applications, such as
mobile, home automation, video recovery,
dictating to Microsoft Word, voice biometrics,
voice user interface, and so on.
• 7. Chatbot
• Implementing chatbots is one of the important applications of NLP. Chatbots are used by many companies to provide customer chat services.
22. Applications of NLP
• 8. Information extraction
• Information extraction is one of the most important
applications of NLP. It is used for extracting structured
information from unstructured or semi-structured
machine-readable documents.
• 9. Natural Language Understanding (NLU)
• It converts large sets of text into more formal representations, such as first-order logic structures, that are easier for computer programs to manipulate.
24. Phases of NLP
• 1. Lexical and Morphological Analysis
• The first phase of NLP is lexical analysis. This phase scans the input text as a stream of characters and converts it into meaningful lexemes. It divides the whole text into paragraphs, sentences, and words.
• 2. Syntactic Analysis (Parsing)
• Syntactic analysis is used to check grammar and word arrangement, and it shows the relationships among the words.
• Example: "Agra goes to the Poonam"
• In the real world, "Agra goes to the Poonam" does not make any sense, so this sentence is rejected by the syntactic analyzer.
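• A sketch of the first two phases using NLTK (listed among the NLP libraries later in these notes); it assumes the punkt and averaged_perceptron_tagger resources have been downloaded beforehand, and the sample sentence is made up:

import nltk

text = "The students read the digital notes."

tokens = nltk.word_tokenize(text)  # lexical analysis: text -> word tokens
print(tokens)

tagged = nltk.pos_tag(tokens)      # a step toward syntactic analysis:
print(tagged)                      # each token receives a part-of-speech tag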
25. Phases of NLP
• 3. Semantic Analysis
• Semantic analysis is concerned with the
meaning representation. It mainly focuses on
the literal meaning of words, phrases, and
sentences.
• 4. Discourse Integration
• Discourse integration means that the interpretation of a sentence depends upon the sentences that precede it and may also invoke the meaning of the sentences that follow it.
26. • 5. Pragmatic Analysis
• Pragmatic analysis is the fifth and last phase of NLP. It helps you discover the intended effect of an utterance by applying a set of rules that characterize cooperative dialogues.
• For Example: "Open the door" is interpreted
as a request instead of an order.
27. NLP Libraries
• Scikit-learn: It provides a wide range of algorithms for building machine learning models in Python.
• Natural language Toolkit (NLTK): NLTK is a complete toolkit for all
NLP techniques.
• Pattern: It is a web mining module for NLP and machine learning.
• TextBlob: It provides an easy interface to learn basic NLP tasks like
sentiment analysis, noun phrase extraction, or pos-tagging.
• Quepy: Quepy is used to transform natural language questions into
queries in a database query language.
• SpaCy: SpaCy is an open-source NLP library which is used for Data
Extraction, Data Analysis, Sentiment Analysis, and Text
Summarization.
• Gensim: Gensim works with large datasets and processes data
streams.
29. History of NLP
• (1940-1960) - Focused on Machine Translation (MT)
• Work on natural language processing started in the 1940s.
• 1948 − In 1948, the first recognisable NLP application was introduced at Birkbeck College, London.
• 1950s − In the 1950s, there was a conflicting view between linguistics and computer science. Noam Chomsky then published his book Syntactic Structures and claimed that language is generative in nature.
• In 1957, Chomsky also introduced the idea of Generative Grammar, which is a rule-based description of syntactic structures.
• (1960-1980) − Flavored with Artificial Intelligence (AI)
• In the years 1960 to 1980, the key developments were:
30. Augmented Transition Networks
(ATN)
• An Augmented Transition Network (ATN) extends the finite state machine with registers and recursive subnetworks, enabling it to recognize languages more complex than regular languages.
• Case Grammar was developed by Linguist Charles J.
Fillmore in the year 1968. Case Grammar uses languages
such as English to express the relationship between nouns
and verbs by using the preposition.
• In Case Grammar, case roles can be defined to link certain
kinds of verbs and objects.
• For example: "Neha broke the mirror with the hammer". In this example, case grammar identifies Neha as an agent, the mirror as a theme, and the hammer as an instrument.
• In the year 1960 to 1980, key systems were:
31. • SHRDLU
• SHRDLU is a program written by Terry Winograd in 1968-70. It allowed users to communicate with the computer and move objects in a simulated blocks world. It could handle instructions such as "pick up the green ball" and also answer questions like "What is inside the black box?" The main importance of SHRDLU is that it showed that syntax, semantics, and reasoning about the world can be combined to produce a system that understands natural language.
• LUNAR
• LUNAR is the classic example of a natural language database interface system; it used ATNs and Woods' Procedural Semantics. It was capable of translating elaborate natural language expressions into database queries and handled 78% of requests without errors.
32. 1980 - Current
• Till the year 1980, natural language processing systems were based
on complex sets of hand-written rules. After 1980, NLP introduced
machine learning algorithms for language processing.
• In the early 1990s, NLP started growing faster and achieved good processing accuracy, especially for English grammar. In the 1990s, large electronic text corpora were also introduced, which provided a good resource for training and evaluating natural language programs. Other factors include the availability of computers with faster CPUs and more memory. A major factor behind the advancement of natural language processing was the Internet.
• Modern NLP consists of various applications, such as speech recognition, machine translation, and machine text reading. Combining all these applications allows artificial intelligence to gain knowledge of the world. Consider the example of Amazon Alexa: you can ask Alexa a question, and it will reply to you.
33. Machine Translation (MT)
• Machine translation (MT) is a process where a
computer program automatically translates text from
one source language to a different target language.
Machine language translation has a long and
interesting history dating back to the 1950s.
• Over time, the technology has developed into a viable
solution for fast and accurate translations. Advances in
artificial intelligence (AI), natural language processing
(NLP), and computing capabilities brought machine
translation into the mainstream.
34. Benefits of Machine Translation (MT)
• Machine translation is an indispensable tool in the translation process. It can be
used alone or in combination with human post-editing. MT offers three primary
benefits for your translation workflows:
• Fast Translation Speed
• Machine translation can translate millions of words for high-volume translation
projects. But speed isn’t the only benefit! MT uses AI to get smarter as more
content is translated. Plus, MT can work with a TMS to manage and tag high-
volume content. This helps you stay organized when you need to quickly translate
content into multiple languages.
• Excellent Language Selection
• Most major machine language translation providers can translate 50-100
languages. These programs are powerful enough to translate multiple languages at
once so you can roll out global products and documentation updates. MT is well-
suited to language pairs such as English to French or English to Spanish.
• Reduced Costs
• Even when human translators are needed for post-editing, MT cuts translation
delivery times and costs. MT takes care of the initial heavy lifting by producing
basic but useful translations, which a human translator can refine and edit. This
way, the finished versions will adhere more closely to the text’s original intent, and
the content can be effectively localized.
35. Types of Machine Translation
• There are four different types of machine translation −
• Statistical Machine Translation (SMT)
• Rule-based Machine Translation (RBMT)
• Hybrid Machine Translation (HMT)
• Neural Machine Translation (NMT)
36. • Rule-Based Machine Translation (RBMT)
• RBMT, the earliest form of MT, translates content based on grammatical rules. Because there have been significant advances in machine translation technology since RBMT was developed, it has a few disadvantages. These drawbacks include the need for large amounts of human post-editing and the need to add languages manually. Despite this low translation quality, RBMT is useful in basic situations where a quick understanding of meaning is all that is required.
• Statistical Machine Translation (SMT)
• SMT works by building a statistical model of the relationships
between text words, phrases, and sentences. It then applies this
translation model to a second language and converts the same
elements to the new language. SMT improves somewhat on RBMT
but still shares many of the same problems.
37. • Hybrid Machine Translation (HMT)
• HMT is a blend of RBMT and SMT. HMT leverages a
translation memory, making it far more effective in terms
of quality. However, even HMT has its share of drawbacks,
the greatest of which is the need for human editing.
• Neural Machine Translation (NMT)
• NMT employs artificial intelligence to learn languages and
improve that knowledge constantly. In this way, it strives to
mimic the neural networks in the human brain.
• NMT is more accurate than other types of AI translation.
With NMT, it’s easier to add languages and translate
content. Because NMT provides better translations, it is
rapidly becoming the standard in MT tool development.
38. Machine Translation Engines
Google Translate was the first MT engine to use neural language processing and to employ machine learning from repeated use. It is generally considered one of the leading machine translation engines based on usage, number of languages, and integration with search.
39. • Amazon Translate is closely integrated with Amazon Web Services (AWS). Some evidence suggests Amazon Translate provides more accurate translations of certain languages, notably Chinese.
• Microsoft Translator integrates with products like MS Office and Skype. This feature provides instant access to translation in documents and compatible programs.
40. • The Watson Language Translator is the MT tool from IBM. It integrates with IBM
Watson Data and IBM Watson Studio. These tools help manage data and build AI
models.
• DeepL Translate is an independent MT engine produced by a small company in Germany. Thanks to the company's proprietary neural AI, DeepL provides natural-sounding and nuanced translations. Worldwide use of DeepL has vastly increased in recent years.
41. Speech Recognition
• Speech Recognition or Automatic Speech Recognition
(ASR) is the center of attention for AI projects like robotics.
• Without ASR, it is not possible to imagine a cognitive robot interacting with a human. However, it is not easy to build a speech recognizer.
• Speech recognition, also known as automatic speech
recognition (ASR), computer speech recognition, or speech-
to-text, is a capability which enables a program to process
human speech into a written format.
• While it’s commonly confused with voice recognition,
speech recognition focuses on the translation of speech
from a verbal format to a text one whereas voice
recognition just seeks to identify an individual user’s voice.
42. Speech recognition algorithms
• The vagaries of human speech have made development challenging. It’s
considered to be one of the most complex areas of computer science –
involving linguistics, mathematics and statistics. Speech recognizers are
made up of a few components, such as the speech input, feature
extraction, feature vectors, a decoder, and a word output. The decoder
leverages acoustic models, a pronunciation dictionary, and language
models to determine the appropriate output.
• Speech recognition technology is evaluated on its accuracy rate, i.e. word
error rate (WER), and speed. A number of factors can impact word error
rate, such as pronunciation, accent, pitch, volume, and background noise.
Reaching human parity – meaning an error rate on par with that of two humans speaking – has long been the goal of speech recognition systems. Research from Lippmann estimates the human word error rate to be around 4 percent, but it has been difficult to replicate the results from this paper.
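• A minimal sketch of how WER can be computed, using word-level edit distance: WER = (substitutions + deletions + insertions) / number of reference words. The function and the example transcripts are made up for these notes:

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between first i reference and j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("please order the pizza", "please order a pizza"))  # 0.25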
43. Speech recognition algorithms
• Various algorithms and computation techniques are
used to recognize speech into text and improve the
accuracy of transcription. Below are brief explanations
of some of the most commonly used methods:
• Natural language processing (NLP): While NLP isn’t necessarily a specific algorithm used in speech recognition, it is the area of artificial intelligence which focuses on the interaction between humans and machines through language, both speech and text. Many mobile devices incorporate speech recognition into their systems to conduct voice search (e.g. Siri) or to provide more accessibility around texting.
44. Speech recognition algorithms
• Hidden Markov models (HMM): Hidden Markov models build on the Markov chain model, which stipulates that the probability of a given state hinges on the current state, not on its prior states.
• While a Markov chain model is useful for observable events, such as text inputs, hidden Markov models allow us to incorporate hidden events, such as part-of-speech tags, into a probabilistic model.
• They are utilized as sequence models within speech
recognition, assigning labels to each unit—i.e. words,
syllables, sentences, etc.—in the sequence. These labels
create a mapping with the provided input, allowing it to
determine the most appropriate label sequence.
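• A compact Viterbi sketch for a hidden Markov model; the states, observations, and probability tables below are made up purely for illustration:

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely hidden-state sequence for the observations."""
    # V[t][s] = (best probability of ending in state s at time t, best path)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, path = max(
                ((V[t - 1][prev][0] * trans_p[prev][s] * emit_p[s][obs[t]],
                  V[t - 1][prev][1] + [s]) for prev in states),
                key=lambda x: x[0])
            V[t][s] = (prob, path)
    return max(V[-1].values(), key=lambda x: x[0])[1]

states = ("Noun", "Verb")
start_p = {"Noun": 0.6, "Verb": 0.4}
trans_p = {"Noun": {"Noun": 0.3, "Verb": 0.7}, "Verb": {"Noun": 0.8, "Verb": 0.2}}
emit_p = {"Noun": {"dogs": 0.7, "bark": 0.3}, "Verb": {"dogs": 0.1, "bark": 0.9}}

print(viterbi(["dogs", "bark"], states, start_p, trans_p, emit_p))  # ['Noun', 'Verb']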
45. • N-grams: This is the simplest type of language
model (LM), which assigns probabilities to
sentences or phrases.
• An N-gram is a sequence of N words. For example, “order the pizza” is a trigram or 3-gram and “please order the pizza” is a 4-gram. Grammar and the probability of certain word sequences are used to improve recognition accuracy.
• Speaker Diarization (SD): Speaker diarization algorithms identify and segment speech by speaker identity. This helps programs better distinguish individuals in a conversation and is frequently applied at call centers to distinguish customers from sales agents.
• Neural networks: Primarily leveraged for deep learning algorithms, neural
networks process training data by mimicking the interconnectivity of the
human brain through layers of nodes. Each node is made up of inputs,
weights, a bias (or threshold) and an output.
• If that output value exceeds a given threshold, it “fires” or activates the
node, passing data to the next layer in the network. Neural networks learn
this mapping function through supervised learning, adjusting based on the
loss function through the process of gradient descent. While neural
networks tend to be more accurate and can accept more data, this comes
at a performance efficiency cost as they tend to be slower to train
compared to traditional language models.
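• A single neural-network node as described above, sketched with made-up toy values: weighted inputs plus a bias, passed through a simple threshold activation:

def node(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0  # "fires" (1) only above the threshold

# Toy values chosen for illustration: 0.5*0.8 + 0.9*(-0.2) - 0.1 = 0.12 > 0
print(node([0.5, 0.9], [0.8, -0.2], bias=-0.1))  # 1 -> the node fires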
47. What are Robots?
• Robots are artificial agents acting in a real-world environment.
• Robots are aimed at manipulating objects by perceiving, picking, moving, and modifying their physical properties, destroying them, or otherwise having an effect, thereby freeing manpower from repetitive functions, which robots perform without getting bored, distracted, or exhausted.
48. What is Robotics?
• Robotics is a branch of AI which draws on Electrical Engineering, Mechanical Engineering, and Computer Science for the design, construction, and application of robots.
• Aspects of Robotics
• The robots have mechanical construction, form, or
shape designed to accomplish a particular task.
• They have electrical components which power and
control the machinery.
• They contain some level of computer programming that determines what, when, and how a robot does something.
50. Robot Locomotion
• Locomotion is the mechanism that makes a robot capable of moving in its environment. There are various types of locomotion −
• Legged
• Wheeled
• Combination of Legged and Wheeled
Locomotion
• Tracked slip/skid
51. Legged Locomotion
• This type of locomotion consumes more power while demonstrating walking, jumping, trotting, hopping, climbing up or down, etc.
• It requires more motors to accomplish a movement. It is suited for rough as well as smooth terrain, where an irregular or too-smooth surface would make wheeled locomotion consume more power. It is a little difficult to implement because of stability issues.
• Legged robots come in varieties with one, two, four, or six legs. If a robot has multiple legs, then leg coordination is necessary for locomotion.
• The total number of possible gaits (a periodic sequence of lift and release events for each of the total legs) a robot can travel depends upon the number of its legs.
• If a robot has k legs, then the number of possible events is N = (2k - 1)!.
• In the case of a two-legged robot (k = 2), the number of possible events is N = (2k - 1)! = (2×2 - 1)! = 3! = 6.
52. Hence there are six possible different
events
• Lifting the Left leg
• Releasing the Left leg
• Lifting the Right leg
• Releasing the Right leg
• Lifting both the legs together
• Releasing both the legs together
• In the case of k = 6 legs, there are (2×6 - 1)! = 11! = 39,916,800 possible events. Hence the complexity of a robot grows very rapidly (factorially) with the number of legs.
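• The gait-event count from the formula above, N = (2k - 1)! for k legs, can be checked directly (a tiny sketch written for these notes):

from math import factorial

def gait_events(k: int) -> int:
    return factorial(2 * k - 1)  # N = (2k - 1)!

print(gait_events(2))  # 6
print(gait_events(6))  # 39916800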
53. Wheeled Locomotion
• It requires fewer motors to accomplish a movement. It is easier to implement, as there are fewer stability issues when more wheels are used. It is power efficient compared to legged locomotion.
• Standard wheel − Rotates around the wheel axle and around the contact point.
• Castor wheel − Rotates around the wheel axle
and the offset steering joint.
• Swedish 45° and Swedish 90° wheels − Omni-wheels that rotate around the contact point, around the wheel axle, and around the rollers.
• Ball or spherical wheel − Omnidirectional
wheel, technically difficult to implement.
54. Slip/Skid Locomotion
• In this type, the vehicle uses tracks, as in a tank. The robot is steered by moving the tracks at different speeds in the same or opposite directions. It offers stability because of the large contact area between track and ground.
55. Components of a Robot
• Robots are constructed with the following −
• Power Supply − The robots are powered by batteries, solar power,
hydraulic, or pneumatic power sources.
• Actuators − They convert energy into movement.
• Electric motors (AC/DC) − They are required for rotational movement.
• Pneumatic Air Muscles − They contract by almost 40% when air is sucked into them.
• Muscle Wires − They contract by 5% when electric current is passed
through them.
• Piezo Motors and Ultrasonic Motors − Best for industrial robots.
• Sensors − They provide real-time information about the task environment. Robots are equipped with vision sensors to compute the depth of the environment. A tactile sensor imitates the mechanical properties of the touch receptors of human fingertips.
56. Computer Vision
• This is a technology of AI with which robots can see. Computer vision plays a vital role in the domains of safety, security, health, access, and entertainment.
• Computer vision automatically extracts, analyzes, and comprehends
useful information from a single image or an array of images. This process
involves development of algorithms to accomplish automatic visual
comprehension.
• Hardware of Computer Vision System
• This involves −
• Power supply
• An image acquisition device, such as a camera
• A processor
• Software
• A display device for monitoring the system
• Accessories, such as camera stands, cables, and connectors
57. Tasks of Computer Vision
• OCR − Optical Character Recognition: software that converts scanned documents into editable text; it usually accompanies a scanner.
• Face Detection − Many state-of-the-art cameras come with this feature, which enables them to detect a face and take the picture at the perfect expression. It is also used to grant a user access to software on a correct match.
• Object Recognition − Object recognition systems are installed in supermarkets, cameras, and high-end cars such as those from BMW, GM, and Volvo.
• Estimating Position − This means estimating the position of an object with respect to the camera, for example the position of a tumor in a human body.
58. Application Domains of Computer
Vision
• Agriculture
• Autonomous vehicles
• Biometrics
• Character recognition
• Forensics, security, and surveillance
• Industrial quality inspection
• Face recognition
• Gesture analysis
• Geoscience
• Medical imagery
• Pollution monitoring
• Process control
• Remote sensing
• Robotics
• Transport
59. Applications of Robotics
• Robotics has been instrumental in various domains, such as −
• Industries − Robots are used for handling material, cutting, welding,
color coating, drilling, polishing, etc.
• Military − Autonomous robots can reach inaccessible and hazardous zones during war. A robot named Daksh, developed by the Defence Research and Development Organisation (DRDO), is in service to destroy life-threatening objects safely.
• Medicine − Robots are capable of carrying out hundreds of clinical tests simultaneously, rehabilitating permanently disabled people, and performing complex surgeries such as brain tumor removal.
• Exploration − Robot rock climbers used for space exploration and underwater drones used for ocean exploration are a few examples.
• Entertainment − Disney’s engineers have created hundreds of
robots for movie making.