The document discusses the history and development of artificial intelligence from its origins in 1956 to modern applications. It covers many foundational concepts in AI research including knowledge representation, machine learning, natural language processing, computer vision, robotics, and the challenges of achieving human-level artificial general intelligence. The document also provides examples of notable AI systems developed for games, speech recognition, and expert systems.
Artificial intelligence (AI) is a technology and a branch of computer science that studies and develops intelligent machines and software. Major AI researchers and textbooks define the field as "the study and design of intelligent agents", where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. John McCarthy, who coined the term in 1955, defines it as "the science and engineering of making intelligent machines".
The Foundations of Artificial Intelligence, The History of Artificial Intelligence, and the State of the Art. Intelligent Agents: Introduction, How Agents Should Act, Structure of Intelligent Agents, Environments. Solving Problems by Searching: Problem-Solving Agents, Formulating Problems, Example Problems, Searching for Solutions, Search Strategies, Avoiding Repeated States, and Constraint Satisfaction Search. Informed Search Methods: Best-First Search, Heuristic Functions, Memory-Bounded Search, and Iterative Improvement Algorithms.
Describe what artificial intelligence is, its goals and approaches, and the different types of artificial intelligence. Explain machine learning, illustrated with one algorithm, the k-means algorithm.
AI is the simulation of human intelligence processes by machines, especially computer systems. These processes include learning (the acquisition of information and rules for the use of information), reasoning (using the rules to reach approximate or definite conclusions) and self-correction. Particular applications of AI include expert systems, speech recognition and machine vision.
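The k-means algorithm named above can be sketched in a few lines of Python. This is a minimal illustration of Lloyd's algorithm, not a production implementation; for simplicity it initializes the centroids from the first k points (real implementations usually seed randomly), and all function and variable names here are invented for the example.

```python
def kmeans(points, k, iters=100):
    """Lloyd's algorithm: alternate between assigning each point to its
    nearest centroid and moving each centroid to the mean of its cluster.
    Centroids are seeded with the first k points for simplicity."""
    centroids = [points[i] for i in range(k)]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assignment step: group each point with its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda i: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[i])))
            clusters[i].append(p)
        # Update step: each centroid becomes the mean of its cluster
        # (an empty cluster keeps its old centroid).
        new = [tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centroids[i]
               for i, cl in enumerate(clusters)]
        if new == centroids:  # assignments stopped changing: converged
            break
        centroids = new
    return centroids, clusters
```

Run on two well-separated blobs, the two centroids settle near the blob means after a couple of iterations.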
Define artificial intelligence.
Mention the four approaches to AI.
What capabilities must a computer have in order to exhibit artificial intelligence?
Mention the foundations of AI.
Give a crude comparison of the raw computational resources available to computers and to the human brain.
Briefly explain the history of AI.
What are rational action and an intelligent agent?
3. The term A.I. belongs to fifth-generation computer systems, in which the system works in the same manner as a human being. In another sense, we can define it as the study and design of intelligent agents.
Rahul - CA Final Student
4. The field of AI research was founded at a conference on the campus of
Dartmouth College in the summer of 1956. The attendees, including John
McCarthy, Marvin Minsky, Allen Newell and Herbert Simon, became the
leaders of AI research for many decades. They and their students wrote
programs that were, to most people, simply astonishing: computers were
solving word problems in algebra, proving logical theorems and speaking
English.
By the middle of the 1960s, research in the U.S. was heavily funded by the
Department of Defense and laboratories had been established around the
world. AI's founders were profoundly optimistic about the future of the new
field: Herbert Simon predicted that "machines will be capable, within twenty
years, of doing any work a man can do" and Marvin Minsky agreed, writing
that "within a generation ... the problem of creating 'artificial intelligence' will
substantially be solved".
In 1974, in response to the criticism of England's Sir James Lighthill and
ongoing pressure from Congress to fund more productive projects, the U.S.
and British governments cut off all undirected, exploratory research in AI.
The next few years, when funding for projects was hard to find, would later
be called an "AI winter".
5. In the early 1980s, AI research was revived by the commercial success of
expert systems, a form of AI program that simulated the knowledge and
analytical skills of one or more human experts. By 1985 the market for
AI had reached over a billion dollars. At the same time, Japan's fifth
generation computer project inspired the U.S. and British governments to
restore funding for academic research in the field. However, beginning
with the collapse of the Lisp Machine market in 1987, AI once again fell
into disrepute, and a second, longer lasting AI winter began.
In the 1990s and early 21st century, AI achieved its greatest successes,
albeit somewhat behind the scenes. Artificial intelligence is used for
logistics, data mining, medical diagnosis and many other areas
throughout the technology industry. The success was due to several
factors: the incredible power of computers today (see Moore's law), a
greater emphasis on solving specific subproblems, the creation of new
ties between AI and other fields working on similar problems, and above
all a new commitment by researchers to solid mathematical methods and
rigorous scientific standards.
6. Automated planning and scheduling
Knowledge representation
Commonsense knowledge
Natural language processing
Reasoning, problem solving
Machine learning
Motion and manipulation
Social intelligence
Computational creativity
General intelligence
7. Intelligent agents must be able to set goals and achieve
them. They need a way to visualize the future (they must
have a representation of the state of the world and be able
to make predictions about how their actions will change it)
and be able to make choices that maximize the utility (or
"value") of the available choices.
In classical planning problems, the agent can assume that it
is the only thing acting on the world and it can be certain
what the consequences of its actions may be. However, if
this is not true, it must periodically check if the world
matches its predictions and it must change its plan as this
becomes necessary, requiring the agent to reason under
uncertainty.
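The plan-and-monitor behaviour described on this slide can be sketched as a one-step look-ahead loop. The interfaces below are hypothetical names introduced purely for illustration: `transition` stands for the agent's predictive model of the world, `real_step` for what actually happens, and `utility` for the value the agent assigns to a state.

```python
def choose_action(state, actions, transition, utility):
    """Pick the action whose predicted successor state has the highest
    utility: the agent 'visualizes the future' one step ahead."""
    return max(actions, key=lambda a: utility(transition(state, a)))

def run(state, goal, actions, transition, utility, real_step, max_steps=20):
    """Act until the goal is reached.  Because the agent re-chooses from
    the *actual* resulting state after every step, any mismatch between
    prediction and reality is handled by replanning automatically."""
    for _ in range(max_steps):
        if state == goal:
            break
        action = choose_action(state, actions, transition, utility)
        state = real_step(state, action)  # reality may differ from transition()
    return state
```

In a toy number-line world (state is an integer, actions are +1/-1, utility is negative distance to the goal), the loop walks straight to the goal.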
8. The most difficult problems in knowledge representation are:
• Default reasoning
• The qualification problem
Knowledge representation and knowledge engineering are central to
AI research. Many of the problems machines are expected to solve
will require extensive knowledge about the world. Among the things
that AI needs to represent are: objects, properties, categories and
relations between objects; situations, events, states and time; causes
and effects; knowledge about knowledge (what we know about what
other people know); and many other, less well researched domains. A
complete representation of "what exists" is an ontology (borrowing a
word from traditional philosophy), of which the most general are
called upper ontologies.
9. The breadth of commonsense knowledge
The number of atomic facts that the average person knows is astronomical.
Research projects that attempt to build a complete knowledge base of
commonsense knowledge (e.g. Cyc) require enormous amounts of laborious
ontological engineering — they must be built, by hand, one complicated
concept at a time. A major goal is to have the computer understand enough
concepts to be able to learn by reading from sources like the internet, and thus
be able to add to its own ontology.
The subsymbolic form of commonsense knowledge
Much of what people know is not represented as "facts" or "statements" that
they could actually say out loud. For example, a chess master will avoid a
particular chess position because it "feels too exposed" or an art critic can take
one look at a statue and instantly realize that it is a fake. These are intuitions or
tendencies that are represented in the brain non-consciously and sub-
symbolically. Knowledge like this informs, supports and provides a context for
symbolic, conscious knowledge. As with the related problem of sub-symbolic
reasoning, it is hoped that situated AI or computational intelligence will
provide ways to represent this kind of knowledge.
10. Natural language processing gives machines the ability to read
and understand the languages that humans speak.
Many researchers hope that a
sufficiently powerful natural
language processing system would
be able to acquire knowledge on its
own, by reading the existing text
available over the internet. Some
straightforward applications of
natural language processing include
information retrieval (or text mining)
and machine translation.
11. Early AI researchers developed algorithms that imitated the step-by-step
reasoning that humans use when they solve puzzles, play board games or make
logical deductions. By the late 1980s and '90s, AI research had also developed
highly successful methods for dealing with uncertain or incomplete information,
employing concepts from probability and economics.
For difficult problems, most of these algorithms can require enormous
computational resources — most experience a "combinatorial explosion": the
amount of memory or computer time required becomes astronomical when the
problem goes beyond a certain size. The search for more efficient problem
solving algorithms is a high priority for AI research.
Human beings solve most of their problems using fast, intuitive judgments rather
than the conscious, step-by-step deduction that early AI research was able to
model. AI has made some progress at imitating this kind of "sub-symbolic"
problem solving: embodied agent approaches emphasize the importance of
sensorimotor skills to higher reasoning; neural net research attempts to simulate the structures inside human and animal brains that give rise to this skill.
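The combinatorial-explosion point can be made concrete with the simplest search algorithm. The sketch below is a generic breadth-first search over any graph supplied via a `neighbors` function (an invented interface for this example); its `explored` set is the standard guard against repeated states, without which any graph containing cycles would blow up the frontier.

```python
from collections import deque

def bfs(start, goal, neighbors):
    """Breadth-first search returning the shortest path from start to
    goal.  The explored set is what 'avoiding repeated states' buys:
    each state enters the frontier at most once, so the search cost is
    bounded by the number of distinct states rather than the number of
    paths, which can be exponentially larger."""
    frontier = deque([[start]])
    explored = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in neighbors(path[-1]):
            if nxt not in explored:
                explored.add(nxt)
                frontier.append(path + [nxt])
    return None  # goal unreachable
```

On a small diamond-shaped graph, the search finds one of the two-hop paths and never revisits the shared successor.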
12. Machine learning has been central to AI research from the
beginning. Unsupervised learning is the ability to find patterns in
a stream of input. Supervised learning includes both classification
and numerical regression. Classification is used to determine
what category something belongs in, after seeing a number of
examples of things from several categories. Regression takes a
set of numerical input/output examples and attempts to discover a
continuous function that would generate the outputs from the
inputs. In reinforcement learning the agent is rewarded for good
responses and punished for bad ones. These can be analyzed in
terms of decision theory, using concepts like utility. The
mathematical analysis of machine learning algorithms and their
performance is a branch of theoretical computer science known
as computational learning theory.
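The distinction between regression and classification drawn above can be illustrated with two tiny sketches: a least-squares line fit (regression recovering a continuous function from numerical input/output pairs) and a one-nearest-neighbour rule (classification by proximity to labelled examples). Both are minimal illustrations; the data and names are invented for the example.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b: regression in its
    simplest form, recovering a continuous function from examples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def classify(x, examples):
    """1-nearest-neighbour classification: label a new point with the
    category of the closest labelled example (x, label)."""
    return min(examples, key=lambda e: abs(e[0] - x))[1]
```

Fitting points drawn from y = 2x + 1 recovers the slope and intercept exactly, and the classifier labels a new point by its nearest neighbour.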
13. The field of robotics is closely related to AI.
Intelligence is required for robots to be able
to handle such tasks as object
manipulation and navigation,
with sub-problems of localization
(knowing where you are),
mapping (learning what is around you) and
motion planning (figuring out how to get there)
14. Emotion and social skills play two roles for an intelligent
agent.
• First, it must be able to predict the actions of others, by
understanding their motives and emotional states. (This
involves elements of game theory, decision theory, as well
as the ability to model human emotions and the perceptual
skills to detect emotions.)
• Also, for good human-computer interaction, an intelligent
machine also needs to display emotions. At the very least it
must appear polite and sensitive to the humans it interacts
with. At best, it should have normal emotions itself
• Example is Kismet, a robot with rudimentary social skills
15. A sub-field of AI addresses creativity both theoretically (from a philosophical and psychological perspective) and practically (via specific implementations of systems that generate outputs that can be considered creative).
TOPIO, a robot that can play table tennis, developed by TOSY.
16. Most researchers hope that their work will eventually be incorporated into a
machine with general intelligence (known as strong AI), combining all the
skills above and exceeding human abilities at most or all of them. A few
believe that anthropomorphic features like artificial consciousness or an
artificial brain may be required for such a project. A related area of
computational research is Artificial Intuition and Artificial Imagination.
Many of the problems above are considered AI-complete: to solve one
problem, you must solve them all. For example, even a straightforward,
specific task like machine translation requires that the machine follow the
author's argument (reason), know what is being talked about (knowledge),
and faithfully reproduce the author's intention (social intelligence). Machine
translation, therefore, is believed to be AI-complete: it may require strong AI
to be done as well as humans can do it.
17. 1 Brain simulation
2 Cognitive architectures
3 Games
4 Natural language processing
5 Speech recognition
6 Expert System
7 Handwriting Recognition
18. • Artificial_Intelligence_System, a large scale distributed
computing project involving over 10,000 computers
• HNeT (Holographic Neural Technology), a technology
by AND Corporation (Artificial Neural Devices), based on non-linear phase coherence/decoherence principles.
• Hierarchical Temporal Memory, a technology by
Numenta to capture and replicate the properties of the
neocortex.
19. • CALO, a DARPA-funded, 25-institution effort to integrate
numerous artificial intelligence approaches (natural language
processing, speech recognition, machine vision, probabilistic
logic, planning, reasoning, numerous forms of machine
learning) into an AI assistant that learns to help manage your
office environment.
• SHIAI (Semi Human Instinctive Artificial Intelligence), an AI
methodology based on the use of semi-human instincts,
developed at Islamic Azad University in 2004.
• Virtual Woman, the oldest continuous form of virtual life — a
chatterbot, virtual reality, artificial intelligence, video game,
and virtual human.
20. Games
• Chinook, a computer program that plays English draughts; the first
to win the world champion title in the competition against humans.
• Deep Blue, a chess-playing computer developed by IBM which
beat Garry Kasparov in 1997.
• FreeHAL, a self-learning conversation simulator (Chatterbot)
which uses semantic nets to organize its knowledge in order to closely imitate human behavior in conversations.
• Poki, research into computer poker by the University of Alberta.
• TD-Gammon, a program that learned to play world-class
backgammon partly by playing against itself (temporal difference
learning with neural networks)
21. AIML, an XML dialect for creating natural language software agents.
A.L.I.C.E., an award-winning natural language processing chatterbot.
ELIZA, a famous 1966 computer program by Joseph Weizenbaum, which parodied
person-centered therapy.
InfoTame, a text analysis search engine originally developed by the KGB for sorting
communications intercepts.
Jabberwacky, a chatterbot by Rollo Carpenter, aiming to simulate a natural human
chat.
KAR-Talk, a chatterbot by I.-A.Industrie.
PARRY, another early famous chatterbot, written in 1972 by Kenneth Colby,
attempting to simulate a paranoid schizophrenic.
Proverb, a system that can solve crossword puzzles better than most humans.
SHRDLU, an early natural language processing computer program developed by Terry
Winograd at MIT from 1968 to 1970.
START, the world's first web-based question answering system, developed at the MIT
CSAIL.
SYSTRAN, a machine translation technology by a company of the same name, used
by Yahoo!, AltaVista and Google, among others.
Texai, an open source project to create artificial intelligence, starting with a bootstrap
English dialog system that intelligently acquires knowledge and behaviors.
22. Speech recognition (also known as automatic speech recognition or
computer speech recognition) converts spoken words to text. The term
"voice recognition" is sometimes used to refer to speech recognition where
the recognition system is trained to a particular speaker - as is the case for
most desktop recognition software, hence there is an element of speaker
recognition, which attempts to identify the person speaking, to better
recognize what is being said. Speech recognition is a broad term which
means it can recognize almost anybody's speech - such as a call-centre
system designed to recognize many voices. Voice recognition is a system
trained to a particular user, where it recognizes their speech based on their
unique vocal sound.
Speech recognition applications include voice dialing (e.g., "Call home"),
call routing (e.g., "I would like to make a collect call"), domotic appliance
control and content-based spoken audio search (e.g., find a podcast where
particular words were spoken), simple data entry (e.g., entering a credit card
number), preparation of structured documents (e.g., a radiology report),
speech-to-text processing (e.g., word processors or emails), and in aircraft
cockpits (usually termed Direct Voice Input).
23. Expert system
An expert system is software that attempts to provide an answer to a problem, or clarify uncertainties, where normally one or more human experts would need to be consulted. Expert systems are most common in a specific problem domain and are a traditional application and/or subfield of artificial intelligence. Expert systems were introduced by Edward Feigenbaum and were the first truly successful form of AI software.
24. A wide variety of methods can be used to simulate the performance of the expert; however, common to most or all are:
• The creation of a so-called "knowledge base", which uses some knowledge representation formalism to capture the Subject Matter Expert's (SME) knowledge.
• A process of gathering that knowledge from the SME and codifying it according to the formalism, which is called knowledge engineering.
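A knowledge base of this kind can be illustrated, in heavily simplified form, by naive forward chaining over (premises, conclusion) rules. The rule format and the example facts below are invented for illustration; a real expert system would use a much richer knowledge representation formalism than plain strings.

```python
def forward_chain(facts, rules):
    """Naive forward chaining: repeatedly fire any rule whose premises
    are all known facts, adding its conclusion, until no new fact
    appears.  Each rule is a (premises, conclusion) pair, standing in
    for the knowledge a Subject Matter Expert would supply."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True  # a new fact may enable further rules
    return facts
```

Starting from two observed facts, the chain fires one rule, whose conclusion then enables a second rule.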
25. Expert Systems are computer programs that are
derived from a branch of computer science research called
Artificial Intelligence (AI).
AI's scientific goal is to understand intelligence by
building computer programs that exhibit intelligent behavior.
It is concerned with the concepts and methods of
symbolic inference or reasoning by a computer, and how
the knowledge used to make those inferences will be
represented inside the machine.
Of course, the term intelligence covers many cognitive
skills, including the ability to solve problems, learn, and
understand language; AI addresses all of those.
27. Handwriting recognition
• Handwriting recognition is the ability of a computer to receive and interpret intelligible handwritten input from sources such as paper documents, photographs, touchscreens and other devices.
• Handwriting recognition principally entails optical character recognition. However, a complete handwriting recognition system also handles formatting, performs correct segmentation into characters and finds the most plausible words.
28. Some images of handwriting recognition are shown in the next few slides:
32. Elements of on-line handwriting recognition
• The elements of an on-line handwriting
recognition interface typically include:
• a pen or stylus for the user to write with.
• a touch sensitive surface, which may be
integrated with, or adjacent to, an output
display.
• a software application which interprets the
movements of the stylus across the writing
surface, translating the resulting strokes into
digital text.
35. ROBOTS: A BASIC DEFINITION
• A robot is a virtual or mechanical artificial agent.
• It is an electromechanical machine which is guided by computer or electronic programming, and thus it is able to do tasks on its own.
36. TYPES OF ROBOTS
• NANO ROBOTS: researchers have only produced parts of these complex systems. These are also known as nanobots.
• RECONFIGURABLE ROBOTS: these robots can alter their physical form to suit a particular task. Example: T-1000.
• SOFT ROBOTS: these robots have silicone bodies and flexible air muscles.
• SWARM ROBOTS: these robots are inspired by the family of insects such as ants and bees. Example: I-robot swarm.
37. Positive & negative impacts of robots
• They perform jobs more cheaply, with greater accuracy and with more reliability than humans.
• They are widely used in manufacturing, packing, transport and laboratory work.
• Robots have their own entertainment value, but their unsafe use can also cause danger.
• A heavy industrial robot can cause harm to humans; for example, the death of Robert Williams was caused by a robot.
38. Some basic facts about robots
• Roughly half of the robots in the world are in Asia, of which 30% are in Japan, the robotic capital of the world.
• South Korea aims to put a robot in every house by 2015-2020, so that the country is as technologically equipped as Japan.