This document provides an overview of artificial intelligence, including definitions of key concepts like intelligence, knowledge, learning, and understanding. It then discusses the major branches of AI, such as robotics, vision systems, natural language processing, learning systems, neural networks, and expert systems. The history of AI achievements, such as RoboCup, Deep Blue, and the DARPA Grand Challenge, is also summarized. Finally, some current applications of AI, like driverless trains, burglar alarm systems, and automatic grading systems, are briefly mentioned.
The document discusses 12 major theories of intelligence:
1. Faculty theory which views intelligence as consisting of independent mental faculties.
2. One factor theory which reduces all abilities to a single general intelligence factor.
3. Spearman's two-factor theory comprising a general intelligence ("g") factor and specific factors.
4. Thorndike's multifactor theory which identified four attributes of intelligence.
5. Thurstone's primary mental abilities theory identifying six primary factors.
6. Guilford's structure of intellect model classifying intellectual tasks.
7. Vernon's hierarchical theory describing intelligence at varying levels of generality.
8. Cattell's fluid and crystallized theory distinguishing two types of intelligence: fluid and crystallized.
The document discusses various theories of intelligence proposed by different psychologists. It describes Charles Spearman's theory of general intelligence (g factor) and Louis Thurstone's theory of primary mental abilities. It also summarizes Howard Gardner's theory of multiple intelligences consisting of nine distinct intelligences. Robert Sternberg proposed successful intelligence involving analytical, practical and creative abilities. The document also discusses theories by David Perkins involving neural, experiential and reflective intelligence. It covers early research on quantifying mental ability by Galton and Binet's development of intelligence tests. It defines concepts like mental age, chronological age and intelligence quotient. The four branches of emotional intelligence - perceiving, using, understanding and managing emotions - are outlined.
The document discusses Howard Gardner's theory of multiple intelligences. It begins by defining intelligence and outlining some of the major theories of intelligence that preceded Gardner's work. It then explains how Gardner's theory emerged from criticisms of standardized testing. The theory proposes that there are eight distinct intelligences: linguistic, logical-mathematical, spatial, bodily-kinesthetic, musical, interpersonal, intrapersonal, and naturalist. Examples are given of each intelligence along with potential career paths that make use of each intelligence. The document concludes by discussing criticisms of the theory and its implications for education.
White (2004), "The Myth of Multiple Intelligences" (Emma Grice)
This lecture examines Howard Gardner's theory of multiple intelligences and argues it may be a myth rather than reality. The theory identifies eight or nine types of intelligence, but the key question is whether there is good evidence they exist as distinct intelligences. Gardner's criteria for identifying an intelligence rely on problematic assumptions about development and symbolism. Specifically, 1) the notion of innate capacities unfolding into mature states does not translate well to mental abilities and 2) considering works of art as symbols is questionable in aesthetics. As a result, Gardner's criteria do not provide strong empirical support for his theory of multiple intelligences.
The document discusses different aspects of knowledge, intelligence, thinking and learning. It defines knowledge as facts, truths or principles gained through memorization, experience and thinking. It compares intelligence to a car's motor, knowledge to its fuel, and thinking to a tune-up. There are different types of both intelligence and thinking; bringing in new ideas and thinking outside the box is called lateral thinking. Emotional intelligence is also important for social skills.
The document discusses various theories and approaches to defining and measuring intelligence. It describes intelligence as a broad concept that is difficult to define, with experts disagreeing on its structure and components. Several theories are outlined, including Sternberg's triarchic theory that identifies analytical, creative, and practical types of intelligence. Gardner's theory of multiple intelligences suggests there are eight identifiable forms. Other theories discussed include those proposed by Thurstone, Cattell, Spearman, and others. Common intelligence tests are also summarized, such as those developed by Binet, Wechsler, and Terman.
Different psychologists have proposed competing theories of intelligence over the years. These theories have proven useful in our understanding of the brain.
Cognitive Learning Theory focuses on thinking and mental processes as behaviors that lead to learning. It examines learning as an intricate process involving thoughts, ideas, realizations, and acquired information rather than just reactions. Approaches discussed in the document include Dual Coding Theory, Gagne's nine events of instruction, Gardner's Theory of Multiple Intelligences, and Bloom's Taxonomy of Learning. The document also provides examples of how teachers can apply concepts from Cognitive Learning Theory, such as Multiple Intelligences, in their classrooms to better understand students' strengths and customize instruction.
Guilford's structure of intellect model (Bonnie Crerar)
J.P. Guilford proposed the Structure of Intellect (SOI) model to describe 180 different types of intellectual abilities. The SOI model categorizes abilities into three dimensions: operations (6 types of thinking processes), content (5 types of information), and products (6 types of outcomes). Each combination of one operation, one content, and one product defines a specific intellectual ability. The model suggests intelligence involves distinct skills that can be improved through training. It also implies curriculum should incorporate different combinations of operations, content, and products to develop students' intellects based on their individual differences.
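The combinatorial structure described above can be sketched in a few lines of Python. The dimension labels used here follow the commonly cited later revision of the SOI model and are included for illustration; only the counts (6 operations x 5 contents x 6 products = 180 abilities) come from the text.

```python
# Sketch of how the SOI model's 180 abilities arise as the Cartesian
# product of its three dimensions. Labels are illustrative assumptions.
from itertools import product

operations = ["cognition", "memory recording", "memory retention",
              "divergent production", "convergent production", "evaluation"]
contents = ["visual", "auditory", "symbolic", "semantic", "behavioral"]
products = ["units", "classes", "relations", "systems",
            "transformations", "implications"]

# One ability per (operation, content, product) combination.
abilities = [f"{op} of {pr} in {co} content"
             for op, co, pr in product(operations, contents, products)]

print(len(abilities))  # 180
```

Changing any one dimension's list (Guilford's earlier versions had fewer categories) changes the total multiplicatively, which is why published counts for the model vary between 120, 150, and 180.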
1. Research on adult intelligence has shown that while some abilities decline with age, others are maintained or improve through life experiences and learning.
2. Two types of intelligence are identified: fluid intelligence involving short-term memory and abstract thinking tends to decline with age, while crystallized intelligence involving accumulated knowledge and vocabulary tends to increase with age.
3. Adults can maintain cognitive functioning through selective optimization, focusing on areas of expertise and compensating for losses, becoming more proficient in meaningful activities over time through experience.
A project to promote conceptual learning for all;
Dr. Amjad Ali Arain; University of Sindh; Faculty of Education; Pakistan
Major theories of intelligence
The document outlines 9 types of intelligence identified by Howard Gardner: naturalist, musical, logical-mathematical, existential, interpersonal, bodily-kinesthetic, linguistic, intrapersonal, and spatial intelligence. Each type is defined in terms of its core capacities. Examples are given of professions and individuals that demonstrate strengths in each area of intelligence as well as traits common to young adults with strengths in each type.
Multiple intelligences theory proposes that intelligence comprises at least 8 different intelligences rather than a single general intelligence measured by IQ tests. The 8 intelligences are verbal/linguistic, logical/mathematical, visual/spatial, bodily/kinesthetic, musical/rhythmic, interpersonal, intrapersonal, and naturalist. The theory challenges traditional views of intelligence as fixed and centered on language and logic abilities, instead suggesting intelligences can be strengthened and that individuals have unique intelligence profiles.
Knowledge has always been a prime source through which human societies have advanced materially and elevated themselves spiritually. Knowledge comprises many hundreds of fields and sub-fields, known as subjects, which are interlocking and interlinking.
This document discusses several theories of intelligence, including:
- Spearman's two-factor theory which proposed a general intelligence factor "g" and specific factors "s".
- Thurstone's multi-factor theory which identified seven primary mental abilities.
- Cattell and Horn's fluid and crystallized intelligence theory distinguishing between innate and learned capacities.
- Vernon's hierarchical theory proposing intelligence exists at different levels of generality from a general factor "g" to specific factors.
It also summarizes Piaget's stage theory of cognitive development and Bruner's emphasis on the social context of learning.
1. The document discusses theories of intelligence from phrenology to modern theories of multiple intelligences.
2. It describes Gall's phrenology theory that different parts of the brain correspond to different mental faculties and abilities. It then discusses early IQ testing by Binet and others.
3. The summary outlines two modern theories - Sternberg's triarchic theory of intelligence and Gardner's theory of multiple intelligences which identifies distinct intelligences like musical, bodily, logical-mathematical abilities.
The document summarizes Howard Gardner's theory of multiple intelligences, which identifies nine distinct types of intelligence: 1) Linguistic intelligence, 2) Logical-mathematical intelligence, 3) Musical intelligence, 4) Spatial intelligence, 5) Bodily-kinesthetic intelligence, 6) Interpersonal intelligence, 7) Intrapersonal intelligence, 8) Naturalistic intelligence, and 9) Existential intelligence. It provides brief descriptions of each type of intelligence and suggests ways to strengthen each one. The theory challenges the traditional view of intelligence as being solely based on IQ tests by recognizing different ways that humans can be smart.
The document discusses several concepts related to intelligence and problem solving:
1. Lateral thinking is an indirect and creative approach to problem solving that uses reasoning not reachable through traditional logic alone. The term was coined by Edward de Bono in 1967.
2. De Bono breaks down lateral thinking into technical definitions, including changing concepts/perceptions rather than just playing with existing ideas, and the need to escape local optima to reach global optima.
3. Parallel thinking involves cooperative thinking in the same direction rather than adversarial arguing, allowing contradictory ideas to be laid out and a solution designed from them.
4. Howard Gardner's theory of multiple intelligences originally included 7 types: linguistic, logical-mathematical, spatial, bodily-kinesthetic, musical, interpersonal, and intrapersonal.
This document discusses the nature and measurement of intelligence. It defines intelligence as the ability to adjust thinking to new problems and environments. Intelligence consists of specific abilities like adaptability, reasoning, and judgment. Intelligence is determined by both heredity and environment. It is measured using individual verbal tests like the Stanford-Binet test and individual performance tests like the Wechsler scales. Group tests can measure intelligence verbally or through performance. Intelligence quotient (IQ) scores classify intelligence levels based on mental age and chronological age.
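The mental-age-based IQ mentioned above is the classic ratio formula, IQ = (mental age / chronological age) x 100. A minimal sketch in Python, noting that modern tests such as the current Wechsler scales report deviation IQs rather than this ratio:

```python
# Classic ratio IQ: mental age relative to chronological age, scaled to 100.
# Modern instruments use deviation scores instead; this illustrates only
# the mental-age definition described in the text.
def ratio_iq(mental_age: float, chronological_age: float) -> int:
    """Return the ratio IQ, rounded to the nearest whole number."""
    return round(mental_age / chronological_age * 100)

print(ratio_iq(10, 8))  # child performing two years above age level -> 125
print(ratio_iq(6, 8))   # child performing two years below age level -> 75
print(ratio_iq(8, 8))   # performance matching age level -> 100
```

The formula makes the scale's midpoint explicit: a score of 100 means mental age equals chronological age, with scores above or below indicating performance ahead of or behind age peers.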
The document discusses several theories of intelligence proposed by psychologists. It describes Sternberg's triarchic theory which defines three types of intelligence: analytical, creative, and practical. It also outlines Vernon's hierarchical theory of intelligence and Guilford's model involving 150 factors of intelligence defined by operations, contents, and products. Spearman's two-factor theory of general intelligence "g" and specific abilities is explained. Criticisms of Spearman's theory include that it oversimplifies intelligence and fails to account for overlapping abilities between fields.
This document discusses different perspectives on human and artificial intelligence. It covers definitions of intelligence, theories of multiple intelligences, information processing models of intelligence, and limitations of current artificial intelligence. Key perspectives discussed include Spearman's g factor theory, Thurstone's 7 primary abilities, Gardner's theory of multiple intelligences, and Sternberg's triarchic theory of intelligence.
Howard Gardner: Multiple Intelligences presentation (Muhammad Saleem)
Howard Gardner proposed the theory of multiple intelligences which identifies eight distinct types of intelligence: logical-mathematical, linguistic, spatial, musical, bodily-kinesthetic, interpersonal, intrapersonal, and naturalist. Traditionally, intelligence was viewed as consisting of only two types - verbal and mathematical abilities. Gardner's theory posits that individuals possess combinations of these intelligences in varying strengths and that instruction can be tailored to suit different profiles. The document provides descriptions of each type of intelligence and examples. It also notes that intelligences rarely operate independently and typically complement each other when individuals develop skills.
1. The document discusses various sources of knowledge acquisition including sense perceptions, traditions, authority, and different research methods.
2. It explains key concepts like the nature of knowledge and educational research, comparing different types of research.
3. The role of educational research in national development is emphasized, highlighting how research can help address issues like high dropout rates and increasing relevance of education.
This document discusses the epistemological basis of knowledge and education. It begins by explaining that schools play an important role in transmitting knowledge to students and influencing their lives. It then discusses various topics related to the concept of knowledge, including different definitions of knowledge, the structure and forms of knowledge, and ways of acquiring knowledge such as through sense perception and reasoning. It explains the process of moving from perception to conception to develop conceptual knowledge. Finally, it discusses the meanings of related terms like information, wisdom, instruction, teaching, training and skills.
The document summarizes several theories of learning, including:
- Classical conditioning, which involves stimulus-response associations. Pioneered by Ivan Pavlov.
- Operant conditioning, where behavior is shaped by consequences. Introduced by B.F. Skinner.
- Social learning theory, developed by Albert Bandura, which emphasizes observational learning and modeling.
- Cognitive learning theories including assimilation theory and schema theory.
- Piaget's stage theory of child cognitive development.
- Discovery learning pioneered by Jerome Bruner which emphasizes learner discovery.
- Vygotsky's social development theory where social interaction precedes development.
- Situated learning theory, developed by Jean Lave, which argues learning is embedded in the activity, context, and culture in which it occurs.
The document outlines 12 principles of natural learning based on research from multiple disciplines including neuroscience, psychology, and education. The principles describe how the body, brain, and mind work together in the learning process. They indicate that learning is enhanced when it engages the physiology, is social and meaningful, involves pattern-finding and emotions, and considers individual differences. The principles provide a framework for educators to optimize learning by taking a holistic view of the learner.
The document outlines several theories of learning and development:
- Behaviorism focuses on external stimuli and conditioning, disregarding innate factors. Key theorists included Pavlov, Watson, and Skinner.
- Innate Theory, proposed by Chomsky, posits that humans are born with an innate capacity and "language acquisition device" for learning language.
- Cognitive Theory incorporates internal mental processes and sees learning as involving effort, aptitude and intelligence. Piaget, Gardner and Bloom contributed to this view.
- Social Development Theory, from Vygotsky, emphasizes social interaction and culture as shaping development through tools like language.
- Constructivism views learning as an active process where learners construct their own understandings by building on prior knowledge and experience.
The document outlines several theories of learning and development:
- Behaviorism focuses on external stimuli and conditioning, disregarding innate factors. Key theorists included Pavlov, Watson, and Skinner.
- Innate Theory, proposed by Chomsky, posits that humans are born with an innate capacity and "language acquisition device" for learning language.
- Cognitive Theory considers internal mental processes and sees learning as involving effort, aptitude and intelligence. Piaget, Gardner and Bloom contributed to this view.
- Social Development Theory, from Vygotsky, emphasizes social interaction and culture as shaping development through tools like language.
- Constructivism views learning as an active process where learners construct their own understandings by
This chapter discusses various theories of intelligence, including:
1. General intelligence as proposed by Charles Spearman, which refers to a general cognitive ability measured by performance across different cognitive tests.
2. Multiple intelligences theory by Howard Gardner, which proposes there are different types of intelligences like musical, bodily, interpersonal, and more.
3. Triarchic theory of intelligence by Robert Sternberg, which defines intelligence through analytical, creative, and practical abilities for adaptation.
4. Emotional intelligence conceptualized by Mayer, Salovey, and Goleman as skills in perceiving, understanding, and managing emotions.
The document discusses various definitions and theories of intelligence. It begins by defining intelligence as an umbrella term for mental abilities such as reasoning, problem solving, thinking abstractly, comprehending ideas, using language, and learning. It notes there is no single agreed upon definition. The document then examines several theories of intelligence, including Charles Spearman's theory of general intelligence, Louis Thurstone's theory of primary mental abilities, Howard Gardner's theory of multiple intelligences, Robert Sternberg's triarchic theory, and Daniel Goleman's theory of emotional intelligence. It also lists and describes nine proposed types of intelligence.
Intelligence Testing and Discrimination Assignment 3Julia Hanschell
This paper considers models of intelligence and how intelligence has been interpreted, tested, and perceived over the past century. It examines three psychological models of intelligence - those proposed by Gardner, Sternberg, and Cattell-Horn-Carroll - and how they relate to a scenario involving a client named Marjorie. The paper also discusses factors like bias and discrimination that are important in intelligence testing. It provides an overview of the historical development of intelligence testing and theories, from Spearman's two-factor theory to more modern approaches.
The term "cognitive psychology" was first used in 1967 by American psychologist Ulric Neisser in his book Cognitive Psychology. According to Neisser, cognition involves "all processes by which the sensory input is transformed, reduced, elaborated, stored, recovered, and used.
It is concerned with these processes even when they operate in the absence of relevant stimulation, as in images and hallucinations. Given such a sweeping definition, it is apparent that cognition is involved in everything a human being might possibly do; that every psychological phenomenon is a cognitive phenomenon."
The document discusses issues with cultural bias in IQ tests. It begins by using an analogy of giving different animals the same test of climbing a tree, which would clearly favor some over others. It then provides background on the origins of IQ tests in 1904 when Alfred Binet created the first intelligence test to help identify students struggling in France. The test was later expanded by Lewis Terman into the Stanford-Binet intelligence test used in the US. However, the document notes that IQ tests may be culturally biased as they often reflect the cultural experiences of their creators rather than being a fair assessment of intelligence across different cultures.
The document discusses various sources of knowledge and which source is most important. It outlines several ways knowledge can be acquired, including sensory perception, logical reasoning, deductive and inductive reasoning, authority, traditions, experience, naturalistic inquiry, trial and error, intuition, learning, and the scientific approach. Sensory perception and logical reasoning are described as two important sources. The document also defines research, explaining that it is a systematic inquiry using scientific methods. It outlines several key characteristics of research and different types of research including basic, applied, problem-oriented, problem-solving, qualitative, and quantitative research.
The document discusses various sources of knowledge and which source is most important. It outlines several ways knowledge can be acquired, including sensory perception, logical reasoning, deductive and inductive reasoning, authority, traditions, experience, naturalistic inquiry, trial and error, intuition, learning, and the scientific approach. Sensory perception and logical reasoning are described as two important sources. The document also defines research, explaining that it is a systematic inquiry using scientific methods. It outlines several key characteristics of research and different types of research including basic, applied, problem-oriented, problem-solving, qualitative, and quantitative research.
The Multi Store Model Of Memory And Research Into...Lindsey Campbell
The multi-store model of memory proposes that memory can be divided into three distinct parts: the sensory store, short-term store, and long-term store. According to this model, data is first encountered by the sensory store and then processed into the short-term store if given attention, and finally into the long-term store if rehearsed. Research such as Murdock's serial position effect study provides support for this model. The working memory model examines how information is temporarily stored and manipulated to perform tasks. Working memory allows for the immediate recall of information through rehearsal. Luck and Vogel's change detection experiment found that the capacity of short-term memory is around 3-4 items.
(1) Information processing theory analyzes how humans learn new information through a series of cognitive events that occur quickly in the mind similar to how computers process data. It claims the human mind functions like a computer by analyzing new information, testing it against existing knowledge, and storing it in memory.
(2) Behaviorism is a psychological theory that focuses on observable and measurable behaviors and excludes internal mental processes. It views organisms as responding to environmental stimuli and inner biological drives.
(3) Cognitivism emerged in response to behaviorism to study inner mental processes like thinking, memory, and problem solving. It views cognition as essential to understanding behavior rather than just a behavior itself. Cognitive psychologists study how people
The document discusses different sources of knowledge according to Greek philosophers' perspectives on education. It describes revealed knowledge as coming from supernatural revelation and being the basis for beliefs in God and qualia. Intuitive knowledge is described as based on subjective feelings and insights without reason. Authoritative knowledge comes from experts documented in works. Rationalists believe knowledge comes from reason and logic, while empiricists view it as derived from sensory experience. Socrates used dialectical questioning to arrive at truth, while his students Plato and Aristotle contributed theories of education based on class divisions and virtue.
The document discusses Howard Gardner's theory of multiple intelligences as an alternative to traditional views of intelligence that see it as a single general ability. It argues intelligence is better understood as a set of abilities that are expressed differently in various cultural contexts and domains. Three key points are made:
1. Traditional IQ tests do not capture the full range of human potential and ways of knowing. Intelligence is expressed differently in different cultural activities and fields.
2. Gardner proposes individuals have multiple intelligences rather than a single general intelligence. These include abilities like musical, bodily kinesthetic, interpersonal that are not captured by standard IQ tests.
3. To properly understand human cognition, we must look at the wide range of
Jerome Bruner was an influential American psychologist known for his work on education. He developed theories of cognitive development and learning that emphasized the active role of learners in constructing new ideas based on their existing knowledge. Bruner believed that instruction should be concerned with making students willing and able to learn, structuring information so it can be easily grasped, and facilitating students to go beyond the information given. He proposed that intellectual development progresses through enactive, iconic, and symbolic stages of representing knowledge. Bruner's work influenced constructivist learning theories and the concept of a "spiral curriculum."
Null Bangalore | Pentesters Approach to AWS IAMDivyanshu
#Abstract:
- Learn more about the real-world methods for auditing AWS IAM (Identity and Access Management) as a pentester. So let us proceed with a brief discussion of IAM as well as some typical misconfigurations and their potential exploits in order to reinforce the understanding of IAM security best practices.
- Gain actionable insights into AWS IAM policies and roles, using hands on approach.
#Prerequisites:
- Basic understanding of AWS services and architecture
- Familiarity with cloud security concepts
- Experience using the AWS Management Console or AWS CLI.
- For hands on lab create account on [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
# Scenario Covered:
- Basics of IAM in AWS
- Implementing IAM Policies with Least Privilege to Manage S3 Bucket
- Objective: Create an S3 bucket with least privilege IAM policy and validate access.
- Steps:
- Create S3 bucket.
- Attach least privilege policy to IAM user.
- Validate access.
- Exploiting IAM PassRole Misconfiguration
-Allows a user to pass a specific IAM role to an AWS service (ec2), typically used for service access delegation. Then exploit PassRole Misconfiguration granting unauthorized access to sensitive resources.
- Objective: Demonstrate how a PassRole misconfiguration can grant unauthorized access.
- Steps:
- Allow user to pass IAM role to EC2.
- Exploit misconfiguration for unauthorized access.
- Access sensitive resources.
- Exploiting IAM AssumeRole Misconfiguration with Overly Permissive Role
- An overly permissive IAM role configuration can lead to privilege escalation by creating a role with administrative privileges and allow a user to assume this role.
- Objective: Show how overly permissive IAM roles can lead to privilege escalation.
- Steps:
- Create role with administrative privileges.
- Allow user to assume the role.
- Perform administrative actions.
- Differentiation between PassRole vs AssumeRole
Try at [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
Build the Next Generation of Apps with the Einstein 1 Platform.
Rejoignez Philippe Ozil pour une session de workshops qui vous guidera à travers les détails de la plateforme Einstein 1, l'importance des données pour la création d'applications d'intelligence artificielle et les différents outils et technologies que Salesforce propose pour vous apporter tous les bénéfices de l'IA.
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...IJECEIAES
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to
precisely delineate tumor boundaries from magnetic resonance imaging (MRI)
scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating
the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The
model is rigorously trained and evaluated, exhibiting remarkable performance
metrics, including an impressive global accuracy of 99.286%, a high-class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted
IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of
our proposed model. These findings underscore the model’s competence in precise brain tumor localization, underscoring its potential to revolutionize medical
image analysis and enhance healthcare outcomes. This research paves the way
for future exploration and optimization of advanced CNN models in medical
imaging, emphasizing addressing false positives and resource efficiency.
AI for Legal Research with applications, toolsmahaffeycheryld
AI applications in legal research include rapid document analysis, case law review, and statute interpretation. AI-powered tools can sift through vast legal databases to find relevant precedents and citations, enhancing research accuracy and speed. They assist in legal writing by drafting and proofreading documents. Predictive analytics help foresee case outcomes based on historical data, aiding in strategic decision-making. AI also automates routine tasks like contract review and due diligence, freeing up lawyers to focus on complex legal issues. These applications make legal research more efficient, cost-effective, and accessible.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...IJECEIAES
Climate change's impact on the planet forced the United Nations and governments to promote green energies and electric transportation. The deployments of photovoltaic (PV) and electric vehicle (EV) systems gained stronger momentum due to their numerous advantages over fossil fuel types. The advantages go beyond sustainability to reach financial support and stability. The work in this paper introduces the hybrid system between PV and EV to support industrial and commercial plants. This paper covers the theoretical framework of the proposed hybrid system including the required equation to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram which sets the priorities and requirements of the system is presented. The proposed approach allows setup to advance their power stability, especially during power outages. The presented information supports researchers and plant owners to complete the necessary analysis while promoting the deployment of clean energy. The result of a case study that represents a dairy milk farmer supports the theoretical works and highlights its advanced benefits to existing plants. The short return on investment of the proposed approach supports the paper's novelty approach for the sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line which enhances the safety of the electrical network
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024Sinan KOZAK
Sinan from the Delivery Hero mobile infrastructure engineering team shares a deep dive into performance acceleration with Gradle build cache optimizations. Sinan shares their journey into solving complex build-cache problems that affect Gradle builds. By understanding the challenges and solutions found in our journey, we aim to demonstrate the possibilities for faster builds. The case study reveals how overlapping outputs and cache misconfigurations led to significant increases in build times, especially as the project scaled up with numerous modules using Paparazzi tests. The journey from diagnosing to defeating cache issues offers invaluable lessons on maintaining cache integrity without sacrificing functionality.
Software Engineering and Project Management - Software Testing + Agile Method...Prakhyath Rai
Software Testing: A Strategic Approach to Software Testing, Strategic Issues, Test Strategies for Conventional Software, Test Strategies for Object -Oriented Software, Validation Testing, System Testing, The Art of Debugging.
Agile Methodology: Before Agile – Waterfall, Agile Development.
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODELijaia
As digital technology becomes more deeply embedded in power systems, protecting the communication
networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3)
represents a multi-tiered application layer protocol extensively utilized in Supervisory Control and Data
Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control functionalities.
Robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation because
of the interconnection of these networks, which makes them vulnerable to a variety of cyberattacks. To
solve this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion
detection in smart grids. The proposed approach is a combination of the Convolutional Neural Network
(CNN) and the Long-Short-Term Memory algorithms (LSTM). We employed a recent intrusion detection
dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to
train and test our model. The results of our experiments show that our CNN-LSTM method is much better
at finding smart grid intrusions than other deep learning algorithms used for classification. In addition,
our proposed approach improves accuracy, precision, recall, and F1 score, achieving a high detection
accuracy rate of 99.50%.
Prediction of Electrical Energy Efficiency Using Information on Consumer's Ac...PriyankaKilaniya
Energy efficiency has been important since the latter part of the last century. The main object of this survey is to determine the energy efficiency knowledge among consumers. Two separate districts in Bangladesh are selected to conduct the survey on households and showrooms about the energy and seller also. The survey uses the data to find some regression equations from which it is easy to predict energy efficiency knowledge. The data is analyzed and calculated based on five important criteria. The initial target was to find some factors that help predict a person's energy efficiency knowledge. From the survey, it is found that the energy efficiency awareness among the people of our country is very low. Relationships between household energy use behaviors are estimated using a unique dataset of about 40 households and 20 showrooms in Bangladesh's Chapainawabganj and Bagerhat districts. Knowledge of energy consumption and energy efficiency technology options is found to be associated with household use of energy conservation practices. Household characteristics also influence household energy use behavior. Younger household cohorts are more likely to adopt energy-efficient technologies and energy conservation practices and place primary importance on energy saving for environmental reasons. Education also influences attitudes toward energy conservation in Bangladesh. Low-education households indicate they primarily save electricity for the environment while high-education households indicate they are motivated by environmental concerns.
Generative AI Use cases applications solutions and implementation.pdfmahaffeycheryld
Generative AI solutions encompass a range of capabilities from content creation to complex problem-solving across industries. Implementing generative AI involves identifying specific business needs, developing tailored AI models using techniques like GANs and VAEs, and integrating these models into existing workflows. Data quality and continuous model refinement are crucial for effective implementation. Businesses must also consider ethical implications and ensure transparency in AI decision-making. Generative AI's implementation aims to enhance efficiency, creativity, and innovation by leveraging autonomous generation and sophisticated learning algorithms to meet diverse business challenges.
https://www.leewayhertz.com/generative-ai-use-cases-and-applications/
ARTIFICIAL INTELLIGENCE
CONTENTS
1) Intelligence
a) Introduction
i) Knowledge
ii) Learning
iii) Understanding
2) Artificial Intelligence
a) Introduction
b) Major Branches of AI
i) Robotics
ii) Vision Systems
iii) Natural Language Processing
iv) Learning Systems
v) Neural Networks
vi) Expert Systems
3) History of AI
a) Great Achievements
i) RoboCup
ii) Deep Blue
iii) DARPA Grand Challenge
4) Today’s AI Applications
a) Driver-Less Trains
b) Burglary Alarm Systems
c) Automatic Grading System in Education
INTELLIGENCE
Introduction
Despite a long history of research and debate, there is still no standard
definition of intelligence. This has led some to believe that intelligence may be
approximately described, but cannot be fully defined. Some definitions of
intelligence from different sources are given below. As many dictionaries source
their definitions from other dictionaries, we have endeavored always to list the
original source.
1. “The ability to use memory, knowledge, experience, understanding,
reasoning, imagination and judgment in order to solve problems and adapt to
new situations.” AllWords Dictionary, 2006
2. “The capacity to acquire and apply knowledge.” The American Heritage
Dictionary, fourth edition, 2000
3. “Individuals differ from one another in their ability to understand complex
ideas, to adapt effectively to the environment, to learn from experience, to
engage in various forms of reasoning, to overcome obstacles by taking
thought.” American Psychological Association
4. “The ability to learn, understand and make judgments or have opinions that
are based on reason.” Cambridge Advanced Learner’s Dictionary, 2006
5. “Intelligence is a very general mental capability that, among other things,
involves the ability to reason, plan, solve problems, think abstractly,
comprehend complex ideas, learn quickly and learn from experience.”
Common statement with 52 expert signatories
6. “The ability to learn facts and skills and apply them, especially when this
ability is highly developed.” Encarta World English Dictionary, 2006
7. “Ability to adapt effectively to the environment, either by making a change
in oneself or by changing the environment or finding a new one. Intelligence
is not a single mental process, but rather a combination of many mental
processes directed toward effective adaptation to the environment.”
Encyclopedia Britannica, 2006
8. “The general mental ability involved in calculating, reasoning, perceiving
relationships and analogies, learning quickly, storing and retrieving
information, using language fluently, classifying, generalizing, and adjusting
to new situations.” Columbia Encyclopedia, sixth edition, 2006
9. “Capacity for learning, reasoning, understanding, and similar forms of
mental activity; aptitude in grasping truths, relationships, facts, meanings, etc.”
Random House Unabridged Dictionary, 2006
10. “The ability to learn, understand, and think about things.” Longman
Dictionary of Contemporary English, 2006
11. “(1) The ability to learn or understand or to deal with new or trying
situations; the skilled use of reason. (2) The ability to apply knowledge to
manipulate one’s environment or to think abstractly as measured by objective
criteria (as tests).” Merriam-Webster Online Dictionary, 2006
12. “The ability to acquire and apply knowledge and skills.” Compact Oxford
English Dictionary, 2006
13. “. . . the ability to adapt to the environment.” World Book Encyclopedia,
2006
14. “Intelligence is a property of mind that encompasses many related mental
abilities, such as the capacities to reason, plan, solve problems, think abstractly,
comprehend ideas and language, and learn.” Wikipedia, 4 October 2006
15. “Capacity of mind, especially to understand principles, truths, facts or
meanings, acquire knowledge, and apply it to practice; the ability to learn and
comprehend.” Wiktionary, 4 October 2006
16. “The ability to learn and understand or to deal with problems.” Word
Central Student Dictionary, 2006
17. “The ability to comprehend; to understand and profit from experience.”
WordNet 2.1, 2006
18. “The capacity to learn, reason, and understand.” Wordsmyth Dictionary,
2006
Intelligence has been defined in many different ways, often through the
related concepts of knowledge, learning, and understanding.
1. Knowledge: Knowledge is a familiarity, awareness or understanding of
someone or something, such as facts, information, descriptions, or skills, which
is acquired through experience or education by perceiving, discovering,
or learning. Knowledge can refer to a theoretical or practical understanding
of a subject. It can be implicit (as with practical skill or expertise) or explicit
(as with the theoretical understanding of a subject), and it can be more or less
formal or systematic. In philosophy, the study of knowledge is
called epistemology; the philosopher Plato famously defined knowledge as
"justified true belief". However, no single agreed-upon definition of knowledge
exists, though there are numerous theories to explain it.
Knowledge acquisition involves complex cognitive processes: perception,
communication, association and reasoning. Knowledge is also said to be
related to the capacity for acknowledgment in human beings.
The definition of knowledge is a matter of ongoing debate among philosophers
in the field of epistemology. The classical definition, described but not
ultimately endorsed by Plato, specifies that a statement must meet three
criteria in order to be considered knowledge: it must be justified, true, and
believed. Some claim that these conditions are not sufficient, as Gettier
case examples allegedly demonstrate. A number of alternatives have been
proposed, including Robert Nozick's argument for a requirement that
knowledge 'tracks the truth' and Simon Blackburn's additional requirement
that we do not want to say that those who meet any of these conditions
'through a defect, flaw, or failure' have knowledge. Richard Kirkham suggests
that our definition of knowledge requires that the evidence for the belief
necessitates its truth.
2. Understanding: Understanding is a psychological process related to an
abstract or physical object, such as a person, situation, or message, whereby
one is able to think about it and use concepts to deal adequately with that
object. Understanding is a relation between the knower and an object of
understanding. Understanding implies abilities and dispositions with respect to
an object of knowledge sufficient to support intelligent behavior. An
understanding is the limit of a conceptualization. To understand something is
to have conceptualized it to a given measure.
Examples
1. One understands the weather if one is able to predict and to give
an explanation of some of its features, etc.
2. A psychiatrist understands another person's anxieties if he/she knows
that person's anxieties, their causes, and can give useful advice on how
to cope with the anxiety.
3. A person understands a command if he/she knows who gave it, what is
expected by the issuer, whether the command is legitimate, and
whether one understands the speaker.
4. One understands a piece of reasoning, an argument, or a language if one
can consciously reproduce the information content conveyed by the
message.
5. One understands a mathematical concept if one can solve problems
using it, especially problems that are not similar to what one has seen
before.
3. Learning: Learning is acquiring new, or modifying and reinforcing,
existing knowledge, behaviors, skills, values, or preferences, and may involve
synthesizing different types of information. The ability to learn is possessed by
humans, animals and some machines. Progress over time tends to
follow learning curves. Learning is not compulsory; it is contextual. It does not
happen all at once, but builds upon and is shaped by what we already know.
To that end, learning may be viewed as a process, rather than a collection of
factual and procedural knowledge. Learning produces changes in the
organism and the changes produced are relatively permanent.
Human learning may occur as part of education, personal development,
schooling, or training. It may be goal-oriented and may be aided
by motivation. The study of how learning occurs is part
of neuropsychology, educational psychology, learning theory, and pedagogy.
Learning may occur as a result of habituation or classical conditioning, seen in
many animal species, or as a result of more complex activities such as play,
seen only in relatively intelligent animals. Learning may occur consciously or
without conscious awareness. Learning that an aversive event cannot be
avoided or escaped is called learned helplessness. There is evidence for human
behavioral learning prenatally, in which habituation has been observed as
early as 32 weeks into gestation, indicating that the central nervous system is
sufficiently developed and primed for learning and memory to occur very
early on in development.
Play has been approached by several theorists as the first form of learning.
Children experiment with the world, learn the rules, and learn to interact
through play. Lev Vygotsky argued that play is pivotal for children's
development, since children make meaning of their environment through play.
An estimated 85 percent of brain development occurs during the first five
years of a child's life. Conversation grounded in moral reasoning also offers
some insight into the responsibilities of parents in guiding this development.
ARTIFICIAL INTELLIGENCE
Introduction
The term "artificial intelligence" was coined by John McCarthy in 1955, and he
is known as the father of artificial intelligence. AI is both the intelligence of
machines and the branch of computer science which aims to create it, through
"the study and design of intelligent agents" or "rational agents", where an
intelligent agent is a system that perceives its environment and takes actions
which maximize its chances of success. Among the traits that researchers hope
machines will exhibit are reasoning, knowledge, planning, learning,
communication and the ability to move and manipulate objects. In the field of
artificial intelligence there is no consensus on how closely the brain should be
simulated.
Artificial intelligence (AI) is the intelligence exhibited by machines or
software, and the branch of computer science that develops machines and
software with human-like intelligence. John McCarthy, who coined the term in
1955, defines it as "the science and engineering of making intelligent
machines".
AI research is highly technical and specialized, and is deeply divided into
subfields that often fail to communicate with each other. Some of the division
is due to social and cultural factors: subfields have grown up around
particular institutions and the work of individual researchers. AI research is
also divided by several technical issues. Some subfields focus on the solution of
specific problems. Others focus on one of several possible approaches or on
the use of a particular tool or towards the accomplishment of
particular applications.
The central goals of AI research include reasoning, knowledge, planning,
learning, natural language processing (communication), perception and the
ability to move and manipulate objects. General intelligence (or "strong AI") is
still among the field's long-term goals. Currently popular approaches
include statistical methods, computational intelligence and traditional
symbolic AI. There are an enormous number of tools used in AI, including
versions of search and mathematical optimization, logic, methods based on
probability and economics, and many others.
The field was founded on the claim that a central property of humans,
intelligence—the sapience of Homo sapiens—can be sufficiently well described
to the extent that it can be simulated by a machine. This raises philosophical
issues about the nature of the mind and the ethics of creating artificial beings
endowed with human-like intelligence, issues which have been addressed by
myth, fiction and philosophy since antiquity. Artificial intelligence has been
the subject of tremendous optimism but has also suffered
stunning setbacks. Today it has become an essential part of the technology
industry and defines many challenging problems at the forefront of research
in computer science.
Major Branches
1.Robotics:Robotics is the branch of technology that deals with the design,
construction, operation, structural disposition, manufacture and application
of robots as well as computer systems for their control, sensory feedback, and
information processing. These technologies deal with automated machines that
can take the place of humans in dangerous environments or manufacturing
processes, or resemble humans in appearance, behavior, and/or cognition.
Many of today's robots are inspired by nature contributing to the field of bio-
inspired robotics.
The concept of creating machines that can operate autonomously dates back
to classical times, but research into the functionality and potential uses of
robots did not grow substantially until the 20th century. Throughout history,
robotics has often been seen to mimic human behavior and to manage
tasks in a similar fashion. Today, robotics is a rapidly growing field: as
technological advances continue, the research, design, and building of new
robots serve various practical purposes, whether domestically, commercially,
or militarily. Many robots do jobs that are hazardous to people such as
defusing bombs, mines and exploring shipwrecks.
The word robotics was derived from the word robot, which was introduced to
the public by Czech writer Karel Čapek in his play R.U.R. (Rossum's Universal
Robots), which was published in 1920. The word robot comes from the Slavic
word robota, which means labour. The play begins in a factory that makes
artificial people called robots, creatures who can be mistaken for humans –
similar to the modern ideas of androids. Karel Čapek himself did not coin the
word. He wrote a short letter in reference to an etymology in the Oxford
English Dictionary in which he named his brother Josef Čapek as its actual
originator.
History of Robotics
In 1927 the Maschinenmensch ("machine-human") gynoid humanoid
robot (also called "Parody", "Futura", "Robotrix", or the "Maria impersonator"),
the first robot ever depicted on film, was played by German
actress Brigitte Helm in Fritz Lang's film Metropolis.
In 1942 the science fiction writer Isaac Asimov formulated his Three Laws of
Robotics.
In 1948 Norbert Wiener formulated the principles of cybernetics, the basis of
practical robotics.
Fully autonomous robots only appeared in the second half of the 20th century.
The first digitally operated and programmable robot, the Unimate, was
installed in 1961 to lift hot pieces of metal from a die casting machine and
stack them. Commercial and industrial robots are widespread today and used
to perform jobs more cheaply, or more accurately and reliably, than humans.
They are also employed in jobs which are too dirty, dangerous, or dull to be
suitable for humans. Robots are widely used in manufacturing, assembly,
packing and packaging, transport, earth and space exploration, surgery,
weaponry, laboratory research, safety, and the mass production of consumer
and industrial goods.
2.Vision System:This is the branch of artificial intelligence concerned with
computer processing of images from the real world. Machine vision (MV) is the
technology and methods used to provide imaging-based automatic inspection
and analysis for such applications as automatic inspection, process control,
and robot guidance in industry. The scope of MV is broad. MV is related to,
though distinct from, computer vision. The primary uses for machine vision are
automatic inspection and industrial robot guidance. Common machine vision
applications include quality assurance, sorting, material handling, robot
guidance, and optical gauging.
Machine vision methods are defined as both the process of defining and
creating an MV solution, and as the technical process that occurs during the
operation of the solution. Here the latter is addressed. As of 2006, there was
little standardization in the interfacing and configurations used in MV. This
includes user interfaces, interfaces for the integration of multi-component
systems and automated data interchange. Nonetheless, the first step in the MV
sequence of operation is acquisition of an image, typically using cameras,
lenses, and lighting that has been designed to provide the differentiation
required by subsequent processing. MV software packages then employ
various digital image processing techniques to extract the required
information, and often make decisions (such as pass/fail) based on the
extracted information. A common output from machine vision systems is
pass/fail decisions. These decisions may in turn trigger mechanisms that reject
failed items or sound an alarm. Other common outputs include object position
and orientation information from robot guidance systems. Additionally, output
types include numerical measurement data, data read from codes and
characters, displays of the process or results, stored images, alarms from
automated space monitoring MV systems, and process control signals. As
recently as 2006, one industry consultant reported that MV represented a $1.5
billion market in North America. However, the editor-in-chief of an MV trade
magazine asserted that "machine vision is not an industry per se" but rather
"the integration of technologies and products that provide services or
applications that benefit true industries such as automotive or consumer goods
manufacturing, agriculture, and defense."
As of 2006, experts estimated that MV had been employed in less than 20% of
the applications for which it is potentially useful.
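As a minimal sketch of the sequence of operation described above (acquire an image, extract information, emit a pass/fail decision), the routine below inspects a synthetic grayscale image. The mean-brightness feature and its threshold are invented for illustration and stand in for real MV processing:

```python
# Toy machine-vision inspection step: acquire an image (here a synthetic
# grayscale grid), extract a feature (mean pixel brightness), and emit a
# pass/fail decision against an illustrative threshold.

def inspect(image, threshold=100):
    """Return 'pass' if the mean pixel value meets the threshold, else 'fail'."""
    pixels = [p for row in image for p in row]
    mean = sum(pixels) / len(pixels)
    return "pass" if mean >= threshold else "fail"

bright_part = [[120, 130], [140, 150]]  # simulated well-lit, defect-free part
dark_part = [[20, 30], [25, 35]]        # simulated under-exposed or defective part

print(inspect(bright_part))  # pass
print(inspect(dark_part))    # fail
```

In a real system, the pass/fail output would in turn trigger a reject mechanism or an alarm, as the text notes.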
3.Natural Language Processing:Natural language processing (NLP) is a field
of artificial intelligence concerned with the interactions
between computers and human (natural) languages. As such, NLP is related to
the area of human–computer interaction. Many challenges in NLP
involve natural language understanding, that is, enabling computers to derive
meaning from human or natural language input, and others involve natural
language generation. Modern NLP algorithms are based on machine learning,
especially statistical machine learning. The paradigm of machine learning is
different from that of most prior attempts at language processing. Prior
implementations of language-processing tasks typically involved the direct
hand coding of large sets of rules. The machine-learning paradigm calls
instead for using general learning algorithms — often, although not always,
grounded in statistical inference — to automatically learn such rules through
the analysis of large corpora of typical real-world examples. A corpus (plural
"corpora") is a set of documents (or sometimes, individual sentences) that have
been hand-annotated with the correct values to be learned.
Many different classes of machine learning algorithms have been applied to
NLP tasks. These algorithms take as input a large set of "features" that are
generated from the input data. Some of the earliest-used algorithms, such
as decision trees, produced systems of hard if-then rules similar to the systems
of hand-written rules that were then common. Increasingly, however,
research has focused on statistical models, which make
soft, probabilistic decisions based on attaching real-valued weights to each
input feature. Such models have the advantage that they can express the
relative certainty of many different possible answers rather than only one,
producing more reliable results when such a model is included as a
component of a larger system.
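To make the idea of soft, probabilistic decisions concrete, the toy scorer below attaches real-valued weights to two binary features of a sentence and combines them through a logistic function. The features, the weights, and the task (guessing whether a sentence is a question) are all invented for illustration; a real statistical model would learn its weights from a corpus:

```python
import math

def sentence_features(sentence):
    """Extract two illustrative binary features from a sentence."""
    words = sentence.lower().split()
    return {
        "has_question_word": any(w in ("who", "what", "why") for w in words),
        "ends_with_qmark": sentence.strip().endswith("?"),
    }

# Invented real-valued weights; a trained model would estimate these.
WEIGHTS = {"has_question_word": 1.5, "ends_with_qmark": 2.0}
BIAS = -1.0

def probability_question(sentence):
    """Combine weighted features into a soft decision in (0, 1)."""
    feats = sentence_features(sentence)
    score = BIAS + sum(w for f, w in WEIGHTS.items() if feats[f])
    return 1 / (1 + math.exp(-score))  # logistic function

print(round(probability_question("What time is it?"), 2))  # clearly above 0.5
print(round(probability_question("The sky is blue."), 2))  # clearly below 0.5
```

Unlike a hard if-then rule, the output expresses relative certainty, which is what makes such models useful as components of larger systems.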
Systems based on machine-learning algorithms have many advantages over
hand-produced rules:
The learning procedures used during machine learning automatically
focus on the most common cases, whereas when writing rules by hand it is
often not obvious at all where the effort should be directed.
Automatic learning procedures can make use of statistical
inference algorithms to produce models that are robust to unfamiliar input
(e.g. containing words or structures that have not been seen before) and to
erroneous input (e.g. with misspelled words or words accidentally omitted).
Generally, handling such input gracefully with hand-written rules — or
more generally, creating systems of hand-written rules that make soft
decisions — is extremely difficult, error-prone and time-consuming.
Systems based on automatically learning the rules can be made more
accurate simply by supplying more input data. However, systems based on
hand-written rules can only be made more accurate by increasing the
complexity of the rules, which is a much more difficult task. In particular,
there is a limit to the complexity of systems based on hand-crafted rules,
beyond which the systems become more and more unmanageable.
However, creating more data to input to machine-learning systems simply
requires a corresponding increase in the number of man-hours worked,
generally without significant increases in the complexity of the annotation
process.
4.Learning Systems:Machine learning, a branch of artificial intelligence,
concerns the construction and study of systems that can learn from data. For
example, a machine learning system could be trained on email messages to
learn to distinguish between spam and non-spam messages. After learning, it
can then be used to classify new email messages into spam and non-spam
folders.
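The spam example can be made concrete with a toy classifier. The tiny training set below is invented, and multinomial naive Bayes is just one of many learning algorithms that could be used:

```python
import math
from collections import Counter

# Sketch of the spam example: "train" on labelled messages, then classify a
# new one. A toy multinomial naive Bayes over invented training data.

def train(messages):
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in messages:
        for word in text.lower().split():
            counts[label][word] += 1
            totals[label] += 1
    return counts, totals

def classify(text, counts, totals, vocab_size=100):
    def log_score(label):
        s = 0.0
        for word in text.lower().split():
            # Laplace smoothing keeps unseen words from zeroing the score.
            s += math.log((counts[label][word] + 1) / (totals[label] + vocab_size))
        return s
    return "spam" if log_score("spam") > log_score("ham") else "ham"

training = [
    ("win money now", "spam"),
    ("cheap prize win", "spam"),
    ("meeting agenda attached", "ham"),
    ("lunch tomorrow with the team", "ham"),
]
counts, totals = train(training)
print(classify("win a cheap prize", counts, totals))       # spam
print(classify("agenda for the meeting", counts, totals))  # ham
```

The point generalizes: the system's behaviour comes from the training data, not from hand-written rules.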
The core of machine learning deals with representation and generalization.
Representation of data instances and functions evaluated on these instances
are part of all machine learning systems. Generalization is the property that
the system will perform well on unseen data instances; the conditions under
which this can be guaranteed are a key object of study in the subfield
of computational learning theory.
There are a wide variety of machine learning tasks and successful
applications. Optical character recognition, in which printed characters are
recognized automatically based on previous examples, is a classic example of
machine learning.
Machine learning and data mining are commonly confused, as they often
employ the same methods and overlap significantly. They can be roughly
defined as follows:
Machine learning focuses on prediction, based on known properties
learned from the training data.
Data mining focuses on the discovery of (previously) unknown properties
in the data. This is the analysis step of Knowledge Discovery in Databases.
The two areas overlap in many ways: data mining uses many machine
learning methods, but often with a slightly different goal in mind. On the other
hand, machine learning also employs data mining methods as "unsupervised
learning" or as a preprocessing step to improve learner accuracy. Much of the
confusion between these two research communities (which do often have
separate conferences and separate journals, ECML PKDD being a major
exception) comes from the basic assumptions they work with: in machine
learning, performance is usually evaluated with respect to the ability
to reproduce known knowledge, while in Knowledge Discovery and Data
Mining (KDD) the key task is the discovery of previously unknown knowledge.
Evaluated with respect to known knowledge, an uninformed (unsupervised)
method will easily be outperformed by supervised methods, while in a typical
KDD task, supervised methods cannot be used due to the unavailability of
training data.Some machine learning systems attempt to eliminate the need for
human intuition in data analysis, while others adopt a collaborative approach
between human and machine. Human intuition cannot, however, be entirely
eliminated, since the system's designer must specify how the data is to be
represented and what mechanisms will be used to search for a
characterization of the data.
5.Neural Networks: A neural network is, in essence, an attempt to simulate
the brain. Neural network theory revolves around the idea that certain key
properties of biological neurons can be extracted and applied to simulations,
thus creating a simulated (and very much simplified) brain. An artificial neural
network (ANN) learning algorithm, usually called "neural network" (NN), is a
learning algorithm that is inspired by the structure and functional aspects
of biological neural networks. Computations are structured in terms of an
interconnected group of artificial neurons, processing information using
a connectionist approach to computation. Modern neural networks are non-
linear statistical data modeling tools. They are usually used to model complex
relationships between inputs and outputs, to find patterns in data, or to
capture the statistical structure in an unknown joint probability
distribution between observed variables.
In computer science and related fields, artificial neural networks are
computational models inspired by animals' central nervous systems (in
particular the brain) that are capable of machine learning and pattern
recognition. They are usually presented as systems of interconnected "neurons"
that can compute values from inputs by feeding information through the
network.
For example, in a neural network for handwriting recognition, a set of input
neurons may be activated by the pixels of an input image representing a letter
or digit. The activations of these neurons are then passed on, weighted and
transformed by some function determined by the network's designer, to other
neurons, etc., until finally an output neuron is activated that determines which
character was read.
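A full handwriting network is too large to show here, but the same flow of weighted activations through hidden neurons to an output neuron can be sketched on a tiny network. The weights below are hand-picked rather than learned, chosen so that the net computes XOR (a function a single neuron famously cannot compute):

```python
# Toy feed-forward network: activations flow from input neurons through a
# hidden layer to an output neuron, each connection scaled by a weight.
# Hand-picked (not learned) weights make the network compute XOR.

def step(x):
    """Threshold activation: the neuron fires (1) when its input exceeds 0."""
    return 1 if x > 0 else 0

def xor_network(x1, x2):
    h1 = step(1.0 * x1 + 1.0 * x2 - 0.5)    # hidden neuron acting like OR
    h2 = step(1.0 * x1 + 1.0 * x2 - 1.5)    # hidden neuron acting like AND
    return step(1.0 * h1 - 1.0 * h2 - 0.5)  # output: OR but not AND

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", xor_network(a, b))
```

In a trained network the weights would be adjusted automatically from examples rather than set by the designer.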
Like other machine learning methods, neural networks have been used to solve
a wide variety of tasks that are hard to solve using ordinary rule-based
programming, including computer vision and speech recognition.
6.Expert Systems:In artificial intelligence, an expert system is a computer
system that emulates the decision-making ability of a human expert. Expert
systems are designed to solve complex problems by reasoning about
knowledge, represented primarily as IF-THEN rules rather than through
conventional procedural code. The first expert systems were created in the
1970s and then proliferated in the 1980s. Expert systems were among the first
truly successful forms of AI software.
An expert system is divided into two sub-systems: the inference engine and
the knowledge base. The knowledge base represents facts and rules. The
inference engine applies the rules to the known facts to deduce new facts.
Inference engines can also include explanation and debugging capabilities.
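As a minimal sketch of this split between knowledge base and inference engine, the snippet below forward-chains over invented facts and IF-THEN rules until no new fact can be deduced; the medical flavour is illustrative only:

```python
# Knowledge base: a set of known facts plus IF-THEN rules, each rule a pair
# (set of conditions, conclusion). Facts and rules are invented examples.
facts = {"fever", "stiff_neck"}
rules = [
    ({"fever", "stiff_neck"}, "suspect_meningitis"),
    ({"suspect_meningitis"}, "order_lumbar_puncture"),
]

def forward_chain(facts, rules):
    """Inference engine: apply rules to known facts until nothing new follows."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)  # deduce a new fact
                changed = True
    return facts

derived = forward_chain(facts, rules)
print(sorted(derived))
```

Note how the second rule fires only because the first one deduced its premise, which is the chaining behaviour that gives the engine its name.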
Expert systems were introduced by the Stanford Heuristic Programming
Project led by Edward Feigenbaum, who is sometimes referred to as the "father
of expert systems". The Stanford researchers tried to identify domains where
expertise was highly valued and complex, such as diagnosing infectious
diseases (Mycin) and identifying unknown organic molecules
(Dendral). Dendral was a tool to study hypothesis formation in the
identification of organic molecules. The general problem it solved—designing
a solution given a set of constraints—was one of the most successful areas for
early expert systems, applied to business domains such as salespeople
configuring DEC VAX computers and mortgage loan application development.
SMH.PAL is an expert system for the assessment of students with multiple
disabilities.
Mistral is an expert system for the monitoring of dam safety, developed in the
1990s by Ismes (Italy). It gets data from an automatic monitoring system and
performs a diagnosis of the state of the dam.
HISTORY OF AI
1950: Turing Test: In 1950 Alan Turing published a landmark paper in which
he speculated about the possibility of creating machines with true
intelligence. He noted that "intelligence" is difficult to define and devised his
famous Turing Test. If a machine could carry on a conversation (over
a teleprinter) that was indistinguishable from a conversation with a human
being, then the machine could be called "intelligent." This simplified version of
the problem allowed Turing to argue convincingly that a "thinking machine"
was at least plausible and the paper answered all the most common objections
to the proposition. The Turing Test was the first serious proposal in
the philosophy of artificial intelligence.
1956-1959: Golden Years:
The Dartmouth Conference of 1956 was organized by Marvin Minsky, John
McCarthy and two senior scientists: Claude Shannon and Nathaniel
Rochester of IBM. The proposal for the conference included this assertion:
"every aspect of learning or any other feature of intelligence can be so
precisely described that a machine can be made to simulate it". The
participants included Ray Solomonoff, Oliver Selfridge, Trenchard
More, Arthur Samuel, Allen Newell and Herbert A. Simon, all of whom would
create important programs during the first decades of AI research. At the
conference Newell and Simon debuted the "Logic Theorist" and McCarthy
persuaded the attendees to accept "Artificial Intelligence" as the name of the
field.[43]
The 1956 Dartmouth conference was the moment that AI gained its
name, its mission, its first success and its major players, and is widely
considered the birth of AI. In 1958, John McCarthy (at the Massachusetts
Institute of Technology, or MIT) invented the Lisp programming language. In
1959, John McCarthy and Marvin Minsky founded the MIT AI Lab.
1965:ELIZA: ELIZA is a computer program and an early example of
primitive natural language processing. ELIZA operated by processing users'
responses to scripts, the most famous of which was DOCTOR, a simulation of
a Rogerian psychotherapist. Using almost no information about human
thought or emotion, DOCTOR sometimes provided a startlingly human-like
interaction. ELIZA was written at MIT by Joseph Weizenbaum between 1964
and 1966.
When the "patient" exceeded the very small knowledge base, DOCTOR might
provide a generic response, for example, responding to "My head hurts" with
"Why do you say your head hurts?" A possible response to "My mother hates
me" would be "Who else in your family hates you?" ELIZA was implemented
using simple pattern matching techniques, but was taken seriously by several
of its users, even after Weizenbaum explained to them how it worked. It was
one of the first chatterbots in existence.
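ELIZA's simple pattern matching can be sketched with two rules mirroring the examples above. Real ELIZA scripts were much larger, and the regular expressions and one-entry pronoun table here are simplifications:

```python
import re

def reflect(fragment):
    # Trivial pronoun reflection ("me" -> "you"); real ELIZA used a fuller table.
    return re.sub(r"\bme\b", "you", fragment)

# Each rule pairs a pattern with a response template that echoes part of
# the user's input, in the style of the DOCTOR script.
RULES = [
    (re.compile(r"my (.+) hurts", re.I), "Why do you say your {0} hurts?"),
    (re.compile(r"my mother (.+)", re.I), "Who else in your family {0}?"),
]

def respond(sentence):
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # generic fallback when no pattern matches

print(respond("My head hurts"))      # Why do you say your head hurts?
print(respond("My mother hates me"))  # Who else in your family hates you?
```

The generic fallback corresponds to what DOCTOR did when the "patient" exceeded its small knowledge base.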
1972: PROLOG: Prolog is a general-purpose logic programming language
associated with artificial intelligence and computational linguistics.
Prolog has its roots in first-order logic, a formal logic, and unlike many
other programming languages, Prolog is declarative: the program logic is
expressed in terms of relations, represented as facts and rules. A computation
is initiated by running a query over these relations.
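Prolog itself would state this far more directly, but the facts-plus-rules-plus-query model can be imitated in Python with an invented family relation:

```python
# Facts: parent(X, Y) pairs, invented for illustration.
parent_facts = {("tom", "bob"), ("bob", "ann"), ("bob", "pat")}

def grandparents():
    """Rule: grandparent(X, Z) holds when parent(X, Y) and parent(Y, Z)."""
    return {(x, z) for (x, y1) in parent_facts
                   for (y2, z) in parent_facts if y1 == y2}

def query_grandparent(x, z):
    """Query the derived relation, like ?- grandparent(tom, ann). in Prolog."""
    return (x, z) in grandparents()

print(sorted(grandparents()))           # [('tom', 'ann'), ('tom', 'pat')]
print(query_grandparent("tom", "ann"))  # True
```

The key point is the declarative style: the program states what the relations are, and the computation consists of querying them.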
The language was first conceived by a group around Alain
Colmerauer in Marseille, France, in the early 1970s and the first Prolog system
was developed in 1972 by Colmerauer with Philippe Roussel.
Prolog was one of the first logic programming languages, and remains the
most popular among such languages today, with many free and commercial
implementations available. While initially aimed at natural language
processing, the language has since then stretched far into other areas
like theorem proving, expert systems, games, automated answering
systems, ontologies and sophisticated control systems. Modern Prolog
environments support creating graphical user interfaces, as well as
administrative and networked applications.
1974: MYCIN: MYCIN was an early expert system that used artificial
intelligence to identify bacteria causing severe infections, such as bacteremia
and meningitis, and to recommend antibiotics, with the dosage adjusted for
patient's body weight — the name derived from the antibiotics themselves, as
many antibiotics have the suffix "-mycin". The Mycin system was also used for
the diagnosis of blood clotting diseases.
MYCIN was developed over five or six years in the early 1970s at Stanford
University. It was written in Lisp as the doctoral dissertation of Edward
Shortliffe under the direction of Bruce Buchanan,Stanley N. Cohen and others.
It arose in the laboratory that had created the earlier Dendral expert system.
MYCIN was never actually used in practice but research indicated that it
proposed an acceptable therapy in about 69% of cases, which was better than
the performance of infectious disease experts who were judged using the same
criteria.
1988-93: AI Winter: In the history of artificial intelligence, an AI winter is a
period of reduced funding and interest in artificial intelligence research. The
term was coined by analogy to the idea of a nuclear winter. The field has
experienced several cycles of hype, followed by disappointment and criticism,
followed by funding cuts, followed by renewed interest years or decades later.
There were two major winters in 1974–80 and 1987–93 and several smaller
episodes, including:
1966: The failure of machine translation,
1970: The abandonment of connectionism,
1971–75: DARPA's frustration with the Speech Understanding
Research program at Carnegie Mellon University,
1973: The large decrease in AI research in the United Kingdom in response
to the Lighthill report,
1973–74: DARPA's cutbacks to academic AI research in general,
1987: The collapse of the Lisp machine market,
1988: The cancellation of new spending on AI by the Strategic Computing
Initiative,
1993: Expert systems slowly reaching the bottom,
1990s: The quiet disappearance of the fifth-generation computer project's
original goals,
The term first appeared in 1984 as the topic of a public debate at the annual
meeting of AAAI (then called the "American Association for Artificial
Intelligence"). It is a chain reaction that begins with pessimism in the AI
community, followed by pessimism in the press, followed by a severe cutback
in funding, followed by the end of serious research. At the meeting, Roger
Schank and Marvin Minsky—two leading AI researchers who had survived the
"winter" of the 1970s—warned the business community that enthusiasm for AI
had spiraled out of control in the '80s and that disappointment would
certainly follow. Three years later, the billion-dollar AI industry began to
collapse.
Hype cycles are common in many emerging technologies, such as the railway
mania or the dot-com bubble. An AI winter is primarily a collapse in
the perception of AI by government bureaucrats and venture capitalists.
Despite the rise and fall of AI's reputation, it has continued to develop new and
successful technologies. AI researcher Rodney Brooks would complain in 2002
that "there's this stupid myth out there that AI has failed, but AI is around you
every second of the day." Ray Kurzweil agrees: "Many observers still think that
the AI winter was the end of the story and that nothing since has come of the
AI field. Yet today many thousands of AI applications are deeply embedded in
the infrastructure of every industry." He adds: "the AI winter is long since
over."
Great Achievements
1.RoboCup:RoboCup is an international robotics competition founded in
1997. The aim is to promote robotics and AI research by offering a publicly
appealing but formidable challenge. The name RoboCup is a contraction of the
competition's full name, "Robot Soccer World Cup", but there are many other
stages of the competition, such as "RoboCupRescue", "RoboCup@Home" and
"RoboCup Junior". In the U.S., RoboCup is not very big, with the national
competition being held in New Jersey every year, but in other countries it is
very popular. In 2013 the world competition was in the Netherlands. In 2014
the world competition is in Brazil.
The official goal of the project:
"By the middle of the 21st century, a team of
fully autonomous humanoid robot soccer players shall win
a soccer game, complying with the official rules of FIFA, against the
winner of the most recent World Cup. "
2.Deep Blue:Deep Blue was a chess-playing computer developed by IBM. On
May 11, 1997, the machine, with human intervention between games, won
the second six-game match against world champion Garry Kasparov by two
wins to one with three draws. Kasparov accused IBM of cheating and
demanded a rematch. IBM refused and retired Deep Blue. Kasparov had beaten
a previous version of Deep Blue in 1996.
The project was started as ChipTest at Carnegie Mellon University by Feng-
hsiung Hsu, followed by its successor, Deep Thought. After their graduation
from Carnegie Mellon, Hsu, Thomas Anantharaman, and Murray
Campbell from the Deep Thought team were hired by IBM Research to
continue their quest to build a chess machine that could defeat the world
champion. Hsu and Campbell joined IBM in autumn 1989, with
Anantharaman following later. Anantharaman subsequently left IBM for Wall
Street and Arthur Joseph Hoane joined the team to perform programming
tasks. Jerry Brody, a long-time employee of IBM Research, was recruited for
the team in 1990. The team was managed first by Randy Moulic, followed by
Chung-Jen (C J) Tan.
After Deep Thought's 1989 match against Kasparov, IBM held a contest to
rename the chess machine and it became "Deep Blue", a play on IBM's
nickname, "Big Blue". After a scaled down version of Deep Blue, Deep Blue Jr.,
played Grandmaster Joel Benjamin, Hsu and Campbell decided that Benjamin
was the expert they were looking for to develop Deep Blue's opening book, and
Benjamin was signed by IBM Research to assist with the preparations for Deep
Blue's matches against Garry Kasparov.
In 1995 "Deep Blue prototype" (actually Deep Thought II, renamed for PR
reasons) played in the 8th World Computer Chess Championship. Deep Blue
prototype played the computer program Wchess to a draw while Wchess was
running on a personal computer. In round 5 Deep Blue prototype had
the white pieces and lost to the computer program Fritz 3 in 39 moves while
Fritz was running on an Intel Pentium 90 MHz personal computer. At the end
of the championship, Deep Blue prototype was tied for second place with the
computer program Junior while Junior was running on a personal computer.
3.DARPA Grand challenge:The DARPA Grand Challenge is a prize competition
for American autonomous vehicles, funded by the Defense Advanced Research
Projects Agency, the most prominent research organization of the United
States Department of Defense. Congress has authorized DARPA to award cash
prizes to further DARPA's mission to sponsor revolutionary, high-payoff
research that bridges the gap between fundamental discoveries and military
use. The initial DARPA Grand Challenge was created to spur the development
of technologies needed to create the first fully autonomous ground
vehicles capable of completing a substantial off-road course within a limited
time. The third event, the DARPA Urban Challenge extended the initial
Challenge to autonomous operation in a mock urban environment. The most
recent Challenge, the 2012 DARPA Robotics Challenge, focused on
autonomous emergency-maintenance robots.
Fully autonomous vehicles have been an international pursuit for many years,
from endeavors in Japan (starting in 1977), Germany (Ernst
Dickmanns and VaMP), Italy (the ARGO Project), the European Union
(EUREKA Prometheus Project), the United States of America, and other
countries.
The Grand Challenge was the first long distance competition for driverless
cars in the world; other research efforts in the field of Driverless cars take a
more traditional commercial or academic approach. The U.S. Congress
authorized DARPA to offer prize money ($1 million) for the first Grand
Challenge to facilitate robotic development, with the ultimate goal of making
one-third of ground military forces autonomous by 2015. Following the 2004
event, Dr. Tony Tether, the director of DARPA, announced that the prize
money had been increased to $2 million for the next event, which was claimed
on October 9, 2005. The first, second and third places in the 2007 Urban
Challenge received $2 million, $1 million, and $500,000, respectively.
The competition was open to teams and organizations from around the world,
as long as there was at least one U.S. citizen on the roster. Teams have
participated from high schools, universities, businesses and other
organizations. More than 100 teams registered in the first year, bringing a
wide variety of technological skills to the race. In the second year, 195 teams
from 36 U.S. states and 4 foreign countries entered the race.
TODAY’S AI APPLICATIONS
1.Driver-Less Trains and Metros: Driverless metro lines are currently
operational in various cities, such as London, Barcelona and Dubai.
Advantages of driverless metros:
Lower expenditure for staff (staff accounts for a significant part of the costs
of running a transport system). However, service and security personnel
are common in automated systems.
Trains can be shorter and instead run more frequently without
increasing expenditure for staff.
Service frequency can easily be adjusted to meet sudden unexpected
demands.
Despite common psychological concerns, driverless metros are safer
than traditional ones; none has ever had a serious accident.
Intruder detection systems can be more effective than humans in
stopping trains if someone is on the tracks.
Financial savings in both energy and wear-and-tear costs because trains
are driven to an optimum specification.
Train turnover time at terminals can be extremely short (train goes into
the holding track and returns immediately), reducing the number of
train sets needed for operation.
2.Burglary Alarm System: A burglar alarm is a system designed to detect
intrusion – unauthorized entry – into a building or area. Security alarms are used
in residential, commercial, industrial, and military properties for protection
against burglary (theft) or property damage, as well as personal protection
against intruders.Car alarms likewise protect vehicles and their
contents. Prisons also use security systems for control of inmates.
Some alarm systems serve the single purpose of burglary protection;
combination systems provide both fire and intrusion protection. Intrusion
alarm systems may also be combined with closed-circuit television (CCTV)
surveillance systems to automatically record the activities of intruders,
and may interface with access control systems for electrically locked
doors. Systems range from small, self-contained noisemakers to complicated,
multi-area systems with computer monitoring and control.
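The core logic of a multi-area system described above can be sketched as a small state machine: sensors report per-zone events, and any open zone while the panel is armed raises the alarm. This is a toy model for illustration, not any real alarm product's design; all names are invented.

```python
from dataclasses import dataclass, field

@dataclass
class AlarmPanel:
    """Minimal multi-zone intrusion panel.

    Each zone holds the state of one sensor (door contact, motion
    detector, etc.). If any sensor opens while the panel is armed,
    the alarm latches until it is explicitly reset.
    """
    zones: dict = field(default_factory=dict)  # zone name -> sensor open?
    armed: bool = False
    triggered: bool = False

    def add_zone(self, name: str) -> None:
        self.zones[name] = False

    def sensor_event(self, name: str, opened: bool) -> None:
        self.zones[name] = opened
        if self.armed and opened:
            # In a real system this would sound the siren, start CCTV
            # recording, and notify the monitoring station.
            self.triggered = True

panel = AlarmPanel()
panel.add_zone("front door")
panel.add_zone("garage")
panel.sensor_event("front door", True)   # disarmed: no alarm
panel.armed = True
panel.sensor_event("garage", True)       # armed: alarm latches
print(panel.triggered)  # True
```

Interfacing with CCTV or electric locks would simply mean adding more side effects at the point where the alarm latches.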
3. Automatic Essay Scoring in Education: Automated essay scoring (AES) is
the use of specialized computer programs to assign grades to essays written
in an educational setting. It is a method of educational assessment and an
application of natural language processing. Its objective is to classify a
large set of textual entities into a small number of discrete categories,
corresponding to the possible grades (for example, the numbers 1 to 6).
Therefore, it can be considered a problem of statistical classification.
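To make the "statistical classification" framing concrete, here is a deliberately tiny sketch: essays are reduced to surface features (word count and vocabulary richness, which real AES baselines do use), and a new essay receives the grade whose example essays have the nearest mean feature vector. The helper names, features, and toy data are all invented for illustration; production AES systems use far richer features and trained models.

```python
def essay_features(text: str) -> list:
    """Crude surface features: word count and type-token ratio."""
    words = text.lower().split()
    return [len(words), len(set(words)) / max(len(words), 1)]

def nearest_centroid_grade(essay: str, graded_examples: dict):
    """Assign the grade whose example essays' mean feature vector is
    closest (squared Euclidean distance) to the new essay's features."""
    x = essay_features(essay)
    centroids = {}
    for grade, texts in graded_examples.items():
        feats = [essay_features(t) for t in texts]
        centroids[grade] = [sum(col) / len(col) for col in zip(*feats)]
    return min(centroids,
               key=lambda g: sum((a - b) ** 2
                                 for a, b in zip(x, centroids[g])))

# Toy training set: two grade bands, distinguished mainly by length.
examples = {
    1: ["the cat sat", "dogs run fast"],
    6: ["automated essay scoring uses statistical methods to assign "
        "grades to student writing at scale",
        "the growth of online education has increased demand for "
        "reliable automatic assessment of written work"],
}
print(nearest_centroid_grade("good essay", examples))  # 1
```

The point is the shape of the problem, not the features: once essays become vectors and grades become class labels, any off-the-shelf classifier applies.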
Several factors have contributed to a growing interest in AES, among them
cost, accountability, standards, and technology. Rising education costs have
led to pressure to hold the educational system accountable for results by
imposing standards, and advances in information technology promise to
measure educational achievement at reduced cost.
The use of AES for high-stakes testing in education has generated significant
backlash, with opponents pointing to research that computers cannot yet
grade writing accurately and arguing that their use for such purposes
promotes teaching writing in reductive ways (i.e. teaching to the test).
Submitted to: NIIT Residency Road, Srinagar
Submitted by: Masood Ahmad Bhat
Student ID: S1400D9700032
Batch Code: B140045
Sig. Of HOC NIIT Sig. Of Concerned Faculty.
----------- -------------