Artificial Intelligence
Previous Year Solution: 2020
2(a) Explain with a diagram the organization of a natural language
understanding system.
Ans. Natural language understanding (NLU) is an artificial intelligence technology whose
main job is understanding spoken or written words and phrases.
The NLU system is the overarching system responsible for
understanding natural language input.
The levels of analysis, applied in sequence from left to right in the diagram, are as follows:
Morphological Analysis: Analyzes the internal structure of
words.
Lexical Analysis: Focuses on identifying and categorizing
individual words.
Syntactic Analysis: Analyzes the grammatical structure of
sentences.
Semantic Analysis: Understands the meaning of words, phrases,
and sentences.
Pragmatic Analysis: Interprets language in context, accounting
for intentions and conversational norms.
Discourse Analysis: Understands the relationships between
sentences and larger units of text.
Named Entity Recognition (NER): Identifies and classifies
named entities.
Coreference Resolution: Determines when expressions refer to
the same entity.
Sentiment Analysis: Identifies and classifies the sentiment
expressed in the text.
Contextual Understanding: Incorporates world knowledge and
domain-specific information to interpret language in a broader
context.
Each level of analysis builds upon the previous ones, allowing for a deeper
understanding of the input text. The specific techniques and models used at
each level vary with the application.
2(b). Describe all the levels of language understanding in a natural
language processing system.
Ans. The following are the common levels of language understanding in an
NLP system:
1. Morphological Analysis: This level involves analyzing the internal
structure of words to identify their root forms, prefixes, suffixes, and
other morphological properties. It helps in understanding the
grammatical features and variations of words.
2. Lexical Analysis: Lexical analysis focuses on identifying and
categorizing individual words (lexical units) in a sentence. It involves
techniques such as tokenization, stemming, lemmatization, and word
normalization to handle variations and reduce words to their base
forms.
3. Syntactic Analysis: Syntactic analysis, also known as parsing, deals
with the grammatical structure of sentences. It involves analyzing the
relationships between words and phrases, determining their roles
(subject, object, etc.), and building parse trees or dependency graphs
to represent the syntactic structure.
4. Semantic Analysis: Semantic analysis aims to understand the meaning
of words, phrases, and sentences. It involves mapping syntactic
structures to their corresponding semantic representations, such as
logical forms or semantic graphs. This level helps in capturing the
denotational meaning of the text.
5. Pragmatic Analysis: Pragmatic analysis focuses on interpreting
language in context and accounting for the speaker's intentions,
implicatures, and conversational norms. It involves understanding the
speaker's goals, the context of the discourse, and the illocutionary
acts (requests, promises, etc.) conveyed by the text.
6. Discourse Analysis: Discourse analysis involves understanding the
relationships between sentences and larger units of text, such as
paragraphs or documents. It aims to capture coherence and
cohesion, identifying discourse markers, anaphoric references, and
discourse structure to comprehend the overall meaning.
7. Named Entity Recognition (NER): NER identifies and classifies named
entities within the text, such as person names, locations,
organizations, dates, etc. It helps in extracting important information
and understanding the context.
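The lexical and morphological levels can be sketched in a few lines of Python. This is a toy illustration only: a regular-expression tokenizer plus a naive suffix-stripping stemmer (real systems use proper lemmatizers and morphological analyzers):

```python
import re

def tokenize(text):
    # Lexical analysis, step 1: split running text into word tokens.
    return re.findall(r"[a-zA-Z]+", text.lower())

def naive_stem(word):
    # Toy morphological analysis: strip a few common English suffixes.
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

tokens = [naive_stem(t) for t in tokenize("The parsers parsed the sentences")]
print(tokens)
```

Each later level (syntactic, semantic, pragmatic) would consume these normalized tokens as input.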
3(a) What are agents in AI? How do agents work to impart
intelligence to a system? Classify the different types of agents and
briefly discuss their properties.
Ans. An agent is anything that perceives its environment through sensors and acts upon
that environment through actuators. An agent runs in a cycle of perceiving, thinking, and
acting.
An agent can be:
1. A human agent.
2. A robotic agent.
3. A software agent.
Agents impart intelligence to a system by acting as autonomous entities that perceive
the environment through sensors, reason about what they perceive, and act through
actuators to achieve their goals.
The following are the types of agents in AI:
1. Simple Reflex Agent.
2. Model-Based Reflex Agent.
3. Goal-Based Agent.
4. Utility-Based Agent.
5. Learning Agent.
The properties of each type of AI agent are as follows:
a) Properties of Simple Reflex Agent:
(i) They have very limited intelligence.
(ii) They have no knowledge of the non-perceptual parts of the current state.
(iii) Their condition-action table is mostly too big to generate and to store.
b) Properties of Model-Based Reflex Agent:
(i) A model-based agent can work in a partially observable environment and track the situation.
(ii) A model-based agent has two important factors: the model and the internal state.
c) Properties of Goal-Based Agent:
(i) The agent needs to know its goal, which describes desirable situations.
(ii) They choose their actions so that they can achieve the goal.
d) Properties of Utility-Based Agent:
(i) Utility-based agents act based not only on goals but also on the best way to achieve the goal.
e) Properties of Learning Agent: A learning agent has mainly four
conceptual components, which are:
1. Learning element.
2. Critic.
3. Performance element.
4. Problem generator.
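A simple reflex agent can be sketched as a function from the current percept straight to an action, with no internal state. The vacuum-world percepts and actions below are assumed for illustration:

```python
# A minimal simple-reflex agent: the chosen action depends only on the
# current percept (location, dirty?), never on percept history.

def reflex_vacuum_agent(percept):
    location, dirty = percept
    if dirty:
        return "Suck"          # condition-action rule: dirty -> clean it
    return "Right" if location == "A" else "Left"  # otherwise move on

print(reflex_vacuum_agent(("A", True)))   # Suck
print(reflex_vacuum_agent(("A", False)))  # Right
```

This illustrates the limitation noted above: with no internal state, the agent cannot remember which squares it has already cleaned.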
3(b) Draw the semantic network of the following sentence:
Kavita gives a book to her friend.
Ans: The given sentence is "Kavita gives a book to her friend".
Let agent = Kavita ,
Action = gives
Object = a book
Receiver = her friend
The semantic network of the given sentence is as follows:
[Diagram: a central Event node connected by labeled arcs:
Event -action-> Gives
Event -agent-> Kavita
Event -object-> A Book
Event -receiver-> Friend]
4(a) What do you mean by learning? Explain briefly the
learning methods. Discuss the advantages and
disadvantages of rule-based systems.
Ans: Learning: Learning is the branch of artificial intelligence and
computer science that focuses on using data and algorithms to imitate the
way that human beings learn, gradually improving accuracy.
The following are the different methods of learning in AI:
1. Supervised Learning
2. Unsupervised Learning
3. Semi Supervised Learning
Supervised Learning :- Supervised learning, also known as supervised
machine learning, is a subcategory of machine learning and artificial
intelligence.
It is defined by its use of labeled datasets to train algorithms to classify
data or predict outcomes accurately.
Supervised learning helps organizations solve for a variety of real-world
problems at scale, such as classifying spam in a separate folder from your
inbox.
[Diagram: supervised learning - the environment provides input to the learning
agent (LA); a supervisor supplies the desired output, which is compared with the
actual output to produce an error signal used for learning.]
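A minimal supervised-learning sketch: a 1-nearest-neighbour classifier on a small, made-up labeled dataset (the labels play the role of the supervisor's desired output):

```python
# Labeled training data: feature vector -> class label ("supervision").
# The feature values and labels here are invented for illustration.
train = [((1.0, 1.0), "cat"), ((1.2, 0.9), "cat"),
         ((5.0, 5.1), "dog"), ((4.8, 5.3), "dog")]

def predict(x):
    # 1-nearest-neighbour: return the label of the closest training example.
    def sq_dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(train, key=lambda pair: sq_dist(pair[0], x))[1]

print(predict((1.1, 1.0)))  # cat
```

The same pattern (learn from labeled examples, then predict on new inputs) underlies spam filtering and the other applications mentioned above.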
Unsupervised Learning: Unsupervised learning, also known as unsupervised
machine learning, uses machine learning algorithms to analyze and cluster
unlabeled datasets.
These algorithms discover hidden patterns or data groupings without the need
for human intervention.
Its ability to discover similarities and differences in information makes it the
ideal solution for exploratory data analysis, cross-selling strategies, customer
segmentation, and image recognition.
Semi-Supervised Learning: Semi-supervised machine
learning is a combination of supervised and unsupervised learning. It uses
a small amount of labeled data and a large amount of unlabeled data,
which provides the benefits of both unsupervised and supervised learning
while avoiding the challenge of finding a large amount of labeled data.
[Diagram: the environment provides data to the learning agent (L.A).]
Advantages of Rule-Based Systems in AI
1. Cost Efficient
2. Stable Output
3. Low Error Rate
4. Less Risk
5. Instant Output
Disadvantages of Rule-Based Systems in AI
1. Manual Risk
2. Time Consuming Inputs
3. Low Self Learning Capacity
4. Difficult Pattern Identification
4.(b) Explain the human preferences in encoding uncertainty
during parsing.
Ans: In natural language understanding, parsing refers to the process of
analyzing the grammatical structure of sentences. When it comes to
encoding uncertainty during parsing, human preferences can vary based on
several factors. Here are a few considerations regarding human preferences
in encoding uncertainty:
1. Ambiguity and Disambiguation: Natural language is often
ambiguous, and different interpretations can arise during parsing.
Human preferences tend to lean towards disambiguating the
sentence by selecting the most likely interpretation based on
contextual cues, prior knowledge, or statistical probabilities.
Disambiguation helps reduce uncertainty and improves the overall
understanding of the sentence.
2. Probabilistic Modeling: Humans tend to acknowledge and
incorporate probabilistic models when encoding uncertainty during
parsing. Rather than making absolute determinations, probabilities
are assigned to different parse options, reflecting the likelihood of
each interpretation. This probabilistic approach helps represent
uncertainty and allows for more nuanced understanding.
3. Hedge Expressions: In language, hedge expressions are used to
indicate uncertainty or lack of precision. Humans often employ hedge
expressions, such as "possibly," "likely," "probably," or "might," to
indicate the presence of uncertainty during parsing. These
expressions help convey that the parsed interpretation is not
definitive and leaves room for alternative possibilities.
4. Explicit Markers: Humans sometimes use explicit markers to indicate
uncertainty during parsing. For example, phrases like "I'm not sure,
but..." or "It could be that..." signal that the following interpretation is
uncertain or speculative. These markers help the listener or reader
understand that the parsed structure should be considered with
caution due to uncertainty.
5.(a) Explain the procedure of knowledge acquisition with the
help of a diagram.
Ans: Knowledge acquisition refers to the process of acquiring new
knowledge or information. It involves gathering, interpreting, and
organizing information from various sources. Here is a step-by-step
explanation of the procedure:
1. Identify the Knowledge Need: Determine what specific knowledge or
information you are seeking to acquire. Clearly define your goals and
objectives.
2. Select Information Sources: Identify and select the appropriate
sources from which you can acquire the desired knowledge. This may
include books, research papers, websites, experts in the field,
databases, or other relevant resources.
3. Collect Information: Gather information from the selected sources.
Read books, articles, and other written material; conduct online
research; interview experts; attend seminars or conferences; or use
other methods to gather relevant information.
4. Evaluate and Verify Information: Assess the credibility, reliability, and
accuracy of the information collected. Cross-reference multiple
sources to verify the consistency and validity of the information.
5. Organize and Analyze Information: Systematically organize the
acquired information. Categorize, classify, and structure the
knowledge in a way that is logical and easy to understand. Analyze
the information to identify patterns, relationships, or insights.
6. Synthesize and Integrate Knowledge: Combine the acquired
knowledge with your existing knowledge and understanding.
Integrate new information into your existing framework of knowledge
to form a more comprehensive understanding.
5(b) What is First-Order Predicate Logic (FOPL)? Represent the
following facts in FOPL: "Anyone passing his AI paper and getting an
opportunity to work on a live project is happy. But anyone who
studies sincerely or is lucky can pass all his exams. Ramu did not
study but he is lucky. Anyone who is lucky gets a live project."
Ans: First-Order Predicate Logic (FOPL), also known as First-Order Logic
(FOL) or First-Order Predicate Calculus, is a formal system used for representing
and reasoning about statements in logic. FOPL includes predicates, variables,
quantifiers, and logical connectives to express relationships and properties.
Let's represent the given facts in FOPL:
Predicates:
Passing(x): person x has passed his AI paper.
Working(x): person x is working on a live project.
HasOpportunity(x): person x has an opportunity to work on a live project.
Happy(x): person x is happy.
Studying(x): person x studies sincerely.
Lucky(x): person x is lucky.
Exam(y): y is an exam.
Constant:
Ramu: the person named Ramu.
Facts:
1. "Anyone passing his AI paper and getting an opportunity to work on a live
project is happy." ∀x (Passing(x) ∧ Working(x) → Happy(x))
2. "But anyone who studies sincerely or is lucky can pass all his exams." ∀x
((Studying(x) ∨ Lucky(x)) → ∀y (Exam(y) → Passing(x)))
3. "Ramu did not study but he is lucky." ¬Studying(Ramu) ∧ Lucky(Ramu)
4. "Anyone who is lucky gets a live project." ∀x (Lucky(x) →
HasOpportunity(x))
These are the statements represented in FOPL based on the given facts.
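As a sanity check, these facts can be propositionalised for the single constant Ramu and run through a tiny forward-chaining loop, which derives Happy(Ramu). This is an illustrative sketch, not a general FOPL theorem prover:

```python
# Known ground facts: Ramu did not study, but he is lucky.
facts = {"Lucky(Ramu)"}

# The universally quantified facts, instantiated for Ramu:
# each rule is (set of premises, conclusion).
rules = [
    ({"Lucky(Ramu)"}, "HasOpportunity(Ramu)"),                    # fact 4
    ({"Lucky(Ramu)"}, "Passing(Ramu)"),                           # fact 2
    ({"Passing(Ramu)", "HasOpportunity(Ramu)"}, "Happy(Ramu)"),   # fact 1
]

# Forward chaining: fire rules until no new fact can be derived.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print("Happy(Ramu)" in facts)  # True
```

The derivation mirrors resolution by hand: Lucky(Ramu) yields Passing(Ramu) and HasOpportunity(Ramu), which together yield Happy(Ramu).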
6. Write short notes on the following:
(a) Knowledge
(b) Intelligence
(c) Inheritance
(d) Knowledge management
6(a) Ans: Knowledge: In the context of artificial intelligence (AI), knowledge
refers to information, facts, rules, and relationships that an AI system acquires or
possesses. It is the understanding or awareness that an AI system has about the
world or a specific domain.
Knowledge in AI can be classified into two main types:
1. Explicit Knowledge: This is knowledge that is explicitly represented in a
structured or symbolic form, making it easily understandable and
processable by AI systems.
2. Tacit Knowledge: Tacit knowledge refers to the expertise, intuition, or skills
that are not explicitly expressed or easily verbalized. It is typically acquired
through experience, practice, or interaction with the environment.
Overall, knowledge in AI plays a crucial role in enabling intelligent decision-
making, problem-solving, and reasoning capabilities within AI systems. It
serves as the foundation for understanding and interpreting data, generating
insights, and making informed predictions or recommendations.
6(b) Ans: Intelligence: In the context of artificial intelligence (AI), intelligence
refers to the ability of an AI system to exhibit human-like cognitive abilities,
such as learning, reasoning, problem-solving, perception, language
understanding, and decision-making. AI aims to create systems that can perform
tasks that would typically require human intelligence.
Intelligence in AI can be broadly classified into two main categories:
1. Narrow or Weak AI
2. General AI
6(c) Ans: Inheritance: Inheritance knowledge, also known as
hierarchical or taxonomic knowledge, is a concept in artificial intelligence (AI) that
involves organizing and representing knowledge in a hierarchical manner. It is based
on the idea that objects or concepts can inherit properties, attributes, and
relationships from more general or abstract categories.
Inheritance knowledge allows AI systems to model relationships between concepts or
objects and leverage the shared characteristics and behaviors within a hierarchy. This
approach is particularly useful when dealing with large and complex domains where
there are commonalities and dependencies among different entities.
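A minimal sketch of inheritance knowledge using Python classes, with assumed example properties: the subclass inherits attributes from its more general superclass unless it overrides them.

```python
# General category: defaults shared by everything beneath it in the hierarchy.
class Bird:
    can_fly = True
    has_feathers = True

# Specific category: inherits has_feathers, overrides can_fly.
class Penguin(Bird):
    can_fly = False

print(Penguin.has_feathers, Penguin.can_fly)  # True False
```

This is exactly the AI notion of inheritance with exceptions: Penguin gets has_feathers "for free" from Bird, while the overridden can_fly captures an exception to the general rule.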
6(d) Ans: Knowledge management: Knowledge management in AI refers to the
practices and techniques used to effectively acquire, organize, store, access, and
utilize knowledge within AI systems. It involves the systematic management of
knowledge resources to enable better decision-making, problem-solving, and overall
performance of AI applications.
7. Describe the following with suitable examples:
(a) Logistic Regression
(b) Back propagation algorithm
Ans: Logistic Regression: Logistic regression is one of the most
popular machine learning algorithms; it comes under the supervised
learning technique. It is used for predicting a categorical dependent variable
using a given set of independent variables.
Logistic regression predicts the output of a categorical dependent variable.
Therefore the outcome must be a categorical or discrete value. It can be
Yes or No, 0 or 1, True or False, etc.; but instead of giving the exact values 0
and 1, it gives probabilistic values which lie between 0 and 1.
Example: Gaming is one of the best examples of logistic regression in practice.
Speed is one of the advantages of logistic regression, and it is extremely useful in
the gaming industry. Speed is very important in a game. Very popular today are the
games where you can use in-game purchases to improve the gaming qualities of
your character, or for fancy appearance and communication with other players. In-
game purchases are a good place to introduce a recommendation system.
Tencent is the world's largest gaming company. It uses such systems to suggest
equipment which gamers would like to buy. Its algorithm analyzes a very
large amount of data about user behavior and gives suggestions about equipment a
particular user may want to acquire on the run. This algorithm is logistic
regression.
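A minimal logistic-regression sketch trained by gradient descent on a made-up one-dimensional dataset (hours played vs. whether an in-game purchase was made; the numbers are invented for illustration):

```python
import math

# Toy data: hours played -> made an in-game purchase (1) or not (0).
xs = [0.5, 1.0, 1.5, 3.0, 3.5, 4.0]
ys = [0,   0,   0,   1,   1,   1]

def sigmoid(z):
    # Squashes any real number into a probability between 0 and 1.
    return 1.0 / (1.0 + math.exp(-z))

w, b, lr = 0.0, 0.0, 0.5
for _ in range(2000):
    for x, y in zip(xs, ys):
        p = sigmoid(w * x + b)   # predicted purchase probability
        w -= lr * (p - y) * x    # gradient step on the log-loss
        b -= lr * (p - y)

# Low probability for a light player, high for a heavy player.
print(round(sigmoid(w * 0.5 + b), 2), round(sigmoid(w * 4.0 + b), 2))
```

The model outputs probabilities rather than hard 0/1 labels, which is exactly the property the definition above emphasises.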
Ans: Backpropagation Algorithm: Backpropagation is a key algorithm
used in training neural networks. It allows the network to learn from
examples and adjust its weights to minimize the difference between
predicted and target outputs. Here's an overview of the backpropagation
algorithm along with an example:
1. Forward Pass:
Start by initializing the weights and biases of the neural
network randomly.
Feed the input example through the network and compute the
output.
Apply the activation function to the outputs of each neuron in
the network.
2. Calculate Error:
Compare the network's output with the target output to
calculate the error.
This is typically done using a loss function such as mean
squared error or cross-entropy loss.
3. Backward Pass:
Start from the output layer and propagate the error gradients
backward through the network.
Compute the gradient of the error with respect to the weights
and biases of each neuron.
Update the weights and biases using gradient descent or a
variant of it, adjusting them to reduce the error.
4. Repeat:
Repeat steps 1-3 for all training examples in the dataset.
Continue iterating through the dataset multiple times (epochs)
to further refine the network's weights.
5. Evaluation:
After training, use the trained network to make predictions on
new, unseen examples.
Evaluate the network's performance using appropriate metrics
such as accuracy or precision/recall.
Example:
Let's consider a simple example of training a neural network to perform
binary classification on a dataset of flower images. Assume the network has
one input layer, one hidden layer with two neurons, and one output layer.
1. Forward Pass:
Input example: An image of a flower with features (petal length,
petal width).
Weights and biases: Randomly initialized values for the
connections between neurons.
Compute the output of the network using the weighted sum
and activation function.
2. Calculate Error:
Compare the network's output (predicted class probabilities)
with the target output (actual class label).
Calculate the error using a suitable loss function like cross-
entropy.
3. Backward Pass:
Starting from the output layer, calculate the gradient of the
error with respect to the weights and biases.
Propagate the gradients backward to the hidden layer and
calculate their gradients.
Update the weights and biases using the gradients and a
learning rate to adjust them in the direction that reduces the
error.
4. Repeat:
Repeat steps 1-3 for all flower images in the training dataset.
Iterate through the dataset multiple times (epochs) to refine
the network's weights.
5. Evaluation:
Once training is complete, use the trained network to predict
the class label (e.g., whether a flower is iris or not) for new,
unseen flower images.
Evaluate the network's performance using metrics like accuracy,
precision, or recall.
Backpropagation allows the neural network to iteratively adjust its weights
based on the error calculated during training. By propagating the error
gradients backward through the network, the algorithm enables the
network to learn and improve its predictions over time.
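The steps above can be sketched as a tiny network trained by backpropagation. This toy implementation uses the XOR task instead of flower images so that it stays self-contained; the recipe (forward pass, error, backward pass, repeat) is the same:

```python
import math
import random

random.seed(1)

def sig(z):
    return 1.0 / (1.0 + math.exp(-z))

# XOR: not linearly separable, so a hidden layer is required.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

# 2 inputs -> 2 hidden sigmoid units -> 1 sigmoid output, random init.
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b1 = [0.0, 0.0]
w2 = [random.uniform(-1, 1) for _ in range(2)]
b2 = 0.0
lr = 0.5

def forward(x):
    h = [sig(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(2)]
    return h, sig(w2[0] * h[0] + w2[1] * h[1] + b2)

def mse():
    return sum((forward(x)[1] - y) ** 2 for x, y in data) / len(data)

before = mse()
for _ in range(5000):                        # step 4: repeat over epochs
    for x, y in data:
        h, out = forward(x)                  # step 1: forward pass
        d_out = (out - y) * out * (1 - out)  # steps 2-3: output gradient
        for j in range(2):                   # step 3: propagate backward
            d_h = d_out * w2[j] * h[j] * (1 - h[j])
            w2[j] -= lr * d_out * h[j]
            b1[j] -= lr * d_h
            for i in range(2):
                w1[j][i] -= lr * d_h * x[i]
        b2 -= lr * d_out
after = mse()

print(after < before)  # training reduces the loss
```

Swapping in real petal-length/petal-width features and labels would turn this into the flower classifier described above.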
9(b) What is the goal of a support vector machine (SVM)? How do you
compute the margin?
Ans:- The goal of Support Vector Machines (SVM) is to find the optimal
hyperplane that maximally separates different classes in a binary
classification problem. The margin refers to the distance between the
hyperplane and the nearest data points of each class. The larger the margin,
the more robust and better the generalization of the SVM model. The goal
of SVM is to find the hyperplane that maximizes this margin.
Here's a high-level overview of how to compute the margin in SVM:
1. Data Preparation:
Start with a labeled training dataset consisting of feature
vectors and corresponding class labels.
Ensure that the classes are linearly separable or nearly
separable with a margin.
2. Hyperplane Representation:
Represent the hyperplane using the equation: w^T * x + b = 0,
where w is the normal vector to the hyperplane and b is the
bias term.
3. Margin Calculation:
Identify the support vectors, which are the data points that lie
closest to the hyperplane.
Compute the distance between the hyperplane and the support
vectors to determine the margin. Under the canonical scaling
y_i * (w^T * x_i + b) >= 1, the support vectors satisfy |w^T * x + b| = 1,
and the margin equals 2 / ||w||; maximizing the margin is therefore
equivalent to minimizing ||w||.
The margin in SVM represents the separation between classes and plays a
crucial role in determining the generalization ability of the SVM model. By
maximizing the margin, SVM aims to find the best decision boundary that
can effectively classify new, unseen data points. The use of support vectors,
which are the closest points to the decision boundary, allows SVM to focus
on the most critical data points in determining the optimal hyperplane.
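A short sketch of the margin computation, using assumed values for w and b: for a hyperplane in canonical form (support vectors satisfy |w^T x + b| = 1), the margin is 2 / ||w||, and any point's distance from the hyperplane is |w^T x + b| / ||w||.

```python
import math

# Hypothetical trained hyperplane w^T x + b = 0 (values assumed for
# illustration, not from a real SVM fit).
w = [3.0, 4.0]
b = -1.0

norm_w = math.sqrt(sum(wi * wi for wi in w))  # ||w|| = 5
margin = 2.0 / norm_w                         # canonical-form margin
print(margin)  # 0.4

# Distance of an arbitrary point x from the hyperplane: |w^T x + b| / ||w||.
x = [1.0, 0.5]
dist = abs(sum(wi * xi for wi, xi in zip(w, x)) + b) / norm_w
print(round(dist, 2))  # 0.8
```

In a real SVM library the weight vector and bias would come from the trained model; the margin formula itself is unchanged.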
Solved By Deepak Kumar(20cse09)