Class: TY B. Tech    Semester: VI
Course Name: Artificial Intelligence and Neural Network (CMCR0604)
Lab Name: Artificial Intelligence and Neural Network Lab (CMLR0604)
• Artificial Intelligence is a broad field in computer science focused on creating
systems that can perform tasks that normally require human intelligence. These
tasks include problem-solving, learning, reasoning, perception, language
understanding, and decision-making.
• AI Components:
• Knowledge Representation: How information about the world is represented.
• Reasoning: How AI systems draw conclusions or make decisions.
• Learning: How AI systems improve from experience (which is where Machine Learning fits in).
Artificial Intelligence (AI)
• Machine Learning is a subset of AI that allows systems to learn from data and
improve performance over time without being explicitly programmed. Instead of
relying on pre-defined rules or instructions, machine learning models use data
patterns and statistical techniques to make decisions and predictions.
• The goal of ML is to enable machines to learn from experience (data) and
generalize this knowledge to make accurate predictions on new, unseen data.
• ML is a major technique used to build AI systems. While traditional AI systems
may rely heavily on logic and human expertise, machine learning allows AI
systems to evolve and adapt through data, making them more flexible and powerful.
Machine Learning (ML)
Types of Machine Learning
1 Supervised Learning
Algorithms learn from labeled data to predict future outcomes.
2 Unsupervised Learning
Algorithms discover patterns and structures in unlabeled data.
3 Semi-Supervised Learning
Algorithms learn from both labeled and unlabeled data, effectively utilizing limited labeled data to predict outcomes.
4 Reinforcement Learning
Algorithms learn through trial and error, optimizing actions based on rewards and penalties.
Supervised Learning: Learning from Labels
Definition
Supervised learning trains models on labeled datasets, allowing them
to predict outputs based on input features. It's akin to a teacher
guiding a student with examples and answers.
Examples
Image classification, spam detection, and fraud detection are common
applications of supervised learning.
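For illustration, a minimal supervised-learning sketch (assuming scikit-learn is available; the toy features, labels, and the choice of logistic regression are illustrative assumptions, not from the slides):
```python
# Minimal supervised-learning sketch: learn from labeled examples, predict on new data.
# Assumes scikit-learn is installed; the toy data below is purely illustrative.
from sklearn.linear_model import LogisticRegression

# Labeled training data: each row is [feature1, feature2], each label is 0 or 1.
X_train = [[0.1, 1.2], [0.3, 0.9], [2.1, 0.2], [1.8, 0.4]]
y_train = [0, 0, 1, 1]

model = LogisticRegression()
model.fit(X_train, y_train)                      # learn a mapping from features to labels

print(model.predict([[0.2, 1.0], [2.0, 0.3]]))   # predictions for unseen inputs
```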
Unsupervised Learning:
Discovering Hidden Patterns
1 No Labels
Unsupervised learning operates
on unlabeled datasets, seeking to
identify patterns and structures
within the data.
2 Applications
Clustering, anomaly detection,
and dimensionality reduction are
examples of unsupervised
learning tasks.
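A correspondingly small unsupervised sketch (again assuming scikit-learn; the points are made up) groups unlabeled data without any target labels:
```python
# Minimal unsupervised-learning sketch: cluster unlabeled points into two groups.
from sklearn.cluster import KMeans

X = [[1.0, 1.1], [0.9, 1.0], [8.0, 8.2], [8.1, 7.9]]   # no labels provided

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)            # cluster index assigned to each point
print(kmeans.cluster_centers_)   # learned cluster centers
```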
Semi-Supervised Learning:
Bridging the Gap
Combining Strengths
Semi-supervised learning leverages
both labeled and unlabeled data,
effectively utilizing limited labeled
data to improve model
performance.
Real-world Benefits
This approach is especially
valuable when labeled data is
scarce, making it practical for
many real-world applications.
Reinforcement Learning: Learning Through Interactions
Trial and Error
Reinforcement learning trains agents to learn through trial and error,
making decisions based on maximizing rewards and minimizing
penalties.
Applications
This paradigm is ideal for tasks involving control, robotics, game
playing, and optimization problems.
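A toy sketch of the trial-and-error idea, an epsilon-greedy agent on a two-action bandit problem (the reward probabilities, exploration rate, and update rule are illustrative assumptions, not from the slides):
```python
# Toy reinforcement-learning sketch: an epsilon-greedy agent learns which of
# two actions yields more reward, purely by trial and error. Illustrative only.
import random

true_reward_prob = [0.2, 0.8]        # hidden reward probability of each action
value_estimate = [0.0, 0.0]          # the agent's learned action values
counts = [0, 0]
epsilon = 0.1                        # exploration rate

for step in range(1000):
    if random.random() < epsilon:
        action = random.randrange(2)                            # explore
    else:
        action = value_estimate.index(max(value_estimate))      # exploit best estimate
    reward = 1 if random.random() < true_reward_prob[action] else 0
    counts[action] += 1
    # incremental average update of the chosen action's value
    value_estimate[action] += (reward - value_estimate[action]) / counts[action]

print(value_estimate)   # should roughly approach [0.2, 0.8]
```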
Supervised Learning: A Closer Look
1 Regression
Predicts continuous values, such as stock prices or
house prices.
2 Classification
Categorizes data into discrete classes, such as spam
detection or image recognition.
3 Deep Learning
Utilizes artificial neural networks with multiple layers
to extract complex patterns from data.
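For the regression case (item 1 above), a minimal sketch (assuming scikit-learn; the size and price numbers are made up) fits a line to labeled continuous targets:
```python
# Minimal regression sketch: predict a continuous value (e.g., a price) from one feature.
from sklearn.linear_model import LinearRegression

X = [[50], [80], [100], [120]]       # e.g., house size
y = [150.0, 240.0, 300.0, 360.0]     # e.g., price (here y = 3 * size, for illustration)

reg = LinearRegression().fit(X, y)
print(reg.predict([[90]]))           # approximately 270.0
```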
Unsupervised Learning: Key
Algorithms
Clustering
Groups similar data points together based on their
characteristics.
Dimensionality Reduction
Simplifies data by reducing the number of features
while preserving essential information.
Association Rule Mining
Discovers relationships and dependencies between data
elements.
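As a small sketch of the dimensionality-reduction idea above (assuming scikit-learn; the data points are made up), PCA projects points onto fewer components while preserving most of the variance:
```python
# Minimal dimensionality-reduction sketch: project 3-feature points onto 2 components.
from sklearn.decomposition import PCA

X = [[2.0, 0.1, 1.9], [4.1, 0.2, 4.0], [6.0, 0.1, 5.9], [8.2, 0.3, 8.1]]

pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)
print(X_reduced.shape)                   # (4, 2): same points, fewer features
print(pca.explained_variance_ratio_)     # how much variance each component keeps
```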
• A neural network is a method in artificial intelligence (AI) that teaches computers to process data in a way that is inspired by the human brain.
• It is a type of machine learning (ML) process, called deep learning, that uses interconnected nodes or neurons in a layered structure that resembles the human brain.
• It creates an adaptive system that computers use to learn from their mistakes and improve continuously.
• Thus, artificial neural networks attempt to solve complicated problems, like summarizing documents or recognizing faces, with greater accuracy.
Introduction to Neural Network (NN)
• AI is the overarching goal of creating intelligent systems, capable of performing tasks that typically require human-level cognitive abilities. AI can be achieved through various approaches, one of which is Machine Learning.
• Machine Learning is a method of achieving AI by enabling systems to learn from data and improve over time without explicit programming. ML is often the method of choice for building intelligent systems, as it allows machines to adapt to new data and experiences.
• Neural Networks are a powerful technique within ML, often used when the complexity of the task requires modeling complex relationships, patterns, or unstructured data. Neural networks enable deep learning, which has become the foundation of many advanced AI applications, including self-driving cars, facial recognition, language translation, etc.
AI, ML and NN connection
• Artificial Intelligence (AI): The goal is to develop a self-driving car that can
navigate and make decisions like a human driver, perceiving the environment and
taking actions.
• Machine Learning (ML): To achieve this, the self-driving car uses machine
learning to learn from vast amounts of driving data (sensor data, images, and
videos). The system uses this data to train models that allow the car to identify
objects like pedestrians, other cars, and traffic signs.
• Neural Networks (NN): Neural networks, specifically Convolutional Neural
Networks (CNNs), are used for image recognition tasks. The neural network
processes data from cameras and LiDAR sensors to identify objects in the car's
environment, like detecting pedestrians on the road. Additionally, Recurrent
Neural Networks (RNNs) may be used for handling sequential data, such as
predicting future actions based on the car's current trajectory and past movements.
Scenario: Self-Driving Cars (AI Application)
• Neural networks can help computers make intelligent decisions with limited human
assistance.
• This is because they can learn and model the relationships between input and output
data that are nonlinear and complex.
• NN can make generalizations and inferences
• Neural networks can comprehend unstructured data and make general observations
without explicit training.
• For instance, they can recognize that two different input sentences have a similar
meaning
• Can you tell me how to make the payment?
• How do I transfer money?
• A neural network would know that both sentences mean the same thing.
Importance of Neural Network (NN)
Neural networks have several use cases across many industries, such as the following:
• Medical diagnosis by medical image classification
• Targeted marketing by social network filtering and behavioral data analysis
• Financial predictions by processing historical data of financial instruments
• Electrical load and energy demand forecasting
• Process and quality control
• Chemical compound identification
Use cases of Neural Network (NN)
Computer vision
Computer vision is the ability of computers to extract information and insights from images
and videos. With neural networks, computers can distinguish and recognize images similar to
humans. Computer vision has several applications, such as the following:
• Visual recognition in self-driving cars so they can recognize road signs and other road users
• Content moderation to automatically remove unsafe or inappropriate content from image
and video archives
• Facial recognition to identify faces and recognize attributes like open eyes, glasses, and
facial hair
• Image labeling to identify brand logos, clothing, safety gear, and other image details
Applications of Neural Network (NN)
Speech recognition
Neural networks can analyze human speech despite varying speech patterns, pitch, tone,
language, and accent. Virtual assistants like Amazon Alexa and automatic transcription
software use speech recognition to do tasks like these:
• Assist call center agents and automatically classify calls
• Convert clinical conversations into documentation in real time
• Accurately subtitle videos and meeting recordings for wider content reach
Applications of Neural Network (NN)
Natural language processing
Natural language processing (NLP) is the ability to process natural, human-created text.
Neural networks help computers gather insights and meaning from text data and
documents. NLP has several use cases, including in these functions:
• Automated virtual agents and chatbots
• Automatic organization and classification of written data
• Business intelligence analysis of long-form documents like emails and forms
• Indexing of key phrases that indicate sentiment, like positive and negative comments on
social media
• Document summarization and article generation for a given topic
Applications of Neural Network (NN)
Recommendation engines
Recommendation engines powered by neural networks have become a crucial part of many
services we use today. They help personalize content and suggest products based on a user's
preferences, behaviors, and past interactions.
Example: Movie Recommendation Engine (Netflix), E-commerce Recommendation Engine
(Amazon), Music Recommendation Engine (Spotify)
Applications of Neural Network (NN)
• The human brain is the inspiration behind neural network architecture.
• Human brain cells, called neurons, form a complex, highly interconnected network and send
electrical signals to each other to help humans process information.
• Similarly, an artificial neural network is made of artificial neurons that work together to solve
a problem.
• Artificial neurons are software modules, called nodes, and artificial neural networks are
software programs or algorithms that, at their core, use computing systems to solve
mathematical calculations.
Working of Neural Network (NN)
Input Layer
Information from the outside world enters the artificial neural network from the input layer. Input nodes process
the data, analyze or categorize it, and pass it on to the next layer.
Hidden Layer
Hidden layers take their input from the input layer or other hidden layers. Artificial neural networks can have a
large number of hidden layers. Each hidden layer analyzes the output from the previous layer, processes it
further, and passes it on to the next layer.
Output Layer
The output layer gives the final result of all the data processing by the artificial neural network. It can have single
or multiple nodes. For instance, if we have a binary (yes/no) classification problem, the output layer will have
one output node, which will give the result as 1 or 0. However, if we have a multi-class classification problem,
the output layer might consist of more than one output node.
Simple Neural Network architecture
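A minimal NumPy sketch of data flowing through such an architecture (the weights, layer sizes, and the choice of a sigmoid activation are illustrative assumptions, not part of the slides):
```python
# Minimal sketch of a forward pass through input, hidden, and output layers.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, 0.2, 0.8])                 # input layer: 3 features

W_hidden = np.array([[0.1, 0.4, -0.2],        # hidden layer: 2 neurons, 3 inputs each
                     [0.3, -0.5, 0.6]])
b_hidden = np.array([0.1, -0.1])

W_output = np.array([[0.7, -0.3]])            # output layer: 1 neuron (binary decision)
b_output = np.array([0.05])

h = sigmoid(W_hidden @ x + b_hidden)          # hidden-layer activations
y = sigmoid(W_output @ h + b_output)          # final output in (0, 1)
print(y)                                      # e.g., treat y >= 0.5 as class 1
```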
How do our brains work?
A biological neuron as a processing element:
• Dendrites: input; Cell body: processor; Synapse: link; Axon: output
• A neuron is connected to other neurons through about 10,000 synapses.
• A neuron receives input from other neurons, and these inputs are combined.
• Once the combined input exceeds a critical level, the neuron discharges a spike: an electrical pulse that travels from the cell body, down the axon, to the next neuron(s).
• The axon endings almost touch the dendrites or cell body of the next neuron.
• Transmission of an electrical signal from one neuron to the next is effected by neurotransmitters.
• Neurotransmitters are chemicals released from the first neuron which bind to the second.
• This link is called a synapse. The strength of the signal that reaches the next neuron depends on factors such as the amount of neurotransmitter available.
How do ANNs work?
• An artificial neuron is an imitation of a human neuron.
• Now, let us have a look at the model of an artificial neuron.
• Inputs x1, x2, …, xm enter a processing (summation) element ∑, which produces the output y:
y = x1 + x2 + … + xm
How do ANNs work?
• Not all inputs are equal: each input xi is first multiplied by a weight wi, and the weighted inputs are summed:
y = x1·w1 + x2·w2 + … + xm·wm
How do ANNs work?
• The signal is not passed down to the next neuron verbatim: the weighted sum vk is passed through a transfer function (activation function) f(vk) to produce the output.
• The output is a function of the input, and it is affected by the weights and the transfer function.
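Putting these pieces together, a single artificial neuron forms the weighted sum of its inputs and passes it through a transfer function; a minimal Python sketch (the input values, weights, and step threshold below are illustrative assumptions, not values from the slides):
```python
# One artificial neuron: weighted sum of inputs followed by a transfer function.
inputs  = [1.0, 0.5, 0.2]      # x1, x2, x3 (illustrative values)
weights = [0.4, 0.6, -0.3]     # w1, w2, w3 (illustrative values)

v = sum(x * w for x, w in zip(inputs, weights))    # v = x1*w1 + x2*w2 + x3*w3 = 0.64

def step(v, threshold=0.5):                        # a simple transfer function f(v)
    return 1 if v >= threshold else 0

y = step(v)
print(v, y)    # 0.64 -> neuron fires, output 1
```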
McCulloch-Pitts (M-P) Neuron Model
• The McCulloch-Pitts neuron is a simple computational model of a biological neuron, and the first such model, proposed by Warren McCulloch and Walter Pitts in 1943.
• It is a binary neuron (its activation function is binary), meaning that it can only have two states: on or off.
• It can be divided into two parts:
• Aggregation: The neuron aggregates multiple Boolean inputs (0 or 1).
• Threshold Decision: Based on the aggregated value, the neuron makes a decision using a threshold function.
• Weights associated with the links can be excitatory (positive) or inhibitory (negative).
• Mostly used to realize logic functions.
McCulloch-Pitts Model of Neuron
The McCulloch-Pitts neuron has three components:
• Inputs: The inputs are the signals that the neuron
receives from other neurons.
• Threshold: The threshold is the value that the
weighted sum of the inputs must reach or exceed in
order for the neuron to fire.
• Output: The output is the signal that the neuron sends
to other neurons.
McCulloch-Pitts Model of Neuron
The McCulloch-Pitts neuron works as follows:
1. The inputs are multiplied by their corresponding
weights.
2. The weighted inputs are summed.
3. If the summed value is greater than or equal to the
threshold, the neuron fires and outputs a 1.
Otherwise, the neuron does not fire and outputs a 0.
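A minimal Python sketch of these three steps (the AND weights and threshold anticipate the analysis on the later slides; treat this as an illustrative reading of the model, not a definitive implementation):
```python
# A minimal McCulloch-Pitts neuron following the three steps above:
# multiply inputs by weights, sum them, compare the sum with the threshold.
def mp_neuron(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Usage example: AND function with w1 = w2 = 1 and threshold 2 (analyzed later).
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", mp_neuron([x1, x2], [1, 1], threshold=2))
```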
McCulloch-Pitts Model of Neuron
Additional notes on the McCulloch-Pitts neuron:
• The McCulloch-Pitts neuron is a binary neuron,
which means that it cannot represent real-valued
data.
• The McCulloch-Pitts neuron is not a very accurate
model of biological neurons. Biological neurons
have a variety of features that are not captured by the
McCulloch-Pitts neuron, such as the ability to
integrate inputs over time.
• The McCulloch-Pitts neuron is not very powerful
and cannot be used to solve complex problems;
more complex neural network models are needed
for such problems.
McCulloch-Pitts Model of Neuron
Overall, the McCulloch-Pitts neuron is a simple and
easy-to-understand model of a biological neuron.
It has been used to simulate the behavior of neural
networks and to develop artificial neural networks.
However, it is neither a very accurate model of biological
neurons nor very powerful.
McCulloch-Pitts Model of Neuron: AND function
Truth table:
x1 x2 | y
 0  0 | 0
 0  1 | 0
 1  0 | 0
 1  1 | 1
Assuming w1 = 1 and w2 = 1
No particular training algorithm, only analysis.
Threshold calculation: the weighted sums for the four input pairs are 0, 1, 1, and 2; only the pair (1, 1) should make the neuron fire, so the threshold is set to 2.
McCulloch-Pitts Model of Neuron: OR function
Truth table:
A B | y
0 0 | 0
0 1 | 1
1 0 | 1
1 1 | 1
Assuming w1 = 1 and w2 = 1, and setting the threshold to 0.5:
o For A = 0, B = 0, the sum is 0, which is less than 0.5, so the output is 0 (correct).
o For A = 0, B = 1, the sum is 1, which is greater than or equal to 0.5, so the output is 1 (correct).
o For A = 1, B = 0, the sum is 1, which is greater than or equal to 0.5, so the output is 1 (correct).
o For A = 1, B = 1, the sum is 2, which is greater than or equal to 0.5, so the output is 1 (correct).
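The same check can be run programmatically; a small sketch of the threshold test, using the weights and threshold from the analysis above:
```python
# Checking the OR analysis: w1 = w2 = 1 with threshold 0.5 reproduces the OR truth table.
def mp_neuron(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", mp_neuron([a, b], [1, 1], threshold=0.5))
# Expected: (0,0)->0, (0,1)->1, (1,0)->1, (1,1)->1
```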
It is not possible to make the neuron fire only for (1, 0) with these weights, so they are not suitable.
Consider w1 = 1 and w2 = -1.
• Activation functions are crucial components in artificial neural networks because they determine the output of a neuron given an input or a weighted sum of inputs.
• Different activation functions are used depending on the task, such as classification or regression, and on the desired properties of the network, such as non-linearity, differentiability, and output range.
Activation functions
Activation functions
(Graphs of common activation functions and "Calculate activation for ..." worked examples were shown as figures.)
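The graphs are not reproduced here, but a small NumPy sketch of several standard activation functions (the slides are assumed to cover functions of this kind) shows their typical outputs:
```python
# Sketch of common activation functions applied to a sample weighted sum v.
import numpy as np

def step(v):     return np.where(v >= 0, 1, 0)         # binary threshold
def sigmoid(v):  return 1.0 / (1.0 + np.exp(-v))       # smooth, output in (0, 1)
def tanh(v):     return np.tanh(v)                     # output in (-1, 1)
def relu(v):     return np.maximum(0, v)               # 0 for negative v, linear otherwise

v = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
for name, f in [("step", step), ("sigmoid", sigmoid), ("tanh", tanh), ("ReLU", relu)]:
    print(name, f(v))
```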
Practice
(Practice problems were shown as figures.)
MP model for X-OR gate
The XOR function is not linearly separable, so it cannot be realized by a single M-P neuron; it requires a combination of neurons, as in the sketch below.
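A sketch of one standard decomposition, XOR(x1, x2) = (x1 AND NOT x2) OR (NOT x1 AND x2), built from M-P neurons; the particular weights and thresholds below are illustrative choices, not necessarily the ones used on the slides:
```python
# XOR is not linearly separable, so a single M-P neuron cannot compute it.
# Two-layer decomposition: XOR = (x1 AND NOT x2) OR (NOT x1 AND x2).
def mp_neuron(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def xor(x1, x2):
    z1 = mp_neuron([x1, x2], [1, -1], threshold=1)    # fires only for (1, 0)
    z2 = mp_neuron([x1, x2], [-1, 1], threshold=1)    # fires only for (0, 1)
    return mp_neuron([z1, z2], [1, 1], threshold=1)   # OR of the two hidden neurons

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))    # 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0
```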
Perceptron
MP model vs. Perceptron
MP model:
• The weights (same) and thresholds are fixed and do not adjust based on input or output (no learning).
• Inputs and outputs are binary (0 or 1).
• Could model simple logical functions (AND, OR, etc.).
• Provided a theoretical foundation for understanding how neurons could perform computations, but it was not practical for solving real-world problems.
Perceptron:
• Could learn to classify input data into categories by adjusting its weights based on feedback (learning rule).
• The inputs could be continuous values, not just binary. The output, however, was still binary (0 or 1).
• Designed for pattern recognition tasks.
• Introduced a practical, trainable model that could be applied to simple classification tasks (see the sketch below).
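A minimal sketch of the perceptron learning rule on the (linearly separable) OR problem; the learning rate, initial weights, and epoch count are illustrative assumptions:
```python
# Sketch of the perceptron learning rule: weights adjust from prediction errors.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 1, 1, 1]                      # OR labels

w = [0.0, 0.0]
bias = 0.0
lr = 0.1                                    # learning rate (illustrative)

for epoch in range(20):
    for (x1, x2), t in zip(X, targets):
        y = 1 if (w[0] * x1 + w[1] * x2 + bias) >= 0 else 0   # predict
        error = t - y
        w[0] += lr * error * x1             # adjust weights based on feedback
        w[1] += lr * error * x2
        bias += lr * error

print(w, bias)
for (x1, x2) in X:
    print(x1, x2, "->", 1 if (w[0] * x1 + w[1] * x2 + bias) >= 0 else 0)
```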
Neural Network architectures