2. Hard Computing
Traditional computing approaches
Deterministic algorithms
Mathematical models
Results are expected to be accurate and predictable.
3. Hard Computing Characteristics
Deterministic algorithms
Exact solution
Binary Logic
Mathematical models
Digital processing.
4. Hard Computing Applications
Numerical Analysis
Optimization Problems
Control Systems
Signal processing
Scientific Simulations (fluid dynamics, structural analysis)
Computer vision (object extraction)
DBMS (efficient data retrieval)
Cryptography
5. Hard Computing Applications
Robotics (path controlling, robotics movements)
Operations Research (decision making, resource allocation,
optimization)
Artificial Intelligence (rule-based systems)
7. Soft Computing components
Fuzzy Logic : degrees of truth (decision making with
incomplete information)
Neural Networks : interconnected nodes, similar to the
neurons of the brain, that learn and adapt from data; used in
pattern recognition, NLP and ML.
Genetic Algorithms : optimization algorithms based on natural
selection; used in scheduling, routing and parameter tuning
for optimization.
8. Soft Computing components
Probabilistic Reasoning : reasoning under uncertainty
(representation and manipulation of uncertain information in
a systematic way); addresses uncertainty in domains such as
finance, medical diagnosis, and autonomous systems.
Evolutionary Computation : optimization and search
algorithms inspired by the process of natural evolution
(engineering design, financial modeling, game playing)
9. Soft Computing unique features
Tolerance for Uncertainty (works with uncertain or
incomplete information)
Flexibility and Adaptability (updates its knowledge and
improves performance)
Approximate Reasoning (not precise, but acceptable within a
certain degree of tolerance)
Parallel Processing (simultaneous processing of large data
sets for complex computation)
10. Soft Computing unique features
Human-like Reasoning (attempts to mimic aspects of human
cognition and reasoning, involving subjective judgments,
linguistic expressions, and decision-making based on
intuition)
Learning and Adaptation (learning from experience and
adapting to changes in the environment)
Integration of Multiple Techniques (e.g., combining fuzzy
logic with neural networks or genetic algorithms)
11. Fuzzy Computing / Fuzzy Logic
A branch of soft computing that deals with reasoning and
decision-making under uncertainty and imprecision.
Fuzzy logic allows the representation of vague or
ambiguous information by using degrees of truth rather than
strict binary values (true or false).
12. Fuzzy Computing key components
Fuzzy sets : allow elements to have degrees of membership between 0
and 1, indicating the extent to which an element belongs to the set.
As an example, consider two fuzzy sets, Young and Very Young, representing
different age ranges:
A = Young, defined on [0, 90]
B = Very Young, defined on [0, 60]
If the range of the Young set is from 0 to 90, then as we move away from 0,
the degree of youngness keeps decreasing and becomes 0 at age 90. Age 30
lies only a third of the way through this range, so it has a fairly high
membership value (about 0.67) for Young.
For the Very Young set the range is from 0 to 60, so age 30 lies exactly at
the center and hence takes a membership value of 0.5.
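The Young and Very Young memberships can be sketched in Python, assuming a simple linearly decreasing membership function (the slides imply but do not formalize this shape); `membership_young` is an illustrative name:

```python
def membership_young(age, upper=90):
    """Linearly decreasing membership: 1 at age 0, falling to 0 at `upper`."""
    if age <= 0:
        return 1.0
    if age >= upper:
        return 0.0
    return (upper - age) / upper

# Young is defined on [0, 90], Very Young on [0, 60]
print(round(membership_young(30, upper=90), 2))  # 0.67
print(round(membership_young(30, upper=60), 2))  # 0.5
```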
13. Fuzzy Computing key components
Membership Functions: the fuzzy membership function is the graphical
way of visualizing the degree of membership of any value in a given fuzzy
set. In the graph, the X-axis represents the universe of discourse and the
Y-axis represents the degree of membership in the range [0, 1].
Singleton membership function: assigns membership value 1 to one particular
value of x and 0 to all other values.
Triangular membership function: the triangle that fuzzifies the input is
defined by three parameters a, b and c, where a and c mark the ends of the
base (membership 0) and b marks the peak (membership 1).
Trapezoidal membership function: defined by four parameters: a, b, c and d.
The span from b to c carries the highest membership value an element can
take, and if x lies between (a, b) or (c, d), it has a membership value
between 0 and 1.
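A minimal sketch of the triangular and trapezoidal membership functions, using the common convention that membership is 0 at the outer parameters and 1 at the peak b (or on the plateau from b to c):

```python
def triangular(x, a, b, c):
    """Triangular MF: 0 at the feet a and c, rising to 1 at the peak b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def trapezoidal(x, a, b, c, d):
    """Trapezoidal MF: rises on (a, b), flat at 1 on [b, c], falls on (c, d)."""
    if x <= a or x >= d:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    if x <= c:
        return 1.0
    return (d - x) / (d - c)
```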
15. Evolutionary Computation
Evolutionary computation is a family of optimization algorithms inspired by the
principles of natural selection and evolution.
These algorithms are used to find approximate solutions to optimization and
search problems where an exact solution may be impractical or impossible to
find.
There are several types of evolutionary computation algorithms, and they
share common principles derived from biological evolution. Some of the key
concepts include:
1) Genetic Algorithms (GAs): These are among the most well-known
evolutionary algorithms. GAs use a population of potential solutions,
represented as individuals with a set of parameters (genetic information).
Through processes like selection, crossover (recombination), and mutation,
new generations of individuals evolve, with the aim of improving the overall
quality of solutions over successive generations.
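The selection / crossover / mutation loop described above can be sketched with a toy "count the 1-bits" fitness; `onemax`, `evolve` and all parameter values are illustrative, not from the slides:

```python
import random

def onemax(bits):
    """Toy fitness: number of 1-bits (maximum = chromosome length)."""
    return sum(bits)

def evolve(pop_size=20, length=16, generations=50, p_mut=0.05, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        def tournament():
            a, b = rng.sample(pop, 2)          # selection: best of two
            return a if onemax(a) >= onemax(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, length)     # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ (rng.random() < p_mut) for bit in child]  # mutation
            nxt.append(child)
        pop = nxt                              # next generation replaces the old
    return max(pop, key=onemax)

best = evolve()
```

Over successive generations the population concentrates on high-fitness chromosomes, illustrating the "improving quality over generations" idea.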
16. 2) Genetic programming (GP):
Similar to genetic algorithms, genetic programming evolves computer
programs rather than fixed-length strings of parameters.
It starts with a population of randomly generated programs and evolves them
through genetic operations.
3) Evolutionary Strategies (ES):
ES evolve populations of real-valued parameter vectors, relying mainly on
mutation and selection (often with self-adapting mutation step sizes)
rather than on crossover. They are well suited to continuous optimization
problems.
4) Differential Evolution (DE):
DE is a population-based optimization algorithm that utilizes differences
between individuals in the population to guide the search for better solutions.
It is particularly effective in continuous parameter spaces.
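The DE idea above, perturbing an individual with a scaled difference of two others and keeping the trial only if it improves, can be sketched on a toy sum-of-squares objective (`sphere` and all defaults are illustrative assumptions):

```python
import random

def sphere(x):
    """Toy continuous objective: sum of squares, minimum 0 at the origin."""
    return sum(v * v for v in x)

def de(dim=3, pop_size=15, F=0.8, CR=0.9, iters=200, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(iters):
        for i in range(pop_size):
            # pick three distinct individuals other than pop[i]
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            j_rand = rng.randrange(dim)
            # mutant a + F*(b - c), mixed into pop[i] by binomial crossover
            trial = [a[k] + F * (b[k] - c[k])
                     if (rng.random() < CR or k == j_rand) else pop[i][k]
                     for k in range(dim)]
            if sphere(trial) <= sphere(pop[i]):   # greedy selection
                pop[i] = trial
    return min(pop, key=sphere)

best = de()   # converges toward the origin
```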
17. 5 ) Particle Swarm Optimization (PSO):
While not strictly an evolutionary algorithm, PSO is often considered in the
same category.
It is inspired by the social behavior of birds and fish and involves a population
of individuals (particles) that move through the search space to find optimal
solutions.
Examples:
Swarm optimization :
inspired by the collective behavior of social organisms, such as
swarms of birds, schools of fish, or colonies of ants.
These methods model the interaction and collaboration among individuals
in a group to solve complex problems.
18. The goal is to find the best solution, or a good approximation of the optimal
solution, through the collective efforts of the swarm.
In PSO, each particle adjusts its position based on its own experience and the
experience of its neighbors.
Key Components :
1. Particles: Individuals representing potential solutions to the optimization
problem.
2. Position and Velocity: Each particle has a position in the search space, which
corresponds to a potential solution, and a velocity, which determines the
direction and speed of its movement.
3. Fitness Function: A measure of how well a particle's position corresponds to an
optimal solution. The goal is to minimize or maximize this fitness function.
4. Personal Best (pBest) and Global Best (gBest): Each particle keeps track of its
best-known position (pBest), and the best-known position among all particles
in the swarm (gBest) is also maintained.
5. Update Rules: The position and velocity of each particle are updated based on
mathematical equations that take into account the particle's previous
experience, the influence of its neighbors, and global information.
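Components 1 to 5 above map directly onto a short PSO sketch; the toy objective and all parameter values (inertia `w`, pulls `c1`, `c2`) are illustrative assumptions:

```python
import random

def fitness(x):
    """Toy objective to minimize: sum of squares, optimum at the origin."""
    return sum(v * v for v in x)

def pso(dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal best positions
    gbest = min(pbest, key=fitness)[:]          # global best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # update rule: inertia + pull toward pBest + pull toward gBest
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if fitness(pos[i]) < fitness(pbest[i]):
                pbest[i] = pos[i][:]
                if fitness(pbest[i]) < fitness(gbest):
                    gbest = pbest[i][:]
    return gbest

best = pso()
```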
19. Examples:
Ant Colony Optimization :
inspired by the foraging behavior of ants.
The goal is to find the best solution from a finite set of possible solutions.
Real ant colonies collectively find the shortest path between their nest and a
food source by depositing and following chemical pheromones on the ground.
The basic components of the ACO algorithm include:
1. Ants: agents that construct candidate solutions.
2. Pheromones: trails that encode the desirability of a particular solution.
Ants deposit pheromones on the paths they traverse, and these pheromone
trails influence the movement of other ants.
3. Solution Construction: ants probabilistically build solutions by considering
both the pheromone levels on paths and a heuristic measure that guides the
search based on domain-specific knowledge. Solutions are constructed
incrementally.
4. Pheromone Update: shorter and better solutions receive higher levels of
pheromones, while longer or suboptimal solutions receive lower levels.
5. Evaporation: pheromone levels decay over time, which prevents premature
convergence and lets the colony adapt to changing conditions.
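A deliberately minimal ACO-flavored sketch of pheromone-weighted choice, length-proportional deposit, and evaporation, choosing among a fixed set of candidate paths (a toy setting; real ACO constructs solutions incrementally, and all names and parameters here are illustrative):

```python
import random

def aco_shortest(path_lengths, n_ants=10, iters=50, rho=0.5, seed=0):
    """Pick among candidate paths; shorter paths accumulate more pheromone."""
    rng = random.Random(seed)
    tau = [1.0] * len(path_lengths)          # pheromone per path
    for _ in range(iters):
        # each ant picks a path with probability proportional to its pheromone
        choices = rng.choices(range(len(path_lengths)), weights=tau, k=n_ants)
        tau = [t * (1 - rho) for t in tau]   # evaporation
        for c in choices:
            tau[c] += 1.0 / path_lengths[c]  # shorter path -> larger deposit
    return max(range(len(tau)), key=lambda i: tau[i])

best = aco_shortest([7.0, 3.0, 9.0])   # index 1 has the shortest path length
```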
20. Neural Network :
A neural network is a computational model inspired by the structure and
functioning of the human brain.
It consists of interconnected nodes, called neurons, organized into layers.
Each neuron receives input signals, processes them, and produces an output
signal.
The output of one layer serves as the input to the next layer, creating a
hierarchy of layers that can learn complex patterns and relationships from
data.
There are several types of neural networks, including:
1) Feedforward Neural Networks (FNN): In this type of neural network,
information travels in one direction, from input to output layer, without any
feedback loops. They are commonly used for tasks like classification and
regression.
2) Recurrent Neural Networks (RNN): RNNs have connections that form
cycles, allowing them to exhibit temporal dynamic behavior. They are well-
suited for tasks involving sequential data, such as time series prediction,
speech recognition, and natural language processing.
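The layer-by-layer forward pass described above can be sketched as follows; the weights are random and purely illustrative:

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, layers):
    """Each layer is (weights, biases); the output of one layer serves as
    the input to the next."""
    a = x
    for W, b in layers:
        a = [sigmoid(sum(w * v for w, v in zip(row, a)) + bias)
             for row, bias in zip(W, b)]
    return a

# a tiny 2-3-1 feedforward network with random (illustrative) weights
rng = random.Random(0)
hidden = ([[rng.uniform(-1, 1) for _ in range(2)] for _ in range(3)], [0.0] * 3)
output = ([[rng.uniform(-1, 1) for _ in range(3)]], [0.0])
y = forward([0.5, -0.2], [hidden, output])   # a single value in (0, 1)
```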
21. 3) Convolutional Neural Networks (CNN): CNNs are designed to process
structured grid-like data, such as images. They consist of layers of
convolutional filters followed by pooling layers, enabling them to learn
spatial hierarchies of features.
4) Generative Adversarial Networks (GAN): GANs consist of two neural
networks, a generator and a discriminator, which are trained
simultaneously. The generator generates new data instances, while the
discriminator tries to distinguish between real and generated data. GANs
are used for generating realistic synthetic data, image-to-image translation,
and other tasks.
5) Long Short-Term Memory (LSTM) Networks: A special type of RNN,
LSTMs are designed to capture long-term dependencies in sequential data
by using a more sophisticated memory cell structure. They are widely used
in applications where remembering past information for a long time is
crucial, such as machine translation and speech recognition.
22. Machine learning :
Focuses on the development of algorithms and statistical models that enable computers
to learn from and make predictions or decisions based on data.
There are three main types of machine learning:
1) Supervised Learning:
In supervised learning, the algorithm is trained on a labeled dataset, meaning
that each input data point is associated with a corresponding output label.
The goal is for the algorithm to learn a mapping from input to output so that it
can predict the correct output for new, unseen data. Examples of supervised
learning tasks include classification (e.g., spam detection, image recognition)
and regression (e.g., predicting house prices, stock prices).
2) Unsupervised Learning:
Unsupervised learning involves training algorithms on unlabeled data, where
the algorithm must find patterns or structure within the data on its own.
The goal is to uncover hidden patterns, group similar data points together, or
reduce the dimensionality of the data. Clustering (e.g., customer
segmentation, image segmentation) and dimensionality reduction (e.g.,
principal component analysis, t-distributed stochastic neighbor embedding) are
common unsupervised learning tasks.
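As a minimal supervised-learning illustration, a 1-nearest-neighbor classifier predicts, for a new point, the label of the closest labeled training point (the tiny dataset below is invented for the example):

```python
def nearest_neighbor(train, query):
    """1-NN: predict the label of the training point closest to the query."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(train, key=lambda item: dist2(item[0], query))
    return label

# invented labeled dataset: class "A" near the origin, class "B" far away
train = [((0.0, 0.1), "A"), ((0.2, 0.0), "A"),
         ((5.0, 5.1), "B"), ((4.8, 5.3), "B")]
print(nearest_neighbor(train, (0.1, 0.1)))  # A
print(nearest_neighbor(train, (5.0, 5.0)))  # B
```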
23. 3) Reinforcement Learning:
Reinforcement learning is a type of learning where an agent learns to interact
with an environment by taking actions and receiving feedback in the form of
rewards or penalties.
The agent's goal is to learn the optimal policy (i.e., sequence of actions) that
maximizes cumulative reward over time. Reinforcement learning has
applications in areas such as robotics, game playing, and autonomous
systems.
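The reward-driven loop above can be sketched with tabular Q-learning on a toy 1-D chain where the agent earns a reward of 1 for reaching the right end (all names and parameter values are illustrative assumptions):

```python
import random

def q_learning(n_states=4, episodes=300, alpha=0.5, gamma=0.9, eps=0.3, seed=0):
    """Tabular Q-learning on a 1-D chain: actions move left/right, and
    reaching the rightmost state ends the episode with reward 1."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]   # Q[state][action], 0=left 1=right
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda act: Q[s][act])
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # temporal-difference update toward r + gamma * best future value
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = q_learning()   # the learned greedy policy moves right in every state
```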
24. Associative Memory :
Information is retrieved based on its content rather than its storage location.
In other words, associative memory allows for accessing information by
providing cues or partial information related to the desired content, enabling
the retrieval of associated memories.
There are several types of associative memory:
Content Addressable Memory (CAM): CAM is a hardware-based
implementation of associative memory commonly used in computer systems.
In CAM, data is stored along with associated tags, and the memory can be
searched using a content-based query. CAM is particularly useful for tasks
such as caching and routing in computer networks.
Neural Associative Memory: Neural associative memory is a type of
artificial neural network (ANN) designed to store and retrieve patterns based
on their content. One of the most well-known models of neural associative
memory is the Hopfield network, proposed by John Hopfield in 1982. Hopfield
networks can store binary patterns and retrieve them from partial or noisy
inputs by converging to stable states through iterative updates.
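The Hopfield behavior described above, storing a bipolar pattern via Hebbian weights and recovering it from a corrupted cue through iterative updates, can be sketched as:

```python
def train_hopfield(patterns):
    """Hebbian learning: W[i][j] accumulates x_i * x_j over the stored
    bipolar (+1/-1) patterns, with a zero diagonal."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j]
    return W

def recall(W, state, sweeps=10):
    """Repeatedly set each unit to the sign of its input field, converging
    to a stable state (ideally a stored pattern)."""
    s = list(state)
    for _ in range(sweeps):
        for i in range(len(s)):
            field = sum(W[i][j] * s[j] for j in range(len(s)))
            s[i] = 1 if field >= 0 else -1
    return s

stored = [1, -1, 1, -1, 1, -1]
W = train_hopfield([stored])
noisy = [1, -1, -1, -1, 1, -1]        # one unit flipped
print(recall(W, noisy) == stored)     # True
```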
25. Semantic Associative Memory:
Semantic associative memory is a concept in cognitive psychology that
describes how memories are interconnected based on semantic
relationships.
According to this model, memories are organized in a network where related
concepts are linked to each other, facilitating the retrieval of associated
information. Semantic associative memory plays a crucial role in human
cognition, influencing processes such as language comprehension, problem-
solving, and decision-making.
26. Adaptive Resonance Theory (ART) :
ART is a framework developed by Stephen Grossberg and Gail Carpenter in
the 1980s to model how neural networks learn and adapt to new information
while maintaining stability and plasticity.
ART networks are a class of artificial neural networks that exhibit properties
of self-organization, unsupervised learning, and adaptive response to novel
stimuli.
The key idea behind ART is to address the stability-plasticity dilemma in
neural networks.
Stability refers to the ability of a network to maintain learned representations
in the face of new information, while plasticity refers to the ability to adapt and
learn from new experiences.
ART networks achieve this balance by dynamically adjusting their response
to input stimuli based on the level of familiarity or novelty.
27. Adaptive Resonance Theory (ART) Components :
1) Recognition and Comparison: When presented with input data, ART
networks compare the input pattern to previously learned patterns stored in
memory. This comparison is done using a similarity measure, such as the
cosine similarity or Euclidean distance.
2) Vigilance Parameter: ART networks use a parameter called vigilance to
control the sensitivity to input patterns. A higher vigilance value results in a
stricter matching criterion, while a lower vigilance value allows for more
flexibility in recognizing patterns. The vigilance parameter influences the
network's ability to learn new patterns while maintaining stability.
3) Adaptation and Learning: If the input pattern matches an existing
memory representation above a certain threshold (determined by the
vigilance parameter), the network reinforces the corresponding memory
trace. If the input pattern is sufficiently novel or dissimilar, the network
creates a new memory representation.
28. Adaptive Resonance Theory (ART) Components :
Reset Mechanism: In situations where the input pattern is too dissimilar to
existing memories and fails to activate any network units, ART networks
employ a reset mechanism to adapt to novel input patterns. The reset
mechanism allows the network to create new memory representations and
adapt its internal state to accommodate new information.
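A much-simplified ART-1-style sketch of the match / vigilance / learn / reset cycle described above (the overlap-based match rule and intersection-based learning follow the spirit of ART-1, but details are simplified; assumes nonzero binary patterns, and all names are illustrative):

```python
def art_classify(patterns, vigilance=0.7):
    """Cluster binary patterns ART-1 style: a pattern joins the first
    category whose prototype matches it at least `vigilance`; otherwise
    a reset occurs and a new category is created.
    Assumes every pattern has at least one 1-bit."""
    categories = []                  # learned prototypes (lists of 0/1)
    labels = []
    for x in patterns:
        for k, w in enumerate(categories):
            overlap = sum(a & b for a, b in zip(x, w))
            if overlap / sum(x) >= vigilance:              # vigilance test passes
                categories[k] = [a & b for a, b in zip(x, w)]  # learn: intersect
                labels.append(k)
                break
        else:                        # reset: no stored category matched
            categories.append(list(x))
            labels.append(len(categories) - 1)
    return labels

labels = art_classify([[1, 1, 0, 0], [1, 1, 1, 0], [0, 0, 1, 1]], vigilance=0.6)
```

A higher vigilance would split the first two patterns into separate categories, illustrating how vigilance trades off stability against plasticity.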
29. Deep Learning :
Deep learning is a subset of machine learning, which in turn is a subset of
artificial intelligence (AI).
It involves algorithms known as artificial neural networks, which are inspired
by the structure and function of the human brain. These neural networks are
capable of learning from data, identifying patterns, and making predictions or
decisions.
Deep learning algorithms are particularly effective for tasks such as image
and speech recognition, natural language processing, and many other types
of pattern recognition tasks.
They have gained significant attention and popularity due to their ability to
automatically learn features from raw data, without the need for manual
feature extraction, which was a common practice in traditional machine
learning approaches.
Some of the popular deep learning architectures include convolutional neural
networks (CNNs) for image recognition, recurrent neural networks (RNNs) for
sequential data processing, and transformers for natural language processing
tasks.