This document summarizes a lecture on using neural networks for constrained optimization problems. It introduces the Boltzmann machine and continuous Hopfield nets, neural network architectures that can find solutions to constrained problems such as the Traveling Salesman Problem (TSP). The Boltzmann machine uses a probabilistic update procedure and simulated annealing to search for optimal solutions; candidate solutions are represented by the states of the units, while fixed weights encode the problem constraints. Its architecture has units arranged in rows and columns, with weights that encourage at most one active unit per row and column. The algorithm iteratively proposes state changes and accepts or rejects them probabilistically based on the resulting change in a consensus function.
2. Neural Nets for Constrained Optimization (Outline)
Introduction
Boltzmann machine
– Introduction
– Architecture and Algorithm
Boltzmann machine: application to the TSP
Continuous Hopfield nets
Continuous Hopfield nets: application to the TSP
References and suggested reading
3. Introduction
(Fausett)
Some nets are designed for constrained optimization problems, such as the Traveling Salesman Problem (TSP). These nets have fixed weights that incorporate information about both the constraints and the quantity to be optimized. The net iterates to find a pattern of output signals that represents a solution to the problem. Examples of such nets are the Boltzmann machine (without learning), the continuous Hopfield net, and several variations (Gaussian and Cauchy nets). Other optimization problems to which this type of neural net can be applied include job-shop scheduling and space allocation.
4. Traveling Salesman Problem (TSP)
The aim of the TSP is to find a tour of a given set of cities that is of minimum length. A tour consists of visiting each city exactly once and returning to the starting city; the tour of minimum total distance is desired. The difficulty of finding a solution increases rapidly as the number of cities increases. Many approaches other than neural nets to solve this problem are extensively reported in the literature.
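To make the combinatorial growth concrete, here is a minimal brute-force solver for a tiny instance; the 4-city distance matrix is invented for illustration. It checks all (n-1)! tours with the start city fixed, which is exactly what becomes infeasible as n grows.

```python
from itertools import permutations

def tour_length(dist, tour):
    # Sum of leg lengths around the closed tour (returns to the start).
    return sum(dist[tour[k]][tour[(k + 1) % len(tour)]] for k in range(len(tour)))

def brute_force_tsp(dist):
    # Fix city 0 as the start and try every ordering of the remaining cities.
    n = len(dist)
    best = min(permutations(range(1, n)),
               key=lambda rest: tour_length(dist, (0,) + rest))
    return (0,) + best, tour_length(dist, (0,) + best)

# A hypothetical symmetric 4-city distance matrix.
dist = [[0, 1, 9, 4],
        [1, 0, 2, 9],
        [9, 2, 0, 3],
        [4, 9, 3, 0]]
tour, length = brute_force_tsp(dist)
print(tour, length)  # (0, 1, 2, 3) 10
```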
5. Introduction: NN approach to constrained optimization
Each unit represents a hypothesis, with the unit "on" if the hypothesis is true and "off" if it is false. The weights are fixed to represent both the constraints of the problem and the function to be optimized. The solution of the problem corresponds to a minimum of an energy function, or a maximum of a consensus function, for the net. Neural nets have several potential advantages over traditional techniques for certain types of optimization problems:
– They can find near-optimal solutions quickly for large problems.
6. Introduction: NN approach to constrained optimization (continued)
– They can also handle situations in which some constraints are weak (desirable but not absolutely required).
For example, in the TSP it is physically impossible to visit two cities simultaneously, but it is only desirable (not required) to visit each city exactly once. The difference between these types of constraints can be reflected by making the penalty for having two units in the same column "on" simultaneously larger than the penalty for having two units in the same row "on" simultaneously. If it is more important to visit some cities than others, those cities can be given larger self-connection weights.
7. Introduction: NN architecture for the TSP
For n cities, we use n² units arranged in a square array. A valid tour is represented by exactly one unit being "on" in each row and in each column.
– Two units being "on" in a row indicates that the corresponding city was visited twice.
– Two units being "on" in a column indicates that the salesman was in two cities at the same time.
The units in each row are fully interconnected; similarly, the units in each column are fully interconnected. The weights are set so that units within the same row (or the same column) tend not to be "on" at the same time. In addition (see later), there are connections between units in adjacent columns, and between units in the first and last columns, corresponding to the distances between cities.
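The grid representation above can be checked mechanically: a state matrix encodes a valid tour exactly when every row and every column contains a single "on" unit. A small sketch in plain Python (the example matrices are invented):

```python
def is_valid_tour(u):
    # Rows are cities, columns are tour positions: a valid tour has
    # exactly one unit "on" in every row and in every column.
    n = len(u)
    rows_ok = all(sum(row) == 1 for row in u)
    cols_ok = all(sum(u[i][j] for i in range(n)) == 1 for j in range(n))
    return rows_ok and cols_ok

valid = [[0, 1, 0],
         [0, 0, 1],
         [1, 0, 0]]    # visit city 2 first, then city 0, then city 1
invalid = [[1, 1, 0],
           [0, 0, 1],
           [0, 0, 0]]  # city 0 visited twice; city 2 never visited
print(is_valid_tour(valid), is_valid_tour(invalid))  # True False
```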
9. Neural Nets for Constrained Optimization (Outline)
Introduction
Boltzmann machine
– Introduction
– Architecture and Algorithm
Boltzmann machine: application to the TSP
Continuous Hopfield nets
Continuous Hopfield nets: application to the TSP
References and suggested reading
10. Boltzmann machine
The states of the units of a Boltzmann machine are binary valued, with probabilistic state transitions. The configuration of the net is the vector of the states of the units. The Boltzmann machine described in this lecture has fixed weights wij, which express the degree of desirability that units Xi and Xj both be "on". In applying the Boltzmann machine to constrained optimization problems, the weights represent the constraints of the problem and the quantity to be optimized. Note that the description presented here is based on the maximization of a consensus function (rather than the minimization of a cost function). The architecture of a Boltzmann machine is quite general, consisting of
– a set of units (Xi and Xj are two representative units), and
– a set of bidirectional connections between pairs of units.
If units Xi and Xj are connected, wij != 0. The bidirectional nature of the connection is often represented as wij = wji.
11. Boltzmann machine (continued)
A unit may also have a self-connection wii (or, equivalently, there may be a bias unit, which is always "on" and connected to every other unit; in this interpretation, the self-connection weight would be replaced by the bias weight). The state xi of unit Xi is either 1 ("on") or 0 ("off"). The objective of the net is to maximize the consensus function

    C = Σ_i Σ_{j ≤ i} w_ij x_i x_j

where the sums run over all units of the net. The net finds this maximum (or at least a local maximum) by letting each unit attempt to change its state (from "on" to "off" or vice versa). The attempts may be made either sequentially (one unit at a time) or in parallel (several units simultaneously). Only the sequential Boltzmann machine is discussed here.
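As a sketch, the consensus function can be computed directly from a symmetric weight matrix and a binary state vector. The 2-unit weights below are invented for illustration: a bonus of 1 on each self-connection and a penalty of -2 between the two units.

```python
def consensus(w, x):
    # C = sum over i and j <= i of w[i][j] * x[i] * x[j];
    # the j == i term contributes the self-connection weight w[i][i].
    n = len(x)
    return sum(w[i][j] * x[i] * x[j] for i in range(n) for j in range(i + 1))

w = [[1.0, -2.0],
     [-2.0, 1.0]]
print(consensus(w, [1, 0]))  # 1.0: one unit on, bonus only
print(consensus(w, [1, 1]))  # 0.0: both on, 1 + 1 - 2
```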
13. Boltzmann machine (continued)
The change in consensus if unit Xi were to change its state (from 1 to 0 or from 0 to 1) is

    ΔC(i) = [1 − 2x_i] ( w_ii + Σ_{j ≠ i} w_ij x_j )

where xi is the current state of unit Xi. The coefficient [1 − 2x_i] is +1 if unit Xi is currently "off" (xi = 0) and −1 if it is currently "on" (xi = 1). The sum collects the contribution from all units Xj that are "on" and connected to Xi through wij. Note that if unit Xi were to change its activation, the resulting change in consensus can be computed from information that is local to unit Xi, i.e., from the weights on its connections and the activations of the units to which it is connected (with wij = 0 if unit Xi is not connected to unit Xj).
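The locality of the update can be checked numerically: the formula for ΔC(i) agrees with recomputing the full consensus before and after the flip. A self-contained sketch with an invented 2-unit weight matrix (self-connections on the diagonal):

```python
def delta_consensus(w, x, i):
    # dC(i) = (1 - 2*x[i]) * (w[i][i] + sum over j != i of w[i][j] * x[j])
    field = w[i][i] + sum(w[i][j] * x[j] for j in range(len(x)) if j != i)
    return (1 - 2 * x[i]) * field

def full_consensus(w, x):
    # Brute-force consensus, each symmetric pair counted once.
    n = len(x)
    return sum(w[i][j] * x[i] * x[j] for i in range(n) for j in range(i + 1))

w = [[1.0, -2.0],
     [-2.0, 1.0]]   # invented 2-unit example: bonus 1, penalty -2
x = [1, 0]
flipped = [1, 1]    # state after unit 1 changes from "off" to "on"
print(delta_consensus(w, x, 1))                           # -1.0
print(full_consensus(w, flipped) - full_consensus(w, x))  # -1.0
```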
14. Boltzmann machine (continued)
The control parameter T (called the temperature) is gradually reduced as the net searches for a maximal consensus. However, unit Xi does not necessarily change its state, even if doing so would increase the consensus of the net. The probability of the net accepting a change of state for unit Xi is

    A(i, T) = 1 / (1 + exp(−ΔC(i) / T))

Lower values of T make it more likely that the net will accept a change of state that increases its consensus, and less likely that it will accept a change that reduces its consensus:

    as T → 0: exp(−ΔC(i)/T) → 0 (assuming ΔC(i) > 0), so A(i, T) → 1.
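The acceptance rule is a logistic function of ΔC scaled by the temperature; a quick sketch shows the limiting behaviour the slide describes:

```python
import math

def acceptance(dC, T):
    # A(i, T) = 1 / (1 + exp(-dC / T))
    return 1.0 / (1.0 + math.exp(-dC / T))

print(acceptance(0.0, 1.0))   # 0.5: a neutral change is a coin flip
print(acceptance(1.0, 10.0))  # high T: ~0.52, nearly indifferent
print(acceptance(1.0, 0.1))   # low T: ~1.0, favourable change accepted
print(acceptance(-1.0, 0.1))  # low T: ~0.0, unfavourable change rejected
```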
15. Boltzmann machine (continued)
The use of a probabilistic update procedure for the activations, with the control parameter decreasing as the net searches for the optimal solution to the problem represented by its weights, reduces the chances of the net getting stuck in a local maximum. This process of gradually reducing T is called simulated annealing. It is analogous to the physical annealing process used to produce a strong metal (with a regular crystalline structure): during annealing, a molten metal is cooled gradually in order to avoid imperfections in its crystalline structure due to freezing.
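Putting the pieces together, a minimal sequential update loop with a geometric cooling schedule might look as follows. The 2-unit weights, starting temperature, and cooling factor are all invented for illustration; with bonus 1 and penalty -2, the consensus maxima are the states with exactly one unit on.

```python
import math
import random

def anneal(w, x, T=10.0, cooling=0.9, epochs=60, seed=0):
    # Sequential Boltzmann machine: propose single-unit flips, accept
    # with probability 1/(1 + exp(-dC/T)), then cool the temperature.
    rng = random.Random(seed)
    n = len(x)
    for _ in range(epochs):
        for _ in range(n):  # one epoch of attempted flips
            i = rng.randrange(n)
            field = w[i][i] + sum(w[i][j] * x[j] for j in range(n) if j != i)
            dC = (1 - 2 * x[i]) * field
            arg = max(min(-dC / T, 700.0), -700.0)  # guard math.exp overflow
            if rng.random() < 1.0 / (1.0 + math.exp(arg)):
                x[i] = 1 - x[i]
        T *= cooling  # simulated annealing: gradually reduce T
    return x

w = [[1.0, -2.0],
     [-2.0, 1.0]]
print(anneal(w, [1, 1]))  # settles into a state with exactly one unit on
```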
16. Neural Nets for Constrained Optimization (Outline)
Introduction
Boltzmann machine
– Introduction
– Architecture and Algorithm
Boltzmann machine: application to the TSP
Continuous Hopfield nets
Continuous Hopfield nets: application to the TSP
References and suggested reading
17. Boltzmann machine: Architecture
Here is the architecture of a Boltzmann machine for units arranged in a two-dimensional array. The units within each row are fully interconnected; similarly, the units within each column are fully interconnected. The weight on each of these connections is −p (where p > 0). Each unit also has a self-connection, with weight b > 0. A typical unit is labelled U_i,j.
[Figure: an n × n grid of units U_1,1 … U_n,n; every connection within a row or within a column carries weight −p, and every unit carries a self-connection of weight b.]
18. Boltzmann machine: Algorithm
Setting the weights:
The weights for a Boltzmann machine are fixed, so that the net will tend to make state transitions toward a maximum of the consensus function defined above. If we wish the net (shown on the previous slide) to have exactly one unit "on" in each row and in each column, we must choose the values of the weights p and b so that improving the configuration corresponds to increasing the consensus. Each unit is connected to every other unit in the same row with weight −p (p > 0); similarly, each unit is connected to every other unit in the same column with weight −p. These weights are penalties for violating the condition that at most one unit be "on" in each row and each column. In addition, each unit has a self-connection of weight b > 0.
19. 19
Boltzmann machine Algorithm
Fausett
The self-connection weight is an incentive (bonus) to encourage
a unit to turn “on” if it can do so without causing more than one
unit to be on in a row or column.
If p > b, the net will function as desired:
– If unit Uij is “off” (uij = 0) and none of the units connected to
Uij is “on”, changing the status of Uij to “on” will increase the
consensus of the net by the amount b (which is a desirable
change).
– On the other hand, if one of the units in row i or in column j
(say, Ui,j+1) is already “on”, attempting to turn unit Ui,j “on”
would result in a change of consensus of b – p.
Thus, for b – p < 0 (i.e. p > b), the effect would be to decrease
the consensus (the net will tend to reject this unfavorable
change).
– Bonus and penalty connections, with p>b, will be used in the
net for the TSP to represent the constraints for a valid tour.
20. 20
Boltzmann machine Algorithm
Fausett
Application procedure:
The weight between units Ui,j and UI,J is denoted w(i,j; I,J)
The application procedure is as follows:
Step 0 Initialize weights to represent the constraints of the problem
Initialize the control parameter (temperature) T
Initialize activations of units (random binary values).
Step 1 While stopping condition is false, do steps 2-8
Step 2 Do steps 3-6 n² times (this constitutes an epoch)
Step 3 Choose integers I and J at random between 1 and
n (unit UIJ is the current candidate to change its
state)
Step 4 Compute the change in consensus that would result:
∆C(I,J) = [1 − 2uI,J] [ w(I,J; I,J) + ∑(i,j)≠(I,J) w(i,j; I,J) ui,j ]
where the weights are
w(i,j; I,J) = −p if i = I or j = J (but not both)
w(i,j; i,j) = b
21. 21
Boltzmann machine Algorithm
Fausett
Step 5 Compute the probability of acceptance of the change:
A(T) = 1 / [1 + exp(−∆C(I,J) / T)]
Step 6 Determine whether or not to accept the change:
Let R be a random number between 0 and 1.
If R < A, accept the change:
uI,J(new) = 1 − uI,J(old) (this changes the state of unit UI,J).
If R >= A, reject the proposed change.
Step 7 Reduce the control parameter:
T(new) = 0.95 T(old)
Step 8 Test stopping condition:
If there has been no change of state for a specified number of epochs, or if
the temperature has reached a specified value, stop; otherwise continue.
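The inner loop of this procedure (steps 3-6) can be sketched in Python for the row/column constraint net described earlier. This is an illustrative sketch, not Fausett's code: the function name and the overflow guard for very low temperatures are my additions.

```python
import math
import random

def boltzmann_step(u, p, b, T, rng=random):
    """One candidate state change (steps 3-6) for the n x n Boltzmann net.

    u is an n x n array of 0/1 activations.  The weights are implicit:
    -p between units sharing a row or column, self-connection weight b.
    """
    n = len(u)
    I, J = rng.randrange(n), rng.randrange(n)     # step 3: pick a unit at random
    # Activations of the units connected to U(I,J): same row or same column.
    neighbors = sum(u[I][j] for j in range(n) if j != J) \
              + sum(u[i][J] for i in range(n) if i != I)
    # Step 4: change in consensus if U(I,J) flips its state.
    delta_c = (1 - 2 * u[I][J]) * (b - p * neighbors)
    # Step 5: acceptance probability A = 1 / (1 + exp(-delta_c / T)).
    # Guard against overflow for strongly unfavorable changes at low T.
    x = -delta_c / T
    a = 0.0 if x > 500 else 1.0 / (1.0 + math.exp(x))
    # Step 6: accept or reject the proposed change.
    if rng.random() < a:
        u[I][J] = 1 - u[I][J]
```

Running epochs of n² such steps while reducing T by T(new) = 0.95 T(old) drives the net toward configurations with exactly one unit “on” in each row and column (for p > b > 0).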
22. 22
Boltzmann machine Algorithm
Fausett
Initial Temperature:
The initial temperature should be taken large enough
so that the probability of accepting a change of state is
approximately 0.5, regardless of whether the change
is beneficial or detrimental.
However, since a high starting temperature increases
the required computation time significantly, a lower
initial temperature may be more practical in some
applications.
T → ∞ ⇒ exp(−∆C / T) → 1 ⇒ A(T) → 0.5
23. 23
Boltzmann machine Algorithm
Fausett
Cooling schedule
Theoretical results show that the temperature should be cooled
slowly according to the logarithmic formula:
T(k) = T0 / log(1 + k)
where k is an epoch.
An exponential cooling schedule can also be used:
T(new) = α T(old)
where the temperature is reduced after each epoch.
– A larger α (such as α = 0.98) allows for fewer epochs at
each temperature
– A smaller α (such as α = 0.9) may require more epochs at
each temperature.
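As a quick illustration of how differently the two schedules cool, here is a minimal sketch (the function names are assumptions for illustration):

```python
import math

def logarithmic_schedule(t0, k):
    """Theoretical schedule: T(k) = T0 / log(1 + k), for epoch k >= 1."""
    return t0 / math.log(1 + k)

def exponential_schedule(t_old, alpha=0.95):
    """Practical schedule: T(new) = alpha * T(old), applied after each epoch."""
    return alpha * t_old

# The logarithmic schedule cools far more slowly than the exponential one.
t_log, t_exp = 10.0, 10.0
for k in range(1, 101):
    t_log = logarithmic_schedule(10.0, k)
    t_exp = exponential_schedule(t_exp)
```

After 100 epochs starting from T0 = 10, the logarithmic schedule is still above 2, while the exponential schedule with α = 0.95 has dropped below 0.1.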
24. 24
Neural Nets for Constrained Optimization.
Introduction
Boltzmann machine
– Introduction
– Architecture and Algorithm
Boltzmann machine: application
to the TSP
Continuous Hopfield nets
Continuous Hopfield nets:
application to the TSP
References and suggested
reading
25. 25
Boltzmann machine Application TSP
Fausett
Nomenclature:
n number of cities in the tour (there are n² units in the net)
i index designating a city; 1 ≤ i ≤ n
j index designating position in the tour, mod n; i.e.
j = n + 1 → j = 1 and j = 0 → j = n
Ui,j unit representing the hypothesis that the ith city is visited at the jth
step of the tour
ui,j activation of unit Ui,j :
ui,j = 1 if the hypothesis is true,
ui,j = 0 if the hypothesis is false
di,k distance between city i and city k (k ≠ i)
d maximum distance between any 2 cities.
26. 26
Boltzmann machine Application TSP
Fausett
Architecture:
For this application it is convenient to arrange the units of the NN
in a grid (Fig. below).
– The rows of the grid represent cities to be visited
– The columns represent the position of a city in the tour.
[Figure: 10 × 10 grid of units UA,1 … UJ,10; rows correspond to cities A–J, columns to tour positions 1–10.]
27. 27
Boltzmann machine Application TSP
Fausett
Ui,j has a self-connection of weight b;
this represents the desirability of visiting city i at stage j.
Ui,j is connected to all other units in row i with penalty weight –p;
this represents the constraints that the same city is not to be
visited twice.
Ui,j is connected to all other units in column j with penalty weight –p;
this represents the constraint that 2 cities cannot be visited
simultaneously.
Ui,j is connected to Uk,j+1, for 1 ≤ k ≤ n, k ≠ i, with weight −di,k.
This represents the distance traveled in making the
transition from city i at stage j to city k at stage j+1.
Ui,j is connected to Uk,j−1, for 1 ≤ k ≤ n, k ≠ i, with weight −di,k.
This represents the distance traveled in making the
transition from city k at stage j−1 to city i at stage j.
28. 28
Boltzmann machine Application TSP
Fausett
Setting the weights: The desired net will be constructed in 2 steps
First, a NN will be formed for which the maximum consensus occurs
whenever the constraints of the problem are satisfied.
i.e. when exactly one unit is “on” in each row and in each column.
Second, we will add weighted connections to represent the distances
between the cities.
In order to treat the problem as a maximum consensus problem, the
weights representing distances will be negative.
A Boltzmann machine with weights representing the constraints (but
not the distances) for the TSP is shown below.
If p>b, the net will function as desired (as explained earlier).
To complete the formulation of a Boltzmann NN for the TSP,
weighted connections representing distances must be included.
For this purpose, a typical unit Ui,j is connected to the units Uk,j−1 and
Uk,j+1 (for all k ≠ i) by weights that represent the distances between
city i and city k.
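A minimal sketch of this weight construction in Python (0-based indices; the function name and the modular test for adjacent tour positions are my assumptions, not part of Fausett's presentation):

```python
def make_weight_fn(dist, p, b):
    """Weight w(i,j; I,J) for the Boltzmann TSP net (illustrative sketch).

    dist[i][k] is the distance between cities i and k; n = len(dist).
    First index = city, second index = tour position, both 0-based.
    For the net to work as intended, choose p > b > 2 * max distance.
    """
    n = len(dist)
    def w(i, j, I, J):
        if (i, j) == (I, J):
            return b                     # self-connection: bonus
        if i == I or j == J:
            return -p                    # same city or same position: penalty
        if (j - J) % n in (1, n - 1):    # adjacent tour positions (mod n)
            return -dist[i][I]           # distance weight
        return 0.0                       # no connection otherwise
    return w
```

Note that positions 1 and n are treated as adjacent (mod n), which matches the remark that units in the last column are connected to units in the first column.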
29. 29
Boltzmann machine Application TSP
Fausett
[Figure: Boltzmann machine with constraint weights for the TSP. Each unit Ui,j has a self-connection of weight b, and units in the same row or column are joined by penalty connections of weight –p (same architecture as the earlier figure).]
30. 30
Boltzmann machine Application TSP
Fausett
The distance weights are shown on the figure below for the typical
unit Ui,j
[Figure: unit Ui,j connected to units U1,j−1, …, Uk,j−1, …, Un,j−1 and U1,j+1, …, Uk,j+1, …, Un,j+1 by weights −d1,i, …, −dk,i, …, −dn,i.]
Boltzmann NN for the TSP; weights represent the distances for unit Ui,j
31. 31
Boltzmann machine Application TSP
Fausett
NOTE:
Units in the last column are connected to units in the first column by
connections representing the appropriate distances.
However, units in a particular column are not connected to units in
columns other than those immediately adjacent to the said column.
We now consider the relation between the constraint weight b and the
distance weights.
Let d denote the maximum distance between any 2 cities in the tour.
Assume that no city is visited in the jth position of the tour and that
no city is visited twice.
In this case, some city, say i, is not visited at all; i.e. no unit is “on” in
column j or in row i.
Since allowing Ui,j to turn on should be encouraged, the weights
should be set so that the consensus will be increased if it turns on.
The change in consensus will be b − di,k1 − di,k2, where
– k1 indicates the city visited at stage j-1 of the tour
– k2 denotes the city visited at stage j+1 (and city i is visited at
stage j).
This change is >= b − 2d (the lower bound corresponds to the
maximum distance d between cities).
32. 32
Boltzmann machine Application TSP
Fausett
However, equality will occur only if the cities visited in positions j-1
and j+1 are both at the maximum distance d from city i.
In general, requiring the change in consensus to be positive will
suffice, so we take b>2d.
Thus, we see that if p>b, the consensus function has a higher value
for a feasible solution (one that satisfies the constraints) than for a
non-feasible solution
If b>2d the consensus will be higher for a short feasible solution
than for a longer tour.
In summary: p>b >2d
33. 33
Boltzmann machine Analysis
Fausett
The TSP is a nice model for a variety of constrained optimization
problems.
It is however a difficult problem for the Boltzmann machine,
because in order to go from one valid tour to another, several
invalid tours must be accepted.
By contrast, the transition from valid solution to valid solution
may not be as difficult in other constrained optimization
problems.
Equilibrium:
– The net is in thermal equilibrium (at a particular
temperature) when the probabilities Pα and Pβ of 2 configurations
of the net, α and β, obey the Boltzmann distribution
Pα / Pβ = exp[ (Eβ − Eα) / T ]
where Eα is the energy of configuration α and Eβ is the energy of
configuration β.
34. 34
Boltzmann machine Analysis
Fausett
At higher temperatures, the probabilities of different configurations are
more nearly equal:
T → ∞ ⇒ exp[(Eβ − Eα)/T] → 1 ⇒ Pα ≈ Pβ
At lower temperatures, there is a stronger bias toward configurations
with lower energy.
Starting at a sufficiently high temperature ensures that the net will
have approximately equal probabilities of accepting or rejecting any
proposed state transition.
If the temperature is reduced slowly, the net will remain in equilibrium
at lower temperatures.
It is not practical to verify the equilibrium condition directly at
each temperature, as there are too many possible configurations.
35. 35
Boltzmann machine Analysis
Fausett
Energy function:
The energy of a configuration can be defined as:
E = −∑i<j wij xi xj + ∑i θi xi
where θi is a threshold and self-connections (or biases) are not
used.
The difference in energy between a configuration with unit Xk “off” and
one with Xk “on” (the state of all other units remaining
unchanged) is
∆E(k) = θk − ∑i wik xi
If the units change their activations randomly and
asynchronously, and the net always moves to a lower energy
(rather than moving to a lower energy with a probability that is
less than 1), the discrete Hopfield net results.
36. 36
Boltzmann machine Analysis
Fausett
To simplify notation, one may include a unit in the net that is
connected to every other unit and is always “on”.
This allows the threshold to be treated as any other weight, so
that
E = −∑i<j wij xi xj
The energy gap between the configuration with unit Xk “off” and that
with unit Xk “on” is then
∆E(k) = −∑i wik xi
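A small sketch that checks this relation numerically, assuming a symmetric weight matrix with zero diagonal and the sign convention ∆E(k) = E(Xk on) − E(Xk off); the function names are mine:

```python
def energy(w, x):
    """E = -sum_{i<j} w[i][j] * x[i] * x[j]  (thresholds folded into w)."""
    n = len(x)
    return -sum(w[i][j] * x[i] * x[j]
                for i in range(n) for j in range(i + 1, n))

def energy_gap(w, x, k):
    """Delta E(k) = E(x_k on) - E(x_k off) = -sum_{i != k} w[i][k] * x[i].

    Assumes w is symmetric; the result does not depend on x[k] itself.
    """
    return -sum(w[i][k] * x[i] for i in range(len(x)) if i != k)
```

Because the gap depends only on the weighted input to Xk, a unit can decide whether flipping lowers the energy without recomputing the full energy.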
37. 37
Neural Nets for Constrained Optimization.
Introduction
Boltzmann machine
– Introduction
– Architecture and Algorithm
Boltzmann machine: application
to the TSP
Continuous Hopfield nets
Continuous Hopfield nets:
application to the TSP
References and suggested
reading
38. 38
Continuous Hopfield net
Fausett
A modification of the discrete Hopfield net, with continuous-
valued output functions, can be used either for associative
memory problems (as with the discrete form) or constrained
optimization problems such as the TSP.
As with the discrete Hopfield net, the connections between units
are bidirectional, so the weight matrix is symmetric;
– i.e. the connection from unit Ui to unit Uj (with weight wij) is the
same as the connection from Uj to Ui (with weight wji).
For the continuous Hopfield net, we denote
– the internal activity of a neuron as ui;
– its output signal as vi = g(ui).
If we define an energy function
E = 0.5 ∑i ∑j wij vi vj + ∑i θi vi
39. 39
Continuous Hopfield net
Fausett
E = 0.5 ∑i ∑j wij vi vj + ∑i θi vi   (for i ≠ j)
⇒ for n = 2 (i ≠ j):
E = 0.5 (w12 v1 v2 + w21 v2 v1) + θ1 v1 + θ2 v2
= w12 v1 v2 + θ1 v1 + θ2 v2   (since w21 = w12)
[Figure: two units U1 and U2, with thresholds θ1, θ2 and connection weights w12, w21.]
40. 40
Continuous Hopfield net
Fausett
Then the net will converge to a stable configuration that is a
minimum of the energy function as long as
dE/dt ≤ 0
But E = 0.5 ∑i ∑j wij vi vj + ∑i θi vi, so by the chain rule
dE/dt = ∑i (∂E/∂vi) · (dvi/dui) · (dui/dt)
with
∂E/∂vi = ∑j≠i wij vj + θi = neti
dvi/dui = g'(ui) > 0
dui/dt = −neti = −∂E/∂vi
Hence dE/dt = −∑i g'(ui) (neti)² ≤ 0
E is a Lyapunov
energy function
41. 41
Continuous Hopfield net
Fausett
For this form of the energy function, the net will converge if the
activity of each neuron changes with time according to the
differential equation
dui/dt = −∂E/∂vi = −∑j wij vj − θi
In the original presentation of the continuous Hopfield net, the
energy function was:
E = −0.5 ∑i ∑j wij vi vj + (1/τ) ∑i ∫0^vi g⁻¹(v) dv − ∑i θi vi
where τ is a time constant.
42. 42
Continuous Hopfield net
Fausett
If the activity of each neuron changes with time according to the
differential equation
dui/dt = −ui/τ + ∑j wij vj + θi
the net will converge.
In the Hopfield-Tank solution of the TSP, each unit has 2 indices.
– The first index (x, y, etc.) denotes the city,
– the second (i, j, etc.) denotes the position in the tour.
The Hopfield-Tank energy function for the TSP is:
E = (A/2) ∑x ∑i ∑j≠i vx,i vx,j + (B/2) ∑i ∑x ∑y≠x vx,i vy,i
+ (C/2) (∑x ∑i vx,i − N)² + (D/2) ∑x ∑y≠x ∑i dx,y vx,i (vy,i+1 + vy,i−1)
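The four terms of the Hopfield-Tank energy can be computed directly. The sketch below assumes 0-based indices and tour positions taken mod n; the function name is my assumption:

```python
def tsp_energy(v, dist, A, B, C, D, N):
    """Hopfield-Tank TSP energy (illustrative sketch).

    v[x][i]: output of the unit for city x at tour position i.
    dist[x][y]: distance between cities x and y; n = len(v).
    """
    n = len(v)
    # A-term: penalizes two units "on" in the same row (city visited twice).
    e_row = A / 2 * sum(v[x][i] * v[x][j]
                        for x in range(n) for i in range(n)
                        for j in range(n) if j != i)
    # B-term: penalizes two units "on" in the same column (two cities at once).
    e_col = B / 2 * sum(v[x][i] * v[y][i]
                        for i in range(n) for x in range(n)
                        for y in range(n) if y != x)
    # C-term: drives the total activation toward N.
    total = sum(v[x][i] for x in range(n) for i in range(n))
    e_total = C / 2 * (total - N) ** 2
    # D-term: the tour length, via adjacent positions (mod n).
    e_dist = D / 2 * sum(dist[x][y] * v[x][i]
                         * (v[y][(i + 1) % n] + v[y][(i - 1) % n])
                         for x in range(n) for y in range(n) if y != x
                         for i in range(n))
    return e_row + e_col + e_total + e_dist
```

For a valid tour (a 0/1 permutation matrix) with N = n, the first three terms vanish and the energy reduces to D times the tour length.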
43. 43
Continuous Hopfield net
Fausett
The differential equation for the activity of unit UX,I is
duX,I/dt = −uX,I/τ − A ∑j≠I vX,j − B ∑y≠X vy,I − C (∑x ∑i vx,i − N)
− D ∑y≠X dX,y (vy,I+1 + vy,I−1)
The output signal is given by applying the sigmoid function (with
range between 0 and 1), which Hopfield and Tank expressed as
vi = g(ui) = 0.5 [1 + tanh(α ui)]
44. 44
Neural Nets for Constrained Optimization.
Introduction
Boltzmann machine
– Introduction
– Architecture and Algorithm
Boltzmann machine: application
to the TSP
Continuous Hopfield nets
Continuous Hopfield nets:
application to the TSP
References and suggested
reading
45. 45
Continuous Hopfield net
Fausett
Approach
Formulate the problem in terms of a Hopfield
energy of the form:
E = 0.5 ∑i ∑j wij vi vj + ∑i θi vi
[Diagram: Problem → Formulation by Hopfield Energy → Energy Minimization by Hopfield → Solution State]
46. 46
Continuous Hopfield net
Fausett
Architecture for TSP
The units used to solve the 10-city TSP are arranged as shown
[Figure: 10 × 10 grid of units UA,1 … UJ,10; rows correspond to cities A–J, columns to tour positions 1–10 (same layout as the earlier Boltzmann grid).]
47. 47
Continuous Hopfield net
Fausett
Architecture for TSP
The connection weights are fixed and are usually not shown or even
explicitly stated.
The weights for inter-row connections correspond to the parameter A in
the energy equation;
– There is a contribution to the energy if 2 units in the same row are
“on”.
Similarly, the inter-column connections have weights corresponding to the parameter B;
The distance connections appear in the fourth term of the energy
equation.
More explicitly, the weight between units Ux,i and Uy,j is
w(x,i; y,j) = −A δxy (1 − δij) − B δij (1 − δxy) − C − D dxy (δj,i+1 + δj,i−1)
where δij is the Kronecker delta:
δij = 1 if i = j, 0 otherwise.
In addition, each unit receives an external input signal
Ix,i = +C N
The parameter N is usually taken to be
somewhat larger than the number of cities n.
48. 48
Continuous Hopfield net
Fausett
Algorithm for TSP
Step 0 Initialize activations of all units
Initialize ∆t to a small value
Step 1 While stopping condition is false, do steps 2-6
Step 2 Perform steps 3-5 n² times (n is the number of cities)
Step 3 choose a unit at random
Step 4 Change the activity of the selected unit:
ux,i(new) = ux,i(old) + ∆t [ −ux,i(old) − A ∑j≠i vx,j − B ∑y≠x vy,i
− C (∑x ∑j vx,j − N) − D ∑y≠x dx,y (vy,i+1 + vy,i−1) ]
Step 5 Apply the output function:
vx,i = 0.5 [1 + tanh(α ux,i)]
Step 6 Check stopping condition
49. 49
Continuous Hopfield net
Fausett
Algorithm for TSP
Hopfield and Tank used the following parameter values in their solution
of the problem:
A = B = 500, C = 200, D = 500, N = 15, α = 50
The large value of α gives a very steep sigmoid function, which
approximates a step function.
The large coefficients, and a correspondingly small ∆t, result in very little
contribution from the decay term ux,i(old) ∆t.
The initial activity levels ux,i were chosen so that ∑x ∑i vx,i = 10 (the
desired total activation for a valid tour). However, some noise was
included so that not all units started with the same activity (or output
signal).
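Putting steps 3-5 together with parameter values of this kind gives roughly the following update; this is a sketch under stated assumptions (0-based indices, positions mod n, Python's random module), not Hopfield and Tank's exact implementation:

```python
import math
import random

def hopfield_tank_step(u, v, dist, dt,
                       A=500, B=500, C=200, D=500, N=15, alpha=50):
    """One asynchronous update (steps 3-5) of the continuous Hopfield TSP net.

    u, v: n x n lists of activities / outputs (rows = cities, cols = positions).
    dist: distance matrix between cities.
    """
    n = len(u)
    x = random.randrange(n)                 # step 3: choose a unit at random
    i = random.randrange(n)
    total = sum(v[a][b] for a in range(n) for b in range(n))
    du = (-u[x][i]                          # decay term
          - A * sum(v[x][j] for j in range(n) if j != i)
          - B * sum(v[y][i] for y in range(n) if y != x)
          - C * (total - N)
          - D * sum(dist[x][y] * (v[y][(i + 1) % n] + v[y][(i - 1) % n])
                    for y in range(n) if y != x))
    u[x][i] += dt * du                      # step 4: change activity
    v[x][i] = 0.5 * (1 + math.tanh(alpha * u[x][i]))  # step 5: output function
```

With the large coefficients above, ∆t must be kept small for the update to remain stable; the steep sigmoid keeps every output in [0, 1].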
50. 50
Neural Nets for Constrained Optimization.
Introduction
Boltzmann machine
– Introduction
– Architecture and Algorithm
Boltzmann machine: application
to the TSP
Continuous Hopfield nets
Continuous Hopfield nets:
application to the TSP
References and suggested
reading