Artificial Neural Network and its Applications
Shritosh Kumar [1], Tarun Kumawat [2], Naresh Kumar Marwal [3], Brijesh Kumar Singh [4]
[1] Shritosh Kumar [CSE] Arya College of Engg. & IT, Kukas, Rajasthan, India
[2] Tarun Kumawat [CSE] JECRC UDML College of Engineering, Kukas, Rajasthan, India
[3] Naresh Kumar Marwal [CSE] JECRC UDML College of Engineering, Kukas, Rajasthan, India
[4] Brijesh Kumar Singh [CSE] JECRC UDML College of Engineering, Kukas, Rajasthan, India
Abstract
This report is an introduction to Artificial Neural Networks. The various types of neural networks are explained and demonstrated, applications of neural networks, such as ANNs in medicine, are described, and a detailed historical background is provided. The connection between artificial neural networks and their biological counterparts is also investigated and explained. Finally, the mathematical models involved are presented and demonstrated.
Introduction
This paper presents neural-network-based artificial vision and its applications. The concept of neural networks is modeled after biological sensory mechanisms, in which neuron signals are transmitted to the brain and processed. This concept moves away from traditional statistical models, where data are analyzed while holding everything else constant (ceteris paribus). The weakness of statistical models lies in their inability to model changing relationships between variables (a non-linear problem), which makes predictive analysis difficult where the underlying relationships are not constant. A neural network overcomes this problem by adapting to real sets of data. Much like a living organism, a neural network is trained, learns by observation, and iteratively readjusts what it has learned against new sets of data.
Other reasons why we make use of neural networks include:
• Adaptive learning: An ability to learn how
to do tasks based on the data given for
training or initial experience.
• Self-Organization: An ANN can create its
own organization or representation of the
information it receives during learning time.
• Real Time Operation: ANN computations
may be carried out in parallel, and special
hardware devices are being designed and
manufactured which take advantage of this
capability.
• Fault Tolerance via Redundant Information
Coding: Partial destruction of a network
leads to the corresponding degradation of
performance. However, some network
capabilities may be retained even with major
network damage.
The Definition of Neural Network
Artificial neurons are similar to their biological counterparts. They have input connections which are summed together to determine the strength of their output, which is the result of the sum being fed into an activation function. Though many activation functions exist, the most common is the sigmoid activation function, which outputs a number between 0 (for low input values) and 1 (for high input values).

International Journal of Computer Science and Management Research Vol 2 Issue 2 February 2013
ISSN 2278-733X
Shritosh Kumar et al. www.ijcsmr.org 1621

An Artificial Neural Network (ANN), usually called simply a Neural Network, has been found to hold great potential as the front end of expert systems and has had remarkable success in providing real-time responses to complex pattern recognition problems. Artificial neural network models try to model the operation of biological neural networks, especially those of the human brain, which can perform a series of tasks, like pattern recognition, classification, etc., with amazing speed and accuracy, despite the fact that the inputs to the human brain consist, most of the time, of noisy and erroneous data.

Using computer systems to replicate the learning and recall methods of the human brain has been a goal of researchers in a variety of disciplines for half a century. Neural network computing is the closest approximation of brain function to evolve to a stage where practical application is attainable.

The technology of artificial neural networks has developed from a biological model of the brain. A neural network consists of a set of connected cells: the neurons. A neuron is a real function of the input vector (y1, ..., yk). Its output is obtained as x_j = f(a_j + Σ_{i=1..k} w_{ij} y_i), where f is a function, typically the sigmoid function. A graphical presentation of a neuron is given in the figure below.

Figure: A single neuron

The brain and nervous system provide the structural model for a neural network. A neural net consists of a number of processing elements known as neurons, each of which can have multiple inputs and only one output. Single outputs do branch out and become inputs to many other neurons, which accounts for the many incoming signals each neuron receives. In a typical neural network, neurons receive most of their inputs from other neurons; the rest come from the outside world, as data describing events.
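As a concrete illustration, the neuron computation described above, x_j = f(a_j + Σ w_ij y_i) with a sigmoid f, can be sketched in a few lines of Python; the input values and weights below are invented for the example:

```python
import math

def sigmoid(x):
    # Squashes any real input into the range (0, 1):
    # large negative x -> near 0, large positive x -> near 1.
    return 1.0 / (1.0 + math.exp(-x))

def neuron_output(inputs, weights, bias):
    # x_j = f(a_j + sum_i w_ij * y_i): a weighted sum of the
    # inputs plus a bias, fed through the activation function.
    activation = bias + sum(w * y for w, y in zip(weights, inputs))
    return sigmoid(activation)

# A neuron with three inputs (values chosen arbitrarily).
print(neuron_output([0.5, -1.0, 2.0], [0.4, 0.3, 0.1], bias=0.0))
```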
Advantages:
• A neural network can perform tasks that a linear program cannot.
• When an element of the neural network fails, it can continue without any problem, owing to its parallel nature.
• A neural network learns and does not need to be reprogrammed.
• It can be implemented in any application.
• It can be implemented without any problem.

Disadvantages:
• The neural network needs training to operate.
• The architecture of a neural network is different from the architecture of microprocessors and therefore needs to be emulated.
• High processing time is required for large neural networks.

Architecture of neural networks

Feed-forward (associative) networks
Feed-forward nets are the most well-known and widely used class of neural network. The popularity of feed-forward networks derives from the fact that they have been applied successfully to a wide range of information processing tasks in such diverse fields as speech recognition, financial prediction, image compression, medical diagnosis and protein structure prediction; new applications are being discovered all the time.

Feedback (auto-associative) networks
Recurrent neural networks do contain feedback connections. Contrary to feed-forward networks, the dynamical properties of the network are important. In some cases, the activation values of the units undergo a relaxation process such that the neural network will evolve to a stable state in which these activations do not change anymore. In other applications, the change of the activation values of the output neurons is significant, such that the dynamical behaviour constitutes the output of the neural network (Pearlmutter, 1990).
Network layers
The commonest type of artificial neural network consists of three groups, or layers, of units: a layer of "input" units is connected to a layer of "hidden" units, which is connected to a layer of "output" units (see Figure 4.1).
• The activity of the input units represents the
raw information that is fed into the network.
• The activity of each hidden unit is
determined by the activities of the input
units and the weights on the connections
between the input and the hidden units.
• The behaviour of the output units depends
on the activity of the hidden units and the
weights between the hidden and output
units.
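The three-layer arrangement described above can be sketched as a simple forward pass in Python; the layer sizes, weights and biases below are arbitrary illustrative choices, not values from the text:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # Each unit's activity is determined by the activities of the
    # previous layer and the weights on the connections to it.
    return [sigmoid(b + sum(w * x for w, x in zip(row, inputs)))
            for row, b in zip(weights, biases)]

# 2 "input" units -> 3 "hidden" units -> 1 "output" unit.
w_hidden = [[0.2, -0.5], [0.7, 0.1], [-0.3, 0.8]]   # 3 hidden units
b_hidden = [0.0, 0.1, -0.1]
w_output = [[0.5, -0.4, 0.9]]                        # 1 output unit
b_output = [0.0]

x = [1.0, 0.5]                    # raw information fed into the network
h = layer(x, w_hidden, b_hidden)  # hidden-unit activities
y = layer(h, w_output, b_output)  # output-unit activity
print(y)
```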
Properties of Neural Networks
The following important properties of neural
networks and neurons should be distinguished:
1) Nonlinearity: A neuron is basically a
nonlinear device. Consequently, a
neural network, made up of an
interconnection of neurons, is itself
nonlinear. Moreover, the nonlinearity is
of a special kind in the sense that it is
distributed throughout the network.
Nonlinearity is a highly important property, particularly if the underlying physical mechanism responsible for the generation of the input signal (e.g., a speech or video signal) is inherently nonlinear.
2) Input-Output Mapping: A popular
paradigm of learning called supervised
learning involves the modification of
the synaptic weights of a neural
network by applying a set of labeled
training samples. Each example always
consists of a unique input signal and the
corresponding desired response. The network is sequentially presented with examples, and the synaptic weights of the network are modified so as to
minimize the difference between the
desired response and the actual response
of the network produced by the input
signal. The training of the network is
repeated for the selected examples in
the set until the network reaches a
steady state, where there are no further
significant changes in the synaptic
weights. Thus the network learns from
the examples by constructing an input-
output mapping for the problem at hand.
3) Adaptivity. Neural networks have a
built-in capability to adapt their
synaptic weights to changes in the
surrounding environment. In particular,
a neural network trained to operate in a
specific environment can be easily
retrained to deal with minor changes in
the operating environmental conditions.
4) Evidential Response. In the context of
pattern classification, a neural network
can be designed to provide information
not only about which particular pattern
to select, but also about the confidence
in the decision made. This latter
information may be used to reject
ambiguous patterns, should they arise, and thereby improve the classification
performance of the network.
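The supervised-learning procedure described under Input-Output Mapping can be sketched as an iterative weight-update loop. This is a minimal illustration using the delta rule on a single sigmoid neuron; the learning rate, epoch count and the OR-function training set are assumptions made for the example:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train(samples, epochs=2000, lr=0.5):
    # Sequentially present each labeled example and modify the
    # synaptic weights to shrink the difference between the
    # desired response and the actual response (delta rule).
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, desired in samples:
            actual = sigmoid(b + sum(wi * xi for wi, xi in zip(w, inputs)))
            grad = (desired - actual) * actual * (1.0 - actual)
            w = [wi + lr * grad * xi for wi, xi in zip(w, inputs)]
            b += lr * grad
    return w, b

# Labeled training set for the logical OR function.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train(data)
for inputs, desired in data:
    actual = sigmoid(b + sum(wi * xi for wi, xi in zip(w, inputs)))
    print(inputs, desired, round(actual, 3))
```

After training, the weights reach a steady state in which each rounded output matches the desired response.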
Perceptrons
Perceptrons are the simplest architecture to learn when studying neural networking. Picture in your mind a perceptron as a node of a wide, interconnected neural network, somewhat like a data tree, although the neural network does not necessarily have to have top and bottom sections. The connections among all the nodes not only show the relationships between the nodes but also transmit data and information, called a signal or impulse. The perceptron is a simple model of a neuron.
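A minimal sketch of a perceptron as a threshold unit follows; the hand-chosen weights implementing logical AND are purely an illustration:

```python
def perceptron(inputs, weights, bias):
    # A perceptron fires (outputs 1) when the weighted sum of its
    # incoming signals exceeds the threshold encoded in the bias.
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

# Hand-chosen weights that make the perceptron compute logical AND.
w, b = [1.0, 1.0], -1.5
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, perceptron(x, w, b))
```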
A Brief Historical Background
We want to mention here the most important events that present, point by point, the historical evolution of the field of neural networks:
• 1943 – W. McCulloch and W. Pitts – the first
nonlinear mathematical model of the neuron (a
formal neuron) [6].
• 1948 – D. Hebb – the first learning rule. One can
memorize an object by adapting weights [7].
• 1958 – R. Rosenblatt – concept of the perceptron as
a machine, which can learn and classify patterns [8].
• 1963 – A.B.J. Novikoff – significant development
of the learning theory, a proof of the theorem about
convergence of the learning algorithm applied to
solution of the pattern recognition problem using the
perceptron [9].
• 1960s – the extensive development of threshold logic, initiated by previous results in perceptron theory, and a thorough study of the features of threshold Boolean functions, one of the most important objects considered in the theory of perceptrons and neural networks. The most complete summaries are given by M. Dertouzos [10] and S. Muroga [11].
• 1969 – M. Minsky & S. Papert – potential limit of
the perceptron as a computing system is shown [12].
• 1977 – T. Kohonen – consideration of the
associative memory as a content-addressable
memory, which is able to learn [13].
• 1982 – J. Hopfield shows by means of energy functions that neural networks are capable of solving a large number of problems. Revival of extensive research in the field [14].
• 1982 – T. Kohonen describes the self-organizing
maps [15].
• 1986 – D.E. Rumelhart & J.L. McClelland –
introduction of the feedforward neural network and
learning with backpropagation. Consideration of a
neural network as a universal approximator.
• Present – more and more scientists and research
centers are devoted to research in the field of neural
networks and their applications in pattern
recognition, classification, prediction, image
processing and others.
Applications of Neural Networks
• Sales and Marketing: Targeted Marketing, Sales Forecasting, Service Usage Forecasting, Retail Margins Forecasting
• Operational Analysis: Retail Inventories Optimization, Scheduling Optimization, Managerial Decision Making, Cash Flow Forecasting
• Medical: Detection and Evaluation of Medical Phenomena, Medical Diagnosis, Patient's Stay Forecasts, Treatment Cost Estimation
• Financial: Stock Market Prediction, Bankruptcy Prediction, Credit Worthiness, Credit Rating, Property Appraisal, Fraud Detection, Price Forecasts
• Data Mining: Prediction, Classification, Change and Deviation Detection, Knowledge Discovery, Response Modeling, Time Series Analysis
• Energy: Electrical Load Forecasting, Energy Demand Forecasting, Short- and Long-Term Load Estimation, Predicting Gas/Coal Prices, Power Control Systems, Hydro Dam Monitoring
• Educational: Teaching Neural Networks, Neural Network Research, College Application Screening, Predicting Student Performance
• Science: Pattern Recognition, Formulation Optimization, Chemical Identification, Physical System Modeling, Ecosystem Evaluation, Polymer Identification, Recognizing Genes, Botanical Classification, Biological Systems Analysis, Ground Level Ozone Prognosis
• HR Management: Employee Retention, Employee Hiring, Staff Scheduling, Personnel Profiling
• Industrial: Process Control, Quality Control, Temperature Prediction, Force Prediction
• Other: Sports Betting, Games Development, Quantitative Weather Forecasting, Optimization Problems, Routing, Agricultural Product Estimates
The applications featured here are:
• CoEvolution of Neural Networks for Control
of Pursuit & Evasion
• Learning the Distribution of Object
Trajectories for Event Recognition
• Radiosity for Virtual Reality Systems
• Autonomous Walker & Swimming Eel
• Robocup: Robot World Cup
• Using HMM's for Audio-to-Visual
Conversion
• Artificial Life: Galapagos
• Speechreading (Lipreading)
• Detection and Tracking of Moving Targets
• Real-time Target Identification for Security
Applications
• Facial Animation
• Behavioral Animation and Evolution of
Behavior
• A Three Layer Feedforward Neural Network
• Artificial Life for Graphics, Animation,
Multimedia, and Virtual Reality: Siggraph '95
Showcase
• Creatures: The World's Most Advanced Artificial Life!
• Framsticks Artificial Life
Application of Neural Network Models in IR
In this section, we will study the applications of some neural network models (i.e., SOFM and the Hopfield network) and related algorithms (i.e., semantic networks) in information retrieval systems.
1) The Application of SOFM: Most of the applications of SOFM in IR systems are based on the fact that SOFM is a topographic map and can do mappings from a multidimensional space to a two- or three-dimensional space. Kohonen has shown that his self-organizing feature map "is able to represent rather complicated hierarchical relations of high-dimensional space in a two-dimensional display".
2) Application of Hopfield Net: The Hopfield net was introduced as a neural net that can be used as a content-addressable memory. Knowledge and information can be stored in single-layered interconnected neurons (nodes) and weighted synapses (links), and can be retrieved based on the network's parallel relaxation method. It has been used for various classification tasks and global optimization.
3) The Applications of MLP Networks
and Semantic Networks: It is hard to
distinguish the applications of MLP networks from the
applications of semantic networks with
spreading activation methods in IR. In most
cases, the applications of semantic networks
in IR are making use of spreading activation
models while having a feed-forward
network structure similar to that of MLP
networks.
Wong and his colleagues (1993) have
developed a method for computing term
associations using a three-layer feed-forward
network with linear threshold functions.
Each document is represented as a node in the input layer. The nodes in the hidden layer
represent query terms and the output layer
consists of just one node, which pools the
input from all the query terms. Term
associations are modeled by weighted links
connecting different neurons, and are
derived by the perceptron learning algorithm
without the need for introducing any ad hoc
parameters. The preliminary results indicate
the usefulness of neural networks in the
design of adaptive information retrieval
systems.
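The architecture described above (documents as input nodes, query terms as hidden nodes, and a single pooling output node) might be sketched roughly as follows; the document-term links and term weights below are invented for illustration and are not Wong et al.'s actual formulation:

```python
# Invented toy data: 3 documents in the input layer, 2 query terms
# in the hidden layer. A link weight of 1.0 means the term occurs
# in the document.
doc_term = [
    [1.0, 0.0],  # document 0 mentions term 0
    [1.0, 1.0],  # document 1 mentions both terms
    [0.0, 1.0],  # document 2 mentions term 1
]

def retrieval_score(active_docs, term_weights):
    # Hidden layer: one node per query term, fed by document nodes.
    term_activity = [
        sum(doc_term[d][t] for d in active_docs)
        for t in range(len(term_weights))
    ]
    # Output layer: a single node pooling input from all query terms.
    return sum(w * a for w, a in zip(term_weights, term_activity))

print(retrieval_score([0, 1], [0.6, 0.4]))
```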
Future Work on Neural Networks:
Improvement of existing technologies:
All current Neural Network technologies will most
likely be vastly improved upon in the future.
Everything from handwriting and speech recognition
to stock market prediction will become more
sophisticated as researchers develop better training
methods and network architectures.
Neural Networks might, in the future, allow:
• Robots that can see, feel, and predict the
world around them.
• Improved stock prediction
• Common usage of self-driving cars.
• Composition of music.
• Handwritten documents to be automatically
transformed into formatted word processing
documents.
• Trends found in the human genome to aid in
the understanding of the data compiled by
the Human Genome Project.
• Self-diagnosis of medical problems using
neural networks
Conclusion:
The computing world has a lot to gain from neural
networks. Their ability to learn by example makes
them very flexible and powerful. Furthermore, there is
no need to devise an algorithm in order to perform a
specific task; i.e. there is no need to understand the
internal mechanisms of that task. They are also very
well suited for real time systems because of their fast
response and computational times which are due to
their parallel architecture.
Neural networks also contribute to other areas of
research such as neurology and psychology. They are
regularly used to model parts of living organisms and
to investigate the internal mechanisms of the brain.
Perhaps the most exciting aspect of neural networks
is the possibility that some day 'conscious' networks
might be produced. A number of scientists argue that consciousness is a 'mechanical' property
and that 'conscious' neural networks are a realistic
possibility.
Finally, we would like to state that even though neural networks have a huge potential, we will only get the best out of them when they are integrated with computing, AI, fuzzy logic and related subjects.
References:
1. An introduction to neural computing.
Aleksander, I. and Morton, H. 2nd edition
2. Neural Networks at Pacific Northwest
National Laboratory
http://www.emsl.pnl.gov:2080/docs/cie/neura
l/neural.homepage.html
3. Industrial Applications of Neural Networks
(research reports Esprit, I.F.Croall,
J.P.Mason)
4. A Novel Approach to Modelling and
Diagnosing the Cardiovascular System
http://www.emsl.pnl.gov:2080/docs/cie/neura
l/papers2/keller.wcnn95.abs.html
5. Artificial Neural Networks in Medicine
http://www.emsl.pnl.gov:2080/docs/cie/techb
rief/NN.techbrief.ht
6. Neural Networks by Eric Davalo and Patrick
Naim
7. Learning internal representations by error
propagation by Rumelhart, Hinton and
Williams (1986).
8. Klimasauskas, CC. (1989). The 1989 Neuro
Computing Bibliography. Hammerstrom, D.
(1986). A Connectionist/Neural Network
Bibliography.
9. DARPA Neural Network Study (October,
1987-February, 1989). MIT Lincoln Lab.
Neural Networks, Eric Davalo and Patrick
Naim
10. Asimov, I. (1984, 1950), I, Robot, Ballantine, New York.
11. Electronic Noses for Telemedicine
http://www.emsl.pnl.gov:2080/docs/cie/neura
l/papers2/keller.ccc95.abs.html
12. http://www-cs-
faculty.stanford.edu/~eroberts/courses/soco/p
rojects/neural-networks/Future/index.html
13. Pattern Recognition of Pathology Images
http://kopernik-
eth.npac.syr.edu:1200/Task4/pattern.html