Multimedia data requires considerable storage capacity and transmission bandwidth.
Apart from existing technologies such as the JPEG and MPEG standards, newer techniques such as neural networks are used for image compression.
Natural images are captured using image sensors and stored in memory banks, which requires large storage space.
e.g., a 256x256 color image (24 bits per pixel) requires a storage space of about 1.5 megabits.
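The 1.5-megabit figure above can be checked directly (assuming 24 bits per pixel, and measuring megabits as 2^20 bits):

```python
# Storage needed for a 256x256 RGB image at 24 bits per pixel.
width, height, bits_per_pixel = 256, 256, 24
total_bits = width * height * bits_per_pixel
print(total_bits)           # 1572864 bits
print(total_bits / 2**20)   # 1.5 megabits (1 Mbit = 2^20 bits here)
```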
Storage cost for 1 GB is approximately Rs.
With available bandwidths of 64 kbps and 54 Mbps, transmitting a three-hour movie in uncompressed format takes 2917 years and 19 days respectively.
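The underlying relation is simply transmission time = data size / channel bandwidth. A minimal sketch, using an assumed 1 GB file rather than the movie above:

```python
# Transmission time = data size (bits) / bandwidth (bits per second).
# The 1 GB file size here is an illustrative assumption.
def transmit_seconds(size_bits, bandwidth_bps):
    return size_bits / bandwidth_bps

one_gb_bits = 8e9  # 1 GB (decimal) expressed in bits
print(transmit_seconds(one_gb_bits, 64e3) / 3600)  # ~34.7 hours at 64 kbps
print(transmit_seconds(one_gb_bits, 54e6) / 60)    # ~2.5 minutes at 54 Mbps
```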
Transmission of such huge image data is time-consuming.
Artificial neural networks have been chosen for image compression due to their massively parallel and distributed architecture.
The training of the network is based on the back-propagation algorithm.
The focus of this project is to implement the neural architecture digitally.
The Analogy to the Brain
Neurons are the basic signaling units of the nervous system of a living being; each neuron is a discrete cell whose several processes arise from its cell body.
This basic element of the human brain has the ability to remember, think, and apply previous experiences to our every action.
Neural networks process information in a way similar to the human brain.
Artificial Neural Networks
Artificial neural networks are used to process information the way biological systems process analog signals such as images and sound.
Types of ANN
Feed forward networks
Information only flows one way
One input pattern produces one output
No sense of time (or memory of previous state)
Feedback (recurrent) networks
Nodes connect back to other nodes or themselves
Information flow is multidirectional
Sense of time and memory of previous state(s)
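The two network types above can be contrasted in a minimal sketch (shapes and random weights are illustrative assumptions, not the project's actual network):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Feed-forward: information flows input -> hidden -> output, no memory.
W1, b1 = rng.standard_normal((4, 8)), np.zeros(8)
W2, b2 = rng.standard_normal((8, 2)), np.zeros(2)
x = rng.standard_normal(4)
y = sigmoid(sigmoid(x @ W1 + b1) @ W2 + b2)   # one input pattern -> one output

# Feedback (recurrent): the hidden state carries memory of previous inputs.
Wx, Wh = rng.standard_normal((4, 8)), rng.standard_normal((8, 8))
h = np.zeros(8)
for x_t in rng.standard_normal((3, 4)):       # a short input sequence
    h = np.tanh(x_t @ Wx + h @ Wh)            # state depends on past steps
```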
Back propagation algorithm
Information about errors is filtered back through the system and used to adjust the connections between the layers, thus reducing the error.
The feed-forward neural network architecture is capable of approximating most problems with high accuracy and good generalization ability.
The back-propagation algorithm is used to update the weights and biases of the neural network. The weight and bias elements of the neurons decide the functionality of the network.
The values of these weight and bias elements are calculated during the training phase.
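A toy back-propagation training loop illustrates how the weights and biases are calculated; the network size, learning rate, and XOR task here are assumptions for illustration, not the project's parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
T = np.array([[0], [1], [1], [0]], float)      # XOR targets

W1, b1 = rng.standard_normal((2, 4)), np.zeros(4)
W2, b2 = rng.standard_normal((4, 1)), np.zeros(1)
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

for _ in range(5000):
    h, y = forward(X)
    d_out = (y - T) * y * (1 - y)              # error at the output layer
    d_hid = (d_out @ W2.T) * h * (1 - h)       # error filtered back to hidden layer
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_hid; b1 -= lr * d_hid.sum(0)
```

After training, the learned weights and biases fully determine the network's input-output mapping.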
Image compression refers to the task of
reducing the amount of data required to store
or transmit an image.
The compressed image is then subjected to further digital processing such as error-control coding, encryption, or multiplexing with other data sources, before being used to modulate the analog signal that is actually transmitted through the channel or stored in a storage medium.
Implementation steps:
• Each image is converted to vector form
• Vector values of the scaled images (16x4096)
• Combining these images to increase the resolution (16x32768)
• Normalizing the combined image
• Adding bias & weights
• Training the network
• Testing each image by passing it through the network
• Comparing scaled & reconstructed images by finding their PSNR & MSE
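A minimal sketch of the pre-processing stages above; the 64x64 block size and the way the combining step widens the 16x4096 matrix to 16x32768 are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
images = rng.integers(0, 256, size=(16, 64, 64))   # 16 scaled test images

vectors = images.reshape(16, 4096)                 # each image -> a row vector
combined = np.concatenate([vectors] * 8, axis=1)   # assumed 16x32768 combining stage
normalized = combined / 255.0                      # scale pixel values into [0, 1]
```

The normalized matrix is then what the training phase (adding weights and biases, training the network) operates on.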
The software implementation was done in MATLAB version R2007b.
The maximum error, MSE, and PSNR values are computed for each test image.
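MSE and PSNR between an original and a reconstructed 8-bit image can be sketched as follows (the tiny 8x8 test image is an illustrative assumption):

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images."""
    return float(((a.astype(float) - b.astype(float)) ** 2).mean())

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)

orig = np.full((8, 8), 100, dtype=np.uint8)
recon = orig.copy(); recon[0, 0] = 110        # one pixel off by 10
print(mse(orig, recon))                        # 100/64 = 1.5625
print(round(psnr(orig, recon), 2))             # ~46.19 dB
```

Higher PSNR (lower MSE) indicates that the reconstructed image is closer to the original.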
Hardware implementation is done using an FPGA board (Spartan-3).
Neural network training & testing
The neural network is trained using the nntraintool available in MATLAB.
The plot of MSE with respect to epochs for different iterations is as shown:
• A neural network can perform tasks that a linear program cannot.
• When an element of the neural network fails, the network can continue without any problem due to its parallel nature.
• A neural network learns and does not need to be reprogrammed.
• It works even in the presence of noise with good accuracy.
The neural network needs training to operate.
The architecture of a neural network is different from the architecture of microprocessors and therefore needs to be emulated.
It requires high processing time for large neural networks.
As the number of neurons increases, the network becomes complex.
• Chipscope Pro Analyzer can easily implement the design on the FPGA kit.
• The analysis showed that the input and output values were similar.
• Using Chipscope Pro Analyzer, smaller architectures can be easily built.