This document describes a biometric eye simulator project for visually impaired individuals. The project aims to determine the visual parameters needed for daily activities and to convert images into audio signals. It uses an image processing unit and a display to analyze images with respect to a person's retina. Parameters such as the identity, position, and movement of a target are converted to audio. The goal is to propose a new portable technique that is affordable and risk-free. The document discusses retinal conditions such as retinitis pigmentosa and color blindness, provides details on human eye anatomy, and proposes a mathematical model to simulate vision. The project aims to help visually impaired people gain functional vision through a prototype that processes images and converts the data into electrical signals for the brain to interpret.
2. INTRODUCTION
Aim and Focus: This project focuses on determining the various
parameters a visually impaired person requires to manage his/her
daily activities with ease. The technology combines an image
processing unit with a display device to analyse an image with
respect to the person's retina. The simulation provides an idea of the
identity, position and movement of any dynamic or static target, and
these parameters are converted into an audio signal.
The basic aim is to propose a new technique, packaged in a portable
device, that is cost-effective and risk-free.
3. PROBLEM STATEMENT
Blindness affects millions of people worldwide. An eye
suffering from severe vision impairment or
blindness has damaged photoreceptors. Prototypes
that correct vision defects do exist, but
unfortunately they involve eye surgery and are
costly, whether it is the Argus II, a bionic eye
or any other retinal implant.
4. RETINITIS PIGMENTOSA (RP)
AND COLOR BLINDNESS
RP: An inherited, or occasionally acquired, degenerative eye disease that
causes severe vision impairment and often blindness. RP is caused by
abnormalities of the photoreceptors (rods and cones) of the retina,
leading to progressive sight loss; photoreceptor abnormality accounts for
almost 97% of cases.
Color Blindness: The inability, or a decreased ability, to see colors or
perceive color differences under normal lighting conditions. The most
common cause is a fault in the development of one or more of the sets
of retinal cones that perceive color in light and transmit that
information to the optic nerve.
5. HUMAN EYE
The retina lies at the back of the eye and contains a layer of
photoreceptor cells: the rods and cones. The cones are responsible for
color detection, and there exist three kinds of cones that
respond to long, medium and short wavelengths; these typically have
peak sensitivities at 564, 534 and 430 nanometres respectively. When
the appropriate stimulus is applied, distinct action potentials are sent
through the retinal ganglion cells and are interpreted by the brain as
perceived color.
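The three cone responses can be sketched numerically. This is a minimal Python illustration (not the project's code): it assumes Gaussian sensitivity curves centred on the peak wavelengths quoted above, with an assumed 40 nm spread; real cone spectra are broader and asymmetric.

```python
import math

# Illustrative Gaussian sensitivity curves for the three cone types,
# centred on the peak wavelengths quoted in the text (564, 534, 430 nm).
# The Gaussian shape and the 40 nm spread are simplifying assumptions.
CONE_PEAKS_NM = {"L": 564.0, "M": 534.0, "S": 430.0}
SPREAD_NM = 40.0

def cone_responses(wavelength_nm):
    """Relative response of the L, M and S cones to a monochromatic stimulus."""
    return {
        name: math.exp(-0.5 * ((wavelength_nm - peak) / SPREAD_NM) ** 2)
        for name, peak in CONE_PEAKS_NM.items()
    }

responses = cone_responses(564.0)  # the L cone responds maximally here
```

A 564 nm stimulus drives the L cone to its peak while the M and S cones respond progressively less, which is the basis of colour discrimination.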
6. Model of the Human Eye
An eye model can be constructed as shown in Figure 1. Its first part is a simple
optical system consisting of the cornea, the opening of the iris, the lens and the
fluids inside the eye. Its second part consists of the retina, which performs the
photoelectrical transduction, followed by the visual pathway (the optic nerve),
which performs simple image-processing operations and carries the information
to the brain.
Fig 1: A model of the human eye.
Image formation in the human eye is not a simple phenomenon. It is only
partially understood, and only some of the visual phenomena have been
measured and explained; most of them are shown to have non-linear
characteristics.
7. The axons of the ganglion cells form the
optic nerves, which, after leaving the
eyeball, proceed toward the brain until
they come to the optic chiasm, where the
optic nerves divide. Fibers from the nasal
half of the retina cross to the opposite side
of the brain; fibers from the temporal half
go to the same side of the brain. Past the
chiasm, crossed fibers from the
contralateral eye join the uncrossed fibers
from the ipsilateral eye to form the optic
tract. Fibers in the optic tract, which are
still the axons of retinal ganglion cells,
then proceed to the thalamus, where they
end on cells of the lateral geniculate
nucleus. The fibers of this nucleus project
to neurons of the calcarine area of the
occipital cortex. This is area 17, as
enumerated by the anatomist, Brodmann,
and is often referred to as striate cortex.
Fig 2: Eye Structure
Source: Gabriela Gonzalez and Alister Flint, “The mathematical modelling
of eye”, Mathematics Today, 2003
8. Mathematical Modelling Of Eye
Let us assume that the light entering the pupil is P and that it varies with x and y; the amplitude
of the light is therefore P(x, y), and its phase component is assumed to be ϕ(x, y). The nature of
the light as changed by the pupil is then the pupil function:

P(x, y) e^(iϕ(x, y))

where P(x, y) includes the shape and size of the pupil. We assume a non-uniform aperture, to be
closer to the real-life situation. The eye is approximately a spherical surface whose centre of
curvature lies at the photoreceptors in the retina, so the mathematics of image formation begins
with the computation of the point spread function (PSF), which is the image of a point source
formed by the optical system.
Fig 3: Mathematical model of an eye, showing the object plane (xo, yo), the lens plane (x, y) and the image plane (xi, yi).
Source: Gabriela Gonzalez and Alister Flint, “The mathematical modelling of eye”, Mathematics Today, 2003
9. PSF(xi, yi) = K |FT{P(x, y) e^(iϕ(x, y))}|²
where FT is the Fourier transform and K is a constant; the image of a point source on
the retina is called the point-spread function (PSF). We consider the object as an
array of point sources, each point having its own intensity, position and spectral
properties. From this observation, MATLAB software can be used for the
simulation of artificial vision.
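The PSF formula can be evaluated numerically. This is an illustrative Python sketch (the project itself uses MATLAB); it assumes the standard Fourier-optics result that the incoherent PSF is the squared magnitude of the Fourier transform of the pupil function, and the grid size and pupil radius are illustrative choices, not project values.

```python
import numpy as np

# PSF(xi, yi) = K * |FT{P(x, y) e^(i*phi(x, y))}|^2, evaluated on a grid.
N = 256
x = np.linspace(-1.0, 1.0, N)
X, Y = np.meshgrid(x, x)

P = (X**2 + Y**2 <= 0.5**2).astype(float)  # circular aperture P(x, y)
phi = np.zeros_like(P)                     # assume no phase aberration

pupil = P * np.exp(1j * phi)
psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil)))**2
psf /= psf.max()                           # normalisation absorbs the constant K
```

With an aberration-free circular pupil this produces the familiar Airy-like pattern, with its peak on the optical axis at the centre of the shifted array.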
Fig. 4: Eye components function
Source: Gabriela Gonzalez and Alister Flint, “The mathematical modelling of eye”, Mathematics Today, 2003
10. Road Map And Approach
PHASE 1: Explore the technical idea, an artificial human eye simulator built as an automated system.
• Hardware: external camera, Intel processor.
• Software: MATLAB & Simulink.
• Analyze the data and deliver the electrical signal to the brain.
11. Methodology
(Major Project - Phase 1)
• The image is captured using an external camera.
• The image is transferred from the camera to the computer (presently done by
a MATLAB GUI code that reads the image directly from the webcam).
• The image is refined and filtered if any disturbance is found (using
Simulink and MATLAB to enhance image quality).
• A Simulink model is developed using the Video and Image Processing Blockset
and the Embedded MATLAB Function block (presently working on the Simulink
model).
• The generated model is then converted into ‘C’ code.
• For this we will use another MATLAB toolbox, Real-Time Workshop -
Embedded Coder.
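The refine-and-filter step above is done in Simulink/MATLAB; as an illustrative stand-in, here is a minimal 3x3 median filter in Python. The filter choice and all values are assumptions, not the project's actual code; a median filter is a common way to remove impulse ("salt and pepper") disturbances.

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter; image edges are handled by replicate padding."""
    padded = np.pad(img, 1, mode="edge")
    stack = [padded[i:i + img.shape[0], j:j + img.shape[1]]
             for i in range(3) for j in range(3)]
    return np.median(np.stack(stack), axis=0)

# A flat grey image with one "salt" spike: the filter removes the spike.
img = np.full((5, 5), 100.0)
img[2, 2] = 255.0
clean = median_filter3(img)
```

The spike at (2, 2) is replaced by the median of its neighbourhood (100), while uniform regions pass through unchanged.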
12. Work Done So Far
• Object Recognition: the MATLAB code is able to identify a small set of
objects, for example a human being, a book, coins, etc.
• Colour Determination: the MATLAB code can distinguish among colours
such as red, blue, green and black.
• Distance Calculation: the distance between the object and the reference
point is calculated through code.
The image under observation for processing is shown below.
Fig 5: Image Processed
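The distance calculation can be sketched as follows. This is a minimal Python illustration (the project's MATLAB code is not shown): it measures the Euclidean pixel distance from an object's centroid to a reference point; the mask, the reference point and the pixel-to-real-world calibration are all assumptions.

```python
import numpy as np

def centroid(mask):
    """Centroid (row, col) of the True pixels in a binary object mask."""
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

def pixel_distance(mask, reference):
    """Euclidean pixel distance from the object's centroid to a reference
    point; converting pixels to real-world units is a separate calibration
    step not shown here."""
    r, c = centroid(mask)
    return float(np.hypot(r - reference[0], c - reference[1]))

mask = np.zeros((10, 10), dtype=bool)
mask[2:4, 2:4] = True                   # a small square object
d = pixel_distance(mask, (2.5, 6.5))    # reference point to its right
```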
13. The main aim of the project is to transfer the details of a single frame
of image, captured from the video, to the optic nerves. An image can be
visualized if we add up the following:
• Background sensing and object presence
• Intensity variation, to differentiate and to retain object identity
• Colour information processing
• Shape-based processing
A combination of the above methods is usually used to transfer an
image into an electrical signal. There is no ‘perfect solution’; one uses
whatever suits the project best.
14. Background sensing
• Background subtraction is a technique used to isolate useful
information in an image (the foreground) from the rest of the
image (the background).
• A reference image is selected as the background.
• Each successive image in a video stream is compared against
this image.
• If the difference between the images is significant, the areas
that differ are considered to be the foreground for that image.
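The steps above can be sketched directly. This is a minimal Python illustration (a stand-in for the project's MATLAB/Simulink implementation): the difference threshold of 30 is an assumed tuning parameter.

```python
import numpy as np

def foreground_mask(frame, background, threshold=30):
    """Mark pixels whose absolute difference from the reference background
    exceeds the threshold (threshold is an assumed tuning value)."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    return diff > threshold

background = np.full((4, 4), 50, dtype=np.uint8)  # reference image
frame = background.copy()
frame[1, 1] = 200                                 # a bright foreground pixel
mask = foreground_mask(frame, background)
```

Only the pixel that differs significantly from the reference is flagged as foreground; the rest of the frame is treated as background.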
15. From the above figure it can be observed that most of the background is left
dark. This image detail can now be converted into an electrical signal according
to the intensity (amplitude).
Fig 5.1: Background Sensing
17. Image to Electrical Signal
Digital camera uses CCD sensors for converting optical image into electronic
signal. A charge-coupled device (CCD) is a device for the movement of
electrical charge, usually from within the device to an area where the charge can
be manipulated, for example conversion into a digital value. This is achieved by
"shifting" the signals between stages within the device one at a time. CCDs
move charge between capacitive bins in the device, with the shift allowing for
the transfer of charge between bins. In a CCD for capturing images, there is a
photoactive region (an epitaxial layer of silicon), and a transmission region made
out of a shift register. An image is projected through a lens onto the capacitor
array (the photoactive region), causing each capacitor to accumulate an electric
charge proportional to the light intensity at that location. This is similar to solar
cells, which can detect the presence or absence of light. CCDs are built in a way
that allows them to move signals through the chip with very little distortion.
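The bucket-brigade readout described above can be illustrated with a toy model in Python (an illustration only, not a device simulation): each bin holds a charge proportional to the incident light, and charges are shifted toward the output one stage at a time.

```python
# Toy model of 1-D CCD readout: charges shift one bin at a time toward the
# output stage, where each is digitised in turn. Spatial order is preserved.
def ccd_readout(charges):
    """Shift charges out of a 1-D CCD row, returning them in readout order."""
    row = list(charges)
    out = []
    while row:
        out.append(row.pop(0))  # charge at the output stage is digitised;
                                # the remaining charges all move one bin along
    return out

signal = ccd_readout([10, 80, 30])  # readout preserves the spatial order
```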
18. Fig 5.3: Image-to-electrical-signal conversion (amplitude, 0-100, plotted
against the region of observation, 0-13).
Fig 5.4: Division of the image.
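The conversion illustrated in Fig 5.3 can be sketched as follows. This is an illustrative Python snippet (not the project's code): one row of 8-bit pixel intensities is read out as a one-dimensional amplitude signal, and the 0-100 amplitude scale follows the axis of Fig 5.3 as an assumed convention.

```python
# Map a row of 8-bit pixel intensities (0-255) to signal amplitudes on an
# assumed 0-100 scale, one sample per observed region, as in Fig 5.3.
def row_to_signal(row, full_scale=100.0):
    """Scale 8-bit intensities to amplitudes on a 0-full_scale axis."""
    return [round(v / 255.0 * full_scale, 1) for v in row]

signal = row_to_signal([0, 51, 255])  # dark, dim and saturated pixels
```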
19. • Intensity thresholding is another foreground-separation technique. It uses
histograms.
• A histogram is an image statistic that usually operates on an intensity image,
i.e. pixels each having a single value between 0 and 255.
• The range 0-255 is divided into bins; for example, each intensity value may be
given its own bin.
• The Y axis shows the number of pixels in the image whose values lie within
the limits of a bin.
Fig 5.5: Histogram (X axis: bin).
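Histogram construction and intensity thresholding can be sketched together. This is an illustrative Python snippet (a stand-in for the MATLAB implementation); the threshold of 128 is an assumed value, not one chosen in the text.

```python
import numpy as np

# Build a histogram of an intensity image (one bin per value, 0-255) and
# separate the bright foreground with a fixed, assumed threshold of 128.
img = np.array([[10, 12, 200],
                [11, 210, 205],
                [12, 13, 220]], dtype=np.uint8)

hist, _ = np.histogram(img, bins=256, range=(0, 256))  # one bin per value
foreground = img > 128                                  # bright pixels only
```

In a real pipeline the threshold would be read off the histogram (e.g. the valley between the dark-background and bright-object peaks) rather than fixed.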
20. Colour Determination
R G B → Colour (H: high, M: mid, L: low/none)
L M L → Dark Green
L M H → Blue
L H H → Teal
M L L → Dark Red
M H L → Green
H L L → Red
H M H → Pink
H H L → Yellow
H H H → White
Fig 5.6: Determination of colour (Bayer array, side view of colour photosites).
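The lookup table above translates directly into code. This is an illustrative Python sketch (not the project's MATLAB code): each channel is quantised to low/mid/high and the triple is looked up; the band edges (85 and 170) are assumed values.

```python
# Direct encoding of the R/G/B level table above. Band edges are assumptions.
COLOR_TABLE = {
    ("L", "M", "L"): "Dark Green", ("L", "M", "H"): "Blue",
    ("L", "H", "H"): "Teal",       ("M", "L", "L"): "Dark Red",
    ("M", "H", "L"): "Green",      ("H", "L", "L"): "Red",
    ("H", "M", "H"): "Pink",       ("H", "H", "L"): "Yellow",
    ("H", "H", "H"): "White",
}

def level(v):
    """Quantise an 8-bit channel value to low / mid / high."""
    return "L" if v < 85 else ("M" if v < 170 else "H")

def classify(r, g, b):
    """Name the colour of an (R, G, B) pixel, or None if it is not tabulated."""
    return COLOR_TABLE.get((level(r), level(g), level(b)))

name = classify(230, 40, 30)  # a predominantly red pixel
```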
22. PHASE II
• Combine all the graphical user interface programs.
• Convert the code to ‘C’ using Real-Time Workshop
in MATLAB.
• Convert the signal to speech.
• Burn the code into a DSP processor attached to the
prototype.
23. Future Scope
1) Our eyes perceive brightness at a much higher resolution than colour.
2) Green light contributes roughly twice as much to our perception of brightness
as the combined effect of red and blue. Allocating more photosites to
green therefore produces a far better-looking image than if each colour were
allocated equally.
Fig 7: Use of electrodes for image sensing
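Green's dominant contribution to brightness can be illustrated with the widely used Rec. 601 luma weights (these standard weights are brought in here as an illustration; they are not cited in the text).

```python
# Rec. 601 luma weights: green carries by far the largest single weight,
# which is why Bayer arrays allocate half their photosites to green.
LUMA_WEIGHTS = (0.299, 0.587, 0.114)  # R, G, B

def luma(r, g, b):
    """Perceived brightness of an (R, G, B) pixel on the 0-255 scale."""
    wr, wg, wb = LUMA_WEIGHTS
    return wr * r + wg * g + wb * b

# Pure green reads far brighter than pure red or pure blue at equal intensity.
brighter = luma(0, 255, 0) > luma(255, 0, 0) > luma(0, 0, 255)
```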
24. The project is designed with electrodes for high resolution. An array of
electrodes, connected to an external camera and a video-processing unit, was
attached to the retina for sighting, but it failed to restore normal vision. The
Argus II system works by converting images captured by a miniature video camera,
housed in the patient's glasses, into a series of small electrical impulses that are
then transmitted wirelessly to an array of electrodes implanted on the patient's
retina. These impulses stimulate the retina's remaining cells, resulting in the
perception of patterns of light in the brain. By learning to interpret these visual
patterns, patients were able to regain some functional vision.
The artificial eye system consists of a camera, attached to a pair of glasses,
which transmits high-frequency radio signals to a microchip implanted in the
retina. Electrodes on the implanted chip convert these signals into electrical
impulses to stimulate cells in the retina that connect to the optic nerve. These
impulses are then passed down along the optic nerve to the vision processing
centers of the brain, where they are interpreted as an image.
25. IMPACT
Blindness is a state of no light perception. According to a survey, in 2012
there were 285 million visually impaired people in the world, of whom 39
million were totally blind. Many people develop blindness after a certain age
(on average around 50 years), while many others have not seen since birth.
This process is not a complete cure for blindness, but it is still a ray of hope.
The received signals help the patient to interpret the intensity of light, which
provides them with ‘some’ vision. Its main feature is that the implementation
does not require a surgical process for retinal implantation; it involves a mere
interface with the brain. This makes the device cost-effective and risk-free. The
project aims to give the visually impaired far greater independence.