International Journal on Cybernetics & Informatics (IJCI) Vol. 3, No. 4, August 2014 
INTEGRAL IMAGING THREE-DIMENSIONAL (3D) 
DISPLAY WITH WIDE-VIEWING ANGLE USING MULTIPLE 
ILLUMINATIONS 
Md. Shariful Islam, Md. Tariquzzaman and Md. Zahidul Islam 
Department of Information and Communication Engineering, Islamic University, 
Kushtia-7003, Bangladesh 
ABSTRACT 
In this paper, a three-dimensional (3D) integral imaging (II) system that improves the viewing angle by using multiple illuminations is proposed. In this system, three collimated illuminations are directed at three different angles in order to widen the propagation angle of the point light source (PLS). Among the three illuminations, the two slanted ones increase the propagation angle of the PLS over the conventional method. Simulation results show that the viewing angle of the proposed PLS display is three times larger than that of the conventional PLS display. In the simulation, we used Light Tools 6.3 to reconstruct an object. 
KEYWORDS 
Integral Imaging 3D display, Multiple Illuminations, Point Light Source (PLS) 
1. INTRODUCTION 
Integral imaging, first proposed by G. Lippmann in 1908 [1], has been regarded as one of the most attractive 3D imaging and display techniques, because it provides full-color, full-parallax, continuous-viewing images. In addition, integral imaging requires no viewing aids and supports multiple simultaneous observers. The integral imaging technique consists of two processes, pickup and reconstruction [2-6], as shown in Figure 1. In the pickup process, the direction and intensity information of the rays coming from a 3D object is spatially sampled by a lens array (or pinhole array) and a 2D image sensor. The lens array, composed of many convex elemental lenses, is positioned immediately in front of the photographic film that acts as the capture device. 
Figure 1. Basic concept of integral imaging. (a) Pick-up and (b) Reconstruction 
DOI: 10.5121/ijci.2014.3402
The captured image is composed of numerous small elemental images, imaged by the lens array, with their number corresponding to that of the elemental lenses. Therefore, the size of an elemental image is equal to the size of an elemental lens. A real image, which is a pseudoscopic (depth-reversed) image [7-10], is reconstructed through the lens array when the elemental images are displayed on a 2D display device, as in Figure 1(b). Although we live in a three-dimensional (3D) world, almost all displays present two-dimensional (2D) images. 3D displays are needed for important applications such as virtual surgery, 3D design, and remote control, because a 3D image carries extra information that is helpful to humans. Therefore, 3D display is an active research field. Several 3D technologies have been developed, for example stereoscopy, autostereoscopy, holography, and integral imaging. Among them, integral imaging has the advantages that it does not require special glasses and offers continuous viewpoints within the viewing angle. It also provides full parallax, full color, and real-time operation. However, the conventional integral imaging display (IID) has drawbacks such as a small viewing angle (VA), limited viewing resolution, and a short depth range [11]. In this paper, we focus on the VA issue. 
2. VIEWING ANGLE OF PLS DISPLAY 
Figure 2. Conventional PLS display 
Figure 2 shows the structure of a PLS display, which consists of a light source, a collimating lens, a lens array, and a spatial light modulator (SLM). The collimated light is focused by the lens array in its focal plane, which is called the PLS plane. The number of PLSs equals the number of elemental lenses, because the conventional PLS display uses one plane illumination. When the SLM displays the elemental images, the light from the PLSs is modulated to create the integrated 3D image. For example, three rays reconstruct the 3D integrated point P1 at their intersection when the three SLM pixels EI1, EI2, and EI3 are open and the other pixels are closed, as shown in Figure 2. The size of an elemental image depends on the position of the SLM. If the gap g between the SLM and the lens array is longer than 2f, the light beams from the PLSs overlap on the SLM; when g is shorter than 2f, the PLSs do not illuminate some parts of the SLM, so those parts are unused. Therefore, g is set equal to twice the focal length of the lens array. Each elemental lens creates one PLS in the focal plane, so the distance between two PLSs equals the elemental lens size PL. This distance is an important parameter because it determines the spatial resolution of the PLS display. As the elemental lens becomes smaller, the spatial resolution of the PLS display increases, but the angular resolution is reduced because fewer SLM pixels correspond to each elemental lens. 
From Figure 2, a 3D integrated point appears at the intersection of rays diverging from the PLSs, so the viewing angle of the PLS display is determined by the diverging angle of the PLS [12], which is given by 

VA = 2 × arctan( PL / (2f) ),    (1) 
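Equation (1) can be checked numerically. The sketch below (illustrative Python, not part of the original paper) evaluates the conventional viewing angle for the elemental-lens parameters used later in this section (PL = 5 mm, f = 8 mm) and verifies that at the gap g = 2f the PLS cone illuminates exactly one elemental-image width on the SLM:

```python
import math

def viewing_angle_deg(p_l, f):
    """Eq. (1): full diverging angle of one PLS, in degrees."""
    return 2 * math.degrees(math.atan(p_l / (2 * f)))

p_l, f = 5.0, 8.0   # elemental-lens size and focal length (mm) used in Section 2
va = viewing_angle_deg(p_l, f)
print(f"VA = {va:.1f} deg")   # ~34.7 deg, the maximum value seen in Figure 3

# Sanity check of the g = 2f choice: the PLS sits in the focal plane,
# f in front of the lens array, so its cone spreads over
# 2 * (g - f) * tan(VA / 2) on the SLM -- exactly one elemental-image width.
g = 2 * f
width = 2 * (g - f) * math.tan(math.radians(va / 2))
print(f"illuminated width at g = 2f: {width:.1f} mm")   # equals p_l = 5 mm
```

The second print confirms the text's argument for g = 2f: a longer gap would make neighboring beams overlap, a shorter one would leave SLM area unused.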
where PL and f are the size and the focal length of the elemental lens. However, the viewing angle of each integrated point is different, because it depends on the position of the 3D point; for example, the viewing angles of P1 and P2 in Figure 2 differ. Since α and α′ are the angles of the two extreme rays that reconstruct P2, the viewing angle of P2 is equal to 

VA = α + α′ = arctan( AB / f ) + arctan( A′B′ / f ). 

From triangles ABO and P2DO, we can write 

AB / AO = P2D / DO, i.e. AB / f = P2D / (z2 + f),    (2) 

where z2 is the distance of P2 from the lens array. Since P2D = i·PL − x, the disparity is given by 

AB = (i·PL − x) × f / (z2 + f),    (3) 
where x is the lateral coordinate of P2 and i is the index of the elemental lens that directs the upper extreme ray of P2 in Figure 2. The disparity AB must be smaller than PL/2 and greater than −PL/2, because the size of an elemental image equals that of an elemental lens. The angle of the upper extreme ray is then 

α = arctan( (i·PL − x) / (z2 + f) ).    (4) 
By repeating the previous steps, the angle of the lower extreme ray is given by 

α′ = arctan( (x − i′·PL) / (z2 + f) ),    (5) 
where i′ is the index of the elemental lens that directs the lower extreme ray of P2. Hence, the viewing angle of an integrated point at any position is defined as 

VA = α + α′ = arctan( (i·PL − x) / (z + f) ) + arctan( (x − i′·PL) / (z + f) ),    (6) 

where z is the distance of the integrated point from the lens array. The indexes i and i′ denote the elemental lenses that direct the two rays with the largest disparities among the rays collected to reconstruct the 3D point; the largest disparity is half of the elemental lens size.
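Equation (6) can be turned into a small calculation. The following Python sketch (an illustrative reimplementation, assuming i and i′ are chosen to maximize the disparity within the |AB| ≤ PL/2 bound of Eq. (3)) reproduces a value close to the ≈34.76° maximum quoted below for the Figure 3 configuration:

```python
import math

def point_viewing_angle(x, z, p_l, f):
    """Eq. (6): viewing angle (deg) of an integrated point at lateral
    position x and distance z from the lens array.  The indexes i and i'
    are the extreme elemental lenses whose disparity, Eq. (3), still
    satisfies |AB| <= p_l / 2."""
    reach = (z + f) * p_l / (2 * f)        # lateral offset allowed by |AB| <= p_l/2
    i = math.floor((x + reach) / p_l)      # lens of the upper extreme ray, Eq. (4)
    ip = math.ceil((x - reach) / p_l)      # lens of the lower extreme ray, Eq. (5)
    alpha = math.atan((i * p_l - x) / (z + f))
    alpha_p = math.atan((x - ip * p_l) / (z + f))
    return math.degrees(alpha + alpha_p)

# Points at z = 40 mm over the lens centers (Figure 3 configuration,
# p_l = 5 mm, f = 8 mm) all share the same maximum viewing angle (~34.7 deg).
for x in (-10.0, -5.0, 0.0, 5.0, 10.0):
    print(x, round(point_viewing_angle(x, 40.0, 5.0, 8.0), 2))
```

The equal values over the five lens centers mirror the behaviour reported for Figure 3(b).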
Figure 3. Calculation of the viewing angle of integrated points: (a) dependence on the distance and the lateral position of the integrated point; viewing angles of integrated points (b) on the plane at z = 40 mm and (c) on the central axis of the lens array 
Figure 3 shows the calculated viewing angle of an integrated point, determined as the angle between its two extreme rays. In this calculation, the size and focal length of the elemental lens and the distance between the lens array and the display are 5 mm, 8 mm, and 16 mm, respectively. The distance of the integrated point ranges from 30 mm to 50 mm, and the lateral position is measured from the center of the lens array. According to Figure 3, the viewing angle of the PLS display is larger than that of an IID with the same configuration. The viewing angles depend strongly on the number of collected rays. When the disparity of the elemental image equals half of the elemental lens, the viewing angle of that point reaches its maximum value. For example, the viewing angles of points at 40 mm from the SLM and at lateral positions of -10 mm, -5 mm, 0 mm, 5 mm, and 10 mm are all 34.76° in Figure 3; those lateral positions are the centers of the elemental lenses with indexes -2, -1, 0, 1, and 2. J.-H. Park et al. used double collimated illumination to increase the horizontal viewing angle of a 3D/2D convertible IID [13]. However, their method enhances only the horizontal viewing angle and requires different elemental images for each of the two collimated illuminations. 
3. PRINCIPLE OF WIDE-VIEWING ANGLE PLS DISPLAY 
Figure 4. Structure of the wide-viewing angle display
Figure 4 shows the structure of the proposed method. A conventional PLS display uses only the center illumination, but in the proposed method the left, center, and right illuminating beams are collected by the lens array in its focal plane and propagate in different directions. One elemental lens collects the three different illuminations at three different positions in the focal plane. Hence, one PLS consists of three parts, formed by the left, center, and right illuminations, which are collected at the focal point of an elemental lens by three neighboring elemental lenses. The two additional parts of each PLS enlarge its propagation angle, so the total viewing angle of the proposed method is enhanced. The three parts of one PLS must be focused on one point, the center of the corresponding central elemental lens in the focal plane. Therefore, the left and right illuminations are collected at a distance PL from the center of the corresponding lens. Under this condition, the angle of the slanted illumination is given by 

α = 2 × arctan( PL / (2f) ).    (8) 
Since the diverging angle of the rays determines the viewing angle, the viewing angle of the proposed method is given by 

VA = 2 × arctan( 3·PL / (2f) ).    (9) 

From Equations (1) and (9), the viewing angle of the proposed method is about three times that of the conventional PLS display, although both equations give only the maximum viewing angle. The viewing angle of any point is still defined by Eq. (6), but the indexes of the two elemental lenses are calculated with a different disparity: in the new method, the disparity is greater than −3·PL/2 and smaller than 3·PL/2 because of the two additional illuminations. 
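A quick numerical comparison of Equations (1) and (9) can illustrate the gain (illustrative Python; the parameters PL = 1 mm and f = 3.3 mm are those of the Section 4 simulation). The ratio stays slightly below the small-angle factor of 3 because arctan grows sublinearly:

```python
import math

def va_conventional(p_l, f):
    """Eq. (1): maximum viewing angle of the conventional PLS display (deg)."""
    return 2 * math.degrees(math.atan(p_l / (2 * f)))

def va_proposed(p_l, f):
    """Eq. (9): maximum viewing angle with the three illuminations (deg)."""
    return 2 * math.degrees(math.atan(3 * p_l / (2 * f)))

p_l, f = 1.0, 3.3    # lens array of the Section 4 simulation
conv = va_conventional(p_l, f)
prop = va_proposed(p_l, f)
print(f"conventional: {conv:.1f} deg, proposed: {prop:.1f} deg, "
      f"gain: {prop / conv:.2f}x")   # gain ~2.8x, approaching 3 for small PL/f
```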
Figure 5. Viewing angle of the proposed method: (a) dependence on the distance and the lateral position of the integrated point; viewing angles of integrated points (b) on the plane at 40 mm from the SLM and (c) on the central axis of the lens array 
Figure 5 shows the viewing-angle calculation of the proposed method in the same image volume as the conventional PLS display. According to Figures 3 and 5, the viewing angle of the proposed method is enhanced by the multiple plane illuminations. In Figure 5(b), the viewing angles of points in the plane at z = 40 mm are 2.86 times larger than those of the conventional PLS display in the same configuration. In addition, the difference between the maximum and minimum viewing angles of the proposed method is smaller than that of the conventional PLS display, and from Figure 5(c) this difference decreases as the distance of the 3D point increases. The gap g between the SLM and the lens array is an important parameter, because the overlapping of elemental images and the unused parts of the SLM depend on it. For these reasons, the gap is determined as 
g = (3/2) × f.    (10) 
When g is given by Eq. (10), the proposed method can display the 3D image without overlap, and all parts of the SLM are used. 
4. SIMULATION RESULTS 
In the simulation, we used Light Tools 6.3 to reconstruct an object. The simulation setup is shown in Figure 6. Three plane light sources illuminate the lens array, whose focal length f is 3.3 mm and whose elemental-lens size PL is 1 mm; this type of lens array is widely used in IID. The receiver of Light Tools behaves like a 2D sensor, so we use an imaging lens to capture the reconstructed object at different viewer positions. 
Figure 6. Implementation of wide-viewing angle PLS display with Light Tools 
Figure 7 shows the simulation result of the ray collection. In Figure 7(a), the three differently directed ray bundles are collected by the lens array at three different positions. Because of Petzval curvature, the size of the PLS increases, as shown in Figure 7(b). We use only three plane illuminations, arranged along the horizontal axis; therefore, the width of the PLS is larger than its height, as shown in Figure 7(b). If the lens array consisted of perfect elemental lenses, all rays would be collected at the PLS plane and the width and height of the PLS would be equal. The receiver is located at the PLS plane, so we can measure the distance between two PLSs: it is 1 mm in Figure 7(b), equal to the size of the elemental lens. 
Figure 7. Ray collection (a) in top side and (b) in PLS plane
Figure 8 shows the object reconstructed with the conventional PLS display and with the proposed method at five different positions: left 20°, left 10°, center 0°, right 10°, and right 20°. Since the imaging lens and the receiver together act as a single-lens camera, the reconstructed object “3” is inverted. No object is visible in Figures 8(a) and 8(e), because left 20° and right 20° lie outside the viewing angle of the conventional PLS display. In the proposed method these two positions are inside the viewing angle, so the object is reconstructed in Figures 8(f) and 8(j). The quality of the reconstructed object depends on the number of rays. 
Figure 8. Reconstructions of (a)-(e) conventional PLS display and (f)-(j) proposed PLS display 
Figure 9. Intensity distributions of (a)-(e) the conventional and (f)-(j) the proposed PLS display at the five different viewing directions 
Figure 9 shows the intensity distributions of the conventional and proposed PLS displays at the five positions: left 20°, left 10°, center 0°, right 10°, and right 20°. In the conventional PLS display, the intensity decreases as the observation angle increases, and at left 20° and right 20° the observer sees no object. From Figures 9(f)-(j), the intensities of the object “3” are almost the same in all five viewing directions. 
CONCLUSION 
In conclusion, a method to enhance the viewing angle in II has been proposed and demonstrated by simulation. In this method, three collimated illuminations are directed at three different angles in order to widen the propagation angle of the point light source (PLS). Among the three illuminations, the two slanted ones increase the propagation angle of the PLS over the conventional method. The simulation results show that the viewing angle of the proposed PLS display is three times larger than that of the conventional PLS display. 
ACKNOWLEDGEMENT 
This research work is supported by the MICT (Ministry of Information and Communication Technology Division), Government of the People’s Republic of Bangladesh [56.00.0000.028.33.007.15.14-312]. 
REFERENCES 
[1] G. Lippmann, (1908) “La Photographie Integrale,” C. R. Acad. Sci., vol. 146, pp. 446–451. 
[2] F. Okano, H. Hoshino, J. Arai, and I. Yuyama, (1997) “Real-Time Pickup Method for a Three 
Dimensional Image Based on Integral Photography,” Appl. Opt., vol. 36, pp. 1598-1603. 
[3] Y.-W. Song, B. Javidi, and F. Jin, (2005) “3D Object Scaling in Integral Imaging Display by Varying 
the Spatial Ray Sampling Rate,” Opt. Express, vol. 13, pp. 3242-3251. 
[4] F. Okano, J. Arai, K. Mitani, and M. Okui, (2006) “Real-Time Integral Imaging Based on Extremely 
High Resolution Video System,” Proc. of IEEE , vol. 94, pp. 490-501. 
[5] J. Arai, F. Okano, H. Hoshino, and I. Yuyama, (1998) “Gradient-index Lens Array Method Based on 
Real-time Integral Photography for Three-dimensional Images,” Appl. Opt., vol. 37, pp. 2034–2045. 
[6] R. Cuenca, A. Pons, G. Saavedra, M. Corral, and B. Javidi, (2006) “Optically-Corrected Elemental 
Images for Undistorted Integral Image Display,” Opt. Express, vol. 14, pp. 9657-9663. 
[7] N. Davies, M. McCormick, and L. Yang, (1998) “Three-Dimensional Imaging Systems: A New 
Development,” Appl. Opt., vol. 27, pp. 4520-4528. 
[8] Y. A. Dudnikov, (1971) “Elimination of Image Pseudoscopy in Integral Photography,” Opt. 
Technology, vol. 38, pp. 140-143. 
[9] S.-W. Min, J. Hong, and B. Lee, (2004) “Analysis of an Optical Depth Converter Used in a Three- 
Dimensional Integral Imaging System,” Appl. Opt., vol. 43, pp. 4539-4549. 
[10] B. Lee, J.-H. Park and S.-W. Min, (2006) “Three-Dimensional Display and Information Processing 
Based on Integral Imaging,” in Digital Holography and Three-Dimensional Display, (Springer, New 
York, USA, 2006), pp. 333-378. 
[11] G. Baasantseren, J.-H. Park, K.-C. Kwon and N. Kim, (2009) “Viewing Angle Enhanced Integral 
Imaging Display Using Two Elemental Image Masks,” Opt. Express, vol. 17, no. 16, p. 14405. 
[12] J.-H. Park, J. Kim, Y. Kim, and B. Lee, (2005) “Resolution-Enhanced Three-Dimension / Two- 
Dimension Convertible Display Based on Integral Imaging,” Opt. Express, vol. 13, pp. 1875-1884. 
[13] J. -H. Park, J. Kim, J.- P. Bae, Y. Kim, and B. Lee, (2005) “Viewing Angle Enhancement of Three- 
Dimension/Two-Dimension Convertible Integral Imaging Display Using Double Collimated or 
Noncollimated Illumination,” Jpn. J. Appl. Phys, vol. 44, pp. 991-994.
Authors 
Md. Shariful Islam received the B.Sc. and M.Sc. degrees from the Department of Applied Physics, Electronics and Communication Engineering (erstwhile Electronics & Applied Physics), Islamic University, Kushtia-7003, Bangladesh, in 1999 (examination held in 2001) and 2000 (examination held in 2003), respectively. He joined the Dept. of Information and Communication Engineering, IU, Kushtia-7003, Bangladesh, as a lecturer in September 2004 and is currently an Assistant Professor in the same department. He is now pursuing his Ph.D. on integral imaging three-dimensional (3D) display systems. His research interests are in the areas of integral imaging 3D display technology and image processing. 
Md. Tariquzzaman received the B.Sc. and M.Sc. degrees from the Department of Applied Physics, Electronics and Communication Engineering (erstwhile Electronics & Applied Physics), Islamic University, Kushtia-7003, Bangladesh, in 1999 (examination held in 2001) and 2000 (examination held in 2003), respectively. He received the Ph.D. degree in Engineering in 2010 from Chonnam National University, Republic of Korea. From 2010 to 2011 he worked as a postdoctoral fellow in the School of Electronics & Computer Engineering, Chonnam National University, and at Mokpo National University, Republic of Korea. He joined the Dept. of Information and Communication Engineering, IU, Kushtia-7003, Bangladesh, as a lecturer in September 2004 and is currently an Associate Professor in the same department. His research interests are in the areas of audio-visual speech processing, biometrics, computer vision, and machine learning. 
Md. Zahidul Islam was born on December 29, 1978 in Kushtia, Bangladesh. He received his B.Sc. and M.Sc. degrees from the Department of Applied Physics & Electronic Engineering, University of Rajshahi (RU), Bangladesh, in 2000 and 2002, respectively. In 2003, he joined the Department of Information and Communication Engineering, Islamic University (IU), Kushtia, Bangladesh, as a Lecturer. He carried out his Ph.D. research on visual object tracking systems in the Department of Computer Engineering under the supervision of Prof. Chil-Woo Lee in the Intelligent Image Media & Interface Lab, Chonnam National University (CNU), South Korea, and was awarded his Ph.D. by the same department in August 2011. He also completed a research internship in the 3D Vision Lab at the Samsung Advanced Institute of Technology (SAIT), Suwon, South Korea. Dr. Islam’s other research interests include computer vision, 3D objects, human and motion tracking, tracking of articulated bodies, and genetic algorithms.
Beyond Boundaries: Leveraging No-Code Solutions for Industry InnovationBeyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry InnovationSafe Software
 
IAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsIAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsEnterprise Knowledge
 
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Alan Dix
 
Pigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping ElbowsPigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping ElbowsPigging Solutions
 
Pigging Solutions in Pet Food Manufacturing
Pigging Solutions in Pet Food ManufacturingPigging Solutions in Pet Food Manufacturing
Pigging Solutions in Pet Food ManufacturingPigging Solutions
 
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...HostedbyConfluent
 
Enhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for PartnersEnhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for PartnersThousandEyes
 
Install Stable Diffusion in windows machine
Install Stable Diffusion in windows machineInstall Stable Diffusion in windows machine
Install Stable Diffusion in windows machinePadma Pradeep
 
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticsKotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticscarlostorres15106
 
How to Remove Document Management Hurdles with X-Docs?
How to Remove Document Management Hurdles with X-Docs?How to Remove Document Management Hurdles with X-Docs?
How to Remove Document Management Hurdles with X-Docs?XfilesPro
 
AI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsAI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsMemoori
 
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking MenDelhi Call girls
 
Snow Chain-Integrated Tire for a Safe Drive on Winter Roads
Snow Chain-Integrated Tire for a Safe Drive on Winter RoadsSnow Chain-Integrated Tire for a Safe Drive on Winter Roads
Snow Chain-Integrated Tire for a Safe Drive on Winter RoadsHyundai Motor Group
 
Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Allon Mureinik
 
08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking MenDelhi Call girls
 
08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking MenDelhi Call girls
 

Recently uploaded (20)

How to convert PDF to text with Nanonets
How to convert PDF to text with NanonetsHow to convert PDF to text with Nanonets
How to convert PDF to text with Nanonets
 
GenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationGenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day Presentation
 
Beyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry InnovationBeyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
 
IAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsIAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI Solutions
 
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
 
Pigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping ElbowsPigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping Elbows
 
Pigging Solutions in Pet Food Manufacturing
Pigging Solutions in Pet Food ManufacturingPigging Solutions in Pet Food Manufacturing
Pigging Solutions in Pet Food Manufacturing
 
The transition to renewables in India.pdf
The transition to renewables in India.pdfThe transition to renewables in India.pdf
The transition to renewables in India.pdf
 
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
 
Enhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for PartnersEnhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for Partners
 
Install Stable Diffusion in windows machine
Install Stable Diffusion in windows machineInstall Stable Diffusion in windows machine
Install Stable Diffusion in windows machine
 
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticsKotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
 
How to Remove Document Management Hurdles with X-Docs?
How to Remove Document Management Hurdles with X-Docs?How to Remove Document Management Hurdles with X-Docs?
How to Remove Document Management Hurdles with X-Docs?
 
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptxE-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
 
AI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsAI as an Interface for Commercial Buildings
AI as an Interface for Commercial Buildings
 
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
 
Snow Chain-Integrated Tire for a Safe Drive on Winter Roads
Snow Chain-Integrated Tire for a Safe Drive on Winter RoadsSnow Chain-Integrated Tire for a Safe Drive on Winter Roads
Snow Chain-Integrated Tire for a Safe Drive on Winter Roads
 
Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)
 
08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men
 
08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men
 

Wide-Viewing Angle 3D Display Using Multiple Illuminations

DOI: 10.5121/ijci.2014.3402
The captured image is composed of numerous small elemental images, formed by the lens array, whose number corresponds to that of the elemental lenses; the size of an elemental image therefore equals the size of an elemental lens. A real but pseudoscopic (depth-reversed) image [7-10] is reconstructed through the lens array when the elemental images are displayed on a 2D display device, as in Figure 1(b).

Although we live in a three-dimensional (3D) world, almost all displays present two-dimensional (2D) images. 3D displays are needed for important applications such as virtual surgery, 3D design, and remote control, because a 3D image carries extra information that is helpful to a human observer; 3D display is therefore an interesting research field. Several 3D technologies have been developed, for example stereoscopy, autostereoscopy, holography, and integral imaging. Among them, integral imaging requires no special glasses and offers continuous viewpoints within the viewing angle, as well as full parallax, full color, and real-time operation. However, a conventional integral imaging display (IID) suffers from a small viewing angle (VA), limited viewing resolution, and short depth range [11]. In this paper, we focus on the VA issue.

2. VIEWING ANGLE OF PLS DISPLAY

Figure 2. Conventional PLS display

Figure 2 shows the structure of a PLS display, which consists of a light source, a collimating lens, a lens array, and a spatial light modulator (SLM). The collimated light is focused by the lens array in its focal plane, called the PLS plane. The number of PLSs equals the number of elemental lenses, because a conventional PLS display uses a single plane illumination. When the SLM displays the elemental images, the light from the PLSs is modulated to create an integrated 3D image.
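The last step above, modulated rays from several PLSs intersecting in an integrated 3D point, can be sketched numerically. The coordinate convention and the helper names (`ray_through`, `intersect`) are ours, not the paper's: the PLS plane is z = 0 and the SLM sits a distance d behind it.

```python
def ray_through(pixel_x, pls_x, d):
    """Ray from an SLM pixel at (pixel_x, -d) through a PLS at (pls_x, 0),
    returned as (x at z = 0, slope dx/dz)."""
    return pls_x, (pls_x - pixel_x) / d

def intersect(r1, r2):
    """Intersection (x, z) of two non-parallel rays given as (x0, slope)."""
    (x1, s1), (x2, s2) = r1, r2
    z = (x2 - x1) / (s1 - s2)
    return x1 + s1 * z, z

p_l, f = 5.0, 8.0          # elemental-lens pitch and focal length (Sec. 2 values)
d = f                      # with g = 2f the SLM is f behind the PLS plane
z_target = 40.0            # hypothetical integrated point on the axis

rays = []
for i in (-1, 1):                      # PLSs of elemental lenses -1 and +1
    pls_x = i * p_l
    slope = -pls_x / z_target          # aim the ray at (0, z_target)
    pixel_x = pls_x - slope * d        # SLM pixel that must be opened
    rays.append(ray_through(pixel_x, pls_x, d))

print(intersect(rays[0], rays[1]))     # -> (0.0, 40.0): the integrated point
```

Opening the corresponding pixels under lenses -1 and +1 makes both rays cross exactly at the target point, which is the mechanism behind P1 in Figure 2.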
For example, three rays reconstruct the 3D integrated point P1 at the intersection of those rays when three pixels of the SLM, EI1, EI2, and EI3, are open and the other pixels are closed, as shown in Figure 2. The size of an elemental image depends on the position of the SLM. If the gap g between the SLM and the lens array is longer than 2f, the light beams from the PLSs overlap on the SLM; when g is shorter than 2f, the PLSs leave parts of the SLM unilluminated and thus unused. Therefore, g is set equal to twice the focal length of the lens array. Each elemental lens creates one PLS in the focal plane, so the distance between two PLSs equals the elemental-lens size P_L. This distance is an important parameter because it determines the spatial resolution of the PLS display. As the elemental lens becomes smaller, the spatial resolution of the PLS display increases, but the angular resolution decreases, because fewer SLM pixels correspond to each elemental lens. From Figure 2, a 3D integrated point
appears at the intersection of the rays diverging from the PLSs, so the viewing angle of the PLS display is determined by the diverging angle of the PLS [12]. It is given by

VA = 2·arctan(P_L / (2f)),  (1)

where P_L and f are the size and the focal length of the elemental lens. However, the viewing angles of individual integrated points differ, because the viewing angle depends on the position of the 3D point; for example, the viewing angles of P1 and P2 in Figure 2 are different. Since α and α' are the angles of the two extreme rays that reconstruct P2, the viewing angle of P2 is

VA = α + α' = arctan(AB / f) + arctan(A'B' / f).

From the similar triangles ABO and P2DO we can write

AB / AO = P2D / DO,  (2)

where AO = f, DO = z2 + f, and z2 is the distance of P2 from the lens array. Since P2D = i·P_L − x, the disparity is

AB = f·(i·P_L − x) / (z2 + f),  (3)

where x is the lateral coordinate of P2 and i is the index of the elemental lens that directs the upper extreme ray of P2 in Figure 2. The disparity AB must be smaller than P_L/2 and greater than −P_L/2, because the size of an elemental image equals that of an elemental lens. The angle of the upper extreme ray is therefore

α = arctan( (i·P_L − x) / (z2 + f) ).  (4)

Repeating the previous steps, the angle of the lower extreme ray is

α' = arctan( (x − i'·P_L) / (z2 + f) ),  (5)

where i' is the index of the elemental lens that directs the lower extreme ray of P2. Hence, the viewing angle of an integrated point at any position is

VA = α + α' = arctan( (i·P_L − x) / (z2 + f) ) + arctan( (x − i'·P_L) / (z2 + f) ).  (6)

Here i and i' are the indices of the elemental lenses that direct the two rays with the largest disparities among the rays collected to reconstruct the 3D point; the largest disparity is half the elemental-lens size.
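As a check on the derivation, the sketch below evaluates Eqs. (1) and (6) numerically; the function names and the integer search for i and i' are ours. For a point on a lens axis the disparity bound of Eq. (3) is met exactly and Eq. (6) reproduces the maximum of Eq. (1), consistent with the value quoted for Figure 3.

```python
import math

def va_conventional(p_l, f):
    """Maximum viewing angle of the conventional PLS display, Eq. (1), in degrees."""
    return math.degrees(2 * math.atan(p_l / (2 * f)))

def va_point(p_l, f, x, z, max_disp=None):
    """Viewing angle of an integrated point at (x, z) from Eq. (6), in degrees.

    i and i' are the extreme elemental-lens indices whose disparity,
    Eq. (3), stays within +/- max_disp (P_L/2 for a single illumination).
    """
    if max_disp is None:
        max_disp = p_l / 2
    reach = max_disp * (z + f) / f          # largest |i*P_L - x| allowed by Eq. (3)
    i = math.floor((x + reach) / p_l)       # lens of the upper extreme ray, Eq. (4)
    i_prime = math.ceil((x - reach) / p_l)  # lens of the lower extreme ray, Eq. (5)
    return math.degrees(math.atan((i * p_l - x) / (z + f))
                        + math.atan((x - i_prime * p_l) / (z + f)))

p_l, f = 5.0, 8.0                        # elemental-lens parameters used in Figure 3
print(va_conventional(p_l, f))           # ~34.7 degrees
print(va_point(p_l, f, x=0.0, z=40.0))   # on-axis point at z = 40 mm: same ~34.7
```

For off-axis points `va_point` returns smaller values, reproducing the position dependence plotted in Figure 3.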
Figure 3. Calculation of the viewing angle of integrated points. (a) Dependence on the distance and lateral position of the integrated point. Viewing angles of integrated points (b) on the plane at z = 40 mm and (c) on the central axis of the lens array

Figure 3 shows the calculated viewing angle of an integrated point, determined by the angle between its two extreme rays. In this calculation, the size and focal length of the elemental lens and the distance between the lens array and the display are 5 mm, 8 mm, and 16 mm, respectively. The distance of the integrated point ranges from 30 mm to 50 mm, and the lateral position is measured from the center of the lens array. According to Figure 3, the viewing angle of the PLS display is larger than that of an IID with the same configuration. The viewing angles depend strongly on the number of collected rays: when the disparity of the elemental image equals half the elemental-lens size, the viewing angle of that point reaches its maximum. For example, the viewing angles of points at 40 mm from the SLM and at lateral positions of -10 mm, -5 mm, 0 mm, 5 mm, and 10 mm are 34.76° in Figure 3; those positions are the centers of the elemental lenses with indices -2, -1, 0, 1, and 2. J.-H. Park et al. used double collimated illumination to increase the horizontal viewing angle of a 3D/2D convertible IID [13]. However, their method enhances only the horizontal viewing angle and requires a different set of elemental images for each of the two collimated illuminations.

3. PRINCIPLE OF WIDE-VIEWING ANGLE PLS DISPLAY

Figure 4. Structure of the wide-viewing angle display
Figure 4 shows the structure of the proposed method. A conventional PLS display uses only the center illumination, but in the proposed method left, center, and right illuminations are collected by the lens array in its focal plane and propagate in different directions. One elemental lens collects the three illuminations at three different positions in the focal plane; hence one PLS consists of three parts, from the left, center, and right illuminations, focused onto the focal point of the central elemental lens by three neighboring elemental lenses. The two additional parts enlarge the propagation angle of the PLS, so the total viewing angle of the proposed method is enhanced. The three parts of one PLS must be focused on a single point, the center of the corresponding central elemental lens in the focal plane; the left and right illuminations are therefore collected a distance P_L from the centers of their own lenses. In this condition, the angle of the slanted illuminations is

α = arctan(P_L / f).  (8)

Since the diverging angle of the rays determines the viewing angle, the viewing angle of the proposed method is

VA = 2·arctan(3·P_L / (2f)).  (9)

From Equations (1) and (9), the viewing angle of the proposed method is three times that of the conventional PLS display, although both equations give only the maximum viewing angle. The viewing angle of any point is still defined by Eq. (6), but the indices of the two extreme elemental lenses are calculated with a different disparity bound: in the new method the disparity must be greater than −3·P_L/2 and smaller than 3·P_L/2 because of the two additional illuminations.

Figure 5. Viewing angle of the proposed method. (a) Dependence on the distance and lateral position of the integrated point. Viewing angles of integrated points (b) on the plane at 40 mm from the SLM and (c) on the central axis of the lens array

Figure 5 shows the viewing-angle calculation of the proposed method over the same image volume as the conventional PLS display. Comparing Figures 3 and 5, the viewing angle of the proposed method is enhanced by the multiple plane illuminations. In Figure 5(b), the viewing angles of points in the plane at z = 40 mm are 2.86 times larger than in the conventional PLS display with the same configuration. In addition, the difference between the maximum and minimum viewing angles of the proposed method is smaller than in the conventional PLS display; from Figure 5(c), this difference decreases as the distance of the 3D point increases. The gap g between the SLM and the lens array is
an important parameter, because both the overlapping of elemental images and unused parts of the SLM depend on it. For these reasons, the gap is set to

g = 3f/2.  (10)

When g is given by Eq. (10), the proposed method can display a 3D image without the overlapping problem, and all parts of the SLM are used.

4. SIMULATION RESULTS

In the simulation, we used Light Tools 6.3 to reconstruct an object. The simulation setup is shown in Figure 6. Three plane light sources illuminate a lens array whose focal length f is 3.3 mm and whose elemental-lens size P_L is 1 mm; this lens array is widely used in IID. Since the receiver of Light Tools behaves like a 2D sensor, we use an imaging lens to capture the reconstructed object from different viewer positions.

Figure 6. Implementation of the wide-viewing angle PLS display with Light Tools

Figure 7 shows the simulation result of the ray collection. In Figure 7(a), the three differently directed ray bundles are collected by the lens array at three slightly different positions because of Petzval curvature, which increases the size of the PLS, as shown in Figure 7(b). We use only three plane illuminations, arranged along the horizontal axis; therefore the width of a PLS is larger than its height, as shown in Figure 7(b). If the lens array consisted of perfect elemental lenses, all rays would be collected in the PLS plane and the width and height of a PLS would be equal. The receiver is located in the PLS plane, so we can measure the distance between two PLSs: it is 1 mm in Figure 7(b), equal to the size of the elemental lens.

Figure 7. Ray collection (a) in top view and (b) in the PLS plane
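For the lens array just described (P_L = 1 mm, f = 3.3 mm), Eqs. (1), (9), and (10) give concrete numbers; a minimal sketch (the function name is ours):

```python
import math

def va_max(p_l, f, n_illum=1):
    """Maximum viewing angle in degrees: Eq. (1) for one illumination,
    Eq. (9) for the three illuminations of the proposed method."""
    return math.degrees(2 * math.atan(n_illum * p_l / (2 * f)))

p_l, f = 1.0, 3.3          # lens array used in the Light Tools simulation
conv = va_max(p_l, f, 1)   # ~17.2 degrees, conventional PLS display
prop = va_max(p_l, f, 3)   # ~48.9 degrees, proposed method
print(conv, prop, prop / conv)

g = 3 * f / 2              # gap of the proposed display, Eq. (10), about 4.95 mm
print(g)
```

The ratio comes out near 2.84 rather than exactly 3 because arctan is sublinear; this is consistent with the 2.86x enhancement reported for the plane at z = 40 mm in Figure 5(b).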
Figure 8 shows the object reconstructed with the conventional and the proposed PLS display at five positions: left 20°, left 10°, center 0°, right 10°, and right 20°. Since the imaging lens and the receiver act as a single-lens camera, the reconstructed object "3" is inverted. No object is visible in Figures 8(a) and 8(e), because the left 20° and right 20° positions are outside the viewing angle of the conventional PLS display. In the proposed method, those two positions lie inside the viewing angle, so the object is reconstructed in Figures 8(f) and 8(j). The quality of the reconstructed object depends on the number of rays.

Figure 8. Reconstructions of (a)-(e) the conventional PLS display and (f)-(j) the proposed PLS display

Figure 9. Intensity distributions of (a)-(e) the conventional and (f)-(j) the proposed PLS display at the five viewing directions

Figure 9 shows the intensity distributions of the conventional and proposed PLS displays at the same five positions: left 20°, left 10°, center 0°, right 10°, and right 20°. In the conventional PLS display, the intensity decreases as the observer's angle increases, and at left 20° and right 20° the observer sees no object. From Figures 9(f)-(j), the intensities of the object "3" are almost the same in all five viewing directions.
CONCLUSION

In conclusion, a method to enhance the viewing angle in integral imaging has been proposed and demonstrated by simulation. In this method, three collimated illuminations are directed at three different angles in order to widen the propagation angle of the point light sources; the two slanted illuminations increase the propagation angle of each PLS over the conventional method. The simulation results show that the viewing angle of the proposed PLS display is three times larger than that of the conventional PLS display.

ACKNOWLEDGEMENT

This research work is supported by MICT (Ministry of Information and Communication Technology Division), Government of the People's Republic of Bangladesh [56.00.0000.028.33.007.15.14-312].

REFERENCES

[1] G. Lippmann, (1908) "La Photographie Intégrale," C. R. Acad. Sci., vol. 146, pp. 446-451.
[2] F. Okano, H. Hoshino, J. Arai, and I. Yuyama, (1997) "Real-Time Pickup Method for a Three-Dimensional Image Based on Integral Photography," Appl. Opt., vol. 36, pp. 1598-1603.
[3] Y.-W. Song, B. Javidi, and F. Jin, (2005) "3D Object Scaling in Integral Imaging Display by Varying the Spatial Ray Sampling Rate," Opt. Express, vol. 13, pp. 3242-3251.
[4] F. Okano, J. Arai, K. Mitani, and M. Okui, (2006) "Real-Time Integral Imaging Based on Extremely High Resolution Video System," Proc. of IEEE, vol. 94, pp. 490-501.
[5] J. Arai, F. Okano, H. Hoshino, and I. Yuyama, (1998) "Gradient-Index Lens Array Method Based on Real-Time Integral Photography for Three-Dimensional Images," Appl. Opt., vol. 37, pp. 2034-2045.
[6] R. Cuenca, A. Pons, G. Saavedra, M. Corral, and B. Javidi, (2006) "Optically-Corrected Elemental Images for Undistorted Integral Image Display," Opt. Express, vol. 14, pp. 9657-9663.
[7] N. Davies, M. McCormick, and L. Yang, (1988) "Three-Dimensional Imaging Systems: A New Development," Appl. Opt., vol. 27, pp. 4520-4528.
[8] Y. A. Dudnikov, (1971) "Elimination of Image Pseudoscopy in Integral Photography," Opt. Technology, vol. 38, pp. 140-143.
[9] S.-W. Min, J. Hong, and B. Lee, (2004) "Analysis of an Optical Depth Converter Used in a Three-Dimensional Integral Imaging System," Appl. Opt., vol. 43, pp. 4539-4549.
[10] B. Lee, J.-H. Park, and S.-W. Min, (2006) "Three-Dimensional Display and Information Processing Based on Integral Imaging," in Digital Holography and Three-Dimensional Display, Springer, New York, USA, pp. 333-378.
[11] G. Baasantseren, J.-H. Park, K.-C. Kwon, and N. Kim, (2009) "Viewing Angle Enhanced Integral Imaging Display Using Two Elemental Image Masks," Opt. Express, vol. 17, no. 16, p. 14405.
[12] J.-H. Park, J. Kim, Y. Kim, and B. Lee, (2005) "Resolution-Enhanced Three-Dimension/Two-Dimension Convertible Display Based on Integral Imaging," Opt. Express, vol. 13, pp. 1875-1884.
[13] J.-H. Park, J. Kim, J.-P. Bae, Y. Kim, and B. Lee, (2005) "Viewing Angle Enhancement of Three-Dimension/Two-Dimension Convertible Integral Imaging Display Using Double Collimated or Noncollimated Illumination," Jpn. J. Appl. Phys., vol. 44, pp. 991-994.
Authors

Md. Shariful Islam received the B.Sc. and M.Sc. degrees from the Department of Applied Physics, Electronics and Communication Engineering (erstwhile Electronics and Applied Physics), Islamic University, Kushtia-7003, Bangladesh, in 1999 (examination held in 2001) and 2000 (examination held in 2003), respectively. He joined the Department of Information and Communication Engineering, IU, Kushtia-7003, Bangladesh, as a lecturer in September 2004 and currently works as an Assistant Professor in the same department. He is pursuing his Ph.D. on integral imaging three-dimensional (3D) display systems. His research interests are integral imaging 3D display technology and image processing.

Md. Tariquzzaman received the B.Sc. and M.Sc. degrees from the Department of Applied Physics, Electronics and Communication Engineering (erstwhile Electronics and Applied Physics), Islamic University, Kushtia-7003, Bangladesh, in 1999 (examination held in 2001) and 2000 (examination held in 2003), respectively. He received the Ph.D. degree in Engineering in 2010 from Chonnam National University, Republic of Korea. From 2010 to 2011 he worked as a postdoctoral fellow in the School of Electronics and Computer Engineering, Chonnam National University, and at Mokpo National University, Republic of Korea. He joined the Department of Information and Communication Engineering, IU, Kushtia-7003, Bangladesh, as a lecturer in September 2004 and currently works as an Associate Professor in the same department. His research interests are audio-visual speech processing, biometrics, computer vision, and machine learning.

Md. Zahidul Islam was born on December 29, 1978 in Kushtia, Bangladesh. He received his B.Sc. and M.Sc. degrees from the Department of Applied Physics and Electronic Engineering, University of Rajshahi (RU), Bangladesh, in 2000 and 2002, respectively.
In 2003, he joined the Department of Information and Communication Engineering, Islamic University (IU), Kushtia, Bangladesh, as a Lecturer. He carried out his Ph.D. research on visual object tracking systems in the Department of Computer Engineering under the supervision of Prof. Chil-Woo Lee in the Intelligent Image Media Interface Lab, Chonnam National University (CNU), South Korea, and was awarded the Ph.D. by the same department in August 2011. He also completed a research internship in the 3D Vision Lab at the Samsung Advanced Institute of Technology (SAIT), Suwon, South Korea. His other research interests include computer vision, 3D object, human and motion tracking, articulated-body tracking, and genetic algorithms.