E-ISSN: 2321–9637
Volume 2, Issue 1, January 2014
International Journal of Research in Advent Technology
Available Online at: http://www.ijrat.org
AUTOMATIC VIEW SYNTHESIS FROM
STEREOSCOPIC 3D
BY IMAGE DOMAIN WARPING
Mrs. Mayuri T. Deshmukh, Assistant Professor, S.S.B.T's C.O.E.T., Bambhori. Email: mayuri.deshmukh08@gmail.com
Miss. Ashwini T. Sharnagat, M.E. I (Digital Electronics), S.S.B.T's C.O.E.T., Bambhori. Email: aashwini179@gmail.com
ABSTRACT- In this paper, an automatic view synthesis method based on image domain warping is presented that synthesizes new views directly from S3D video and operates fully automatically. Nowadays, stereoscopic 3D (S3D) cinema is already conventional, and almost all new display devices for the home support S3D content. S3D distribution to the home is already established partly in the form of 3D Blu-ray discs, video on demand services, or television channels. The need to wear glasses is, however, often considered as an obstacle which hinders broader acceptance of this technology in the home. Multiview autostereoscopic displays make possible a glasses-free perception of S3D content for several observers simultaneously and support head motion parallax in a limited range. To verify the entire system, the results are observed on 3D television and 3D cinema displays, allowing the video to be viewed with a glasses-free perception.
Index Terms—Three-dimensional TV, autostereoscopic displays, 3D Blu-ray discs, head motion parallax
1. INTRODUCTION
STEREOSCOPIC 3D (S3D) cinema and television are in the process of changing the landscape of entertainment. Primarily responsible for this change is the fact that technologies ranging from 3D content creation, to data compression and transmission, to 3D display devices are progressively improving and being adapted to enable a rich and higher quality 3D experience. However, the necessity to wear glasses is often regarded as a main obstacle of today's conventional stereoscopic 3D display systems. Multi-view autostereoscopic displays (MADs) overcome this problem. They allow glasses-free stereo viewing by emitting several images at the same time.

Stereoscopic 3D can expand users' experiences beyond traditional 2D-TV broadcasting by offering programs with a depth impression of the observed scenes. In fact, 3D has been successfully commercialized as stereo movies, such as those by IMAX, for people to watch in the cinema using special devices. Given that the popularity of 3D programs has dramatically increased, 3D-TV has been seen as a possible breakthrough for conventional TV technologies to satisfy the coming need for watching 3D programs at home. Typical MADs which are on the market today require 8 views, 9 views, or even 28 views as input. Because of the different number of input views required by different MADs, no unique display format exists for such displays; the production and transmission formats therefore have to be decoupled from the display format. According to the formats involved in the distribution chain (Fig.1), the transmission format has to enable such a decoupling. Hence, a good transmission format has to fulfil the following requirements:
• An automatic conversion from production to transmission format has to be possible. For live broadcast applications, real-time conversion is also required.
• The transmission format has to allow an automatic and real-time conversion into any particular N-view display format.
• The transmission format has to be well compressible to save transmission bandwidth. Nowadays, professional and consumer 3D content production is dominated by S3D content, i.e. 2-view content which is watchable on stereoscopic displays. It is believed that S3D as a production format will dominate over the coming years [Nikolce Stefanoski, 2013].

Fig.1. High level view on the usual format conversions required from content production to display.

Research communities and standardization bodies continue to investigate formats which are well compressible and enable efficient generation of novel views as required by MADs. Such formats can be divided into two classes: video-only formats like S3D and multiview video in general, and depth-enhanced formats like single-view video plus depth and multiview video plus depth. The per-pixel depth information included in the depth-enhanced formats provides information on the depth structure of the 3D scene. Such depth data can be used for view synthesis by depth-image-based rendering (DIBR) [Masayuki T, Zhao Y, and C. Zhu, 2012], [Fehn C, 2004].

However, high quality view synthesis with DIBR requires high quality depth data. There exist stereo algorithms which can automatically compute depth maps from stereo images, and depth sensors which can capture depth, but usually with too little accuracy to allow a high quality synthesis as required e.g. in professional content productions. Thus, today highly accurate depth maps are usually generated in a semiautomatic process, where stereo algorithms or depth sensors are used to estimate or capture initial depth maps which are then improved in an interactive process. The most simple and cost efficient conversion from S3D as a production format to a transmission format consists of conducting only low-level conversion steps (like colour space, bit depth, frame-rate, or image resolution conversions). Such a transmission format can be well compressed and is compatible with the existing S3D distribution and stereoscopic display infrastructure. Thus, using a transmission format without supplementary depth data prevents an increase of content production and distribution costs. The presented synthesis method can be used at the decoder side to synthesize new views directly from transmitted S3D content, and a system for real-time view synthesis is presented and analysed.

Fig.2 Stereo pinhole camera model illustrating the projection of a 3D point P into the image planes of both cameras.

Fig.2 shows a bird's-eye view of a pin-hole model, parallel stereo camera setup. The projection centers of both cameras are located at positions CL and CR at a baseline distance of b, and both cameras have a focal length of f.
Projecting a 3D point P, which is located at a distance Z from the projection centers, into the respective projection planes gives the projected points pL and pR. Hence, the projected points have an image disparity of

d = (pL − cL) − (pR − cR) = f·b/Z. (1)

Obviously, a synthesis of a new view at a position

Cnew = (1 − α)·CL + α·CR (2)

corresponds to having a baseline distance

bnew = α·b (3)

with respect to the left view. Hence, that would give disparities

dnew = α·d, (4)

i.e. a linear rescaling of all previous disparities.
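To make equations (1)–(4) concrete, the following short Python sketch computes the disparity of a point and rescales it for an intermediate view. This is a hypothetical numeric example; the focal length, baseline, and depth values are our assumptions, not taken from the paper.

```python
# Disparity for a parallel stereo setup (Eq. 1) and its linear
# rescaling for a virtual view at interpolation position alpha (Eq. 2-4).

def disparity(f, b, z):
    """Image disparity d = f*b/Z of a 3D point at depth Z (Eq. 1)."""
    return f * b / z

def rescaled_disparity(d, alpha):
    """Disparity seen from a virtual camera C_new = (1-alpha)*C_L + alpha*C_R,
    i.e. with baseline alpha*b, which linearly rescales d (Eq. 3-4)."""
    return alpha * d

# Assumed example values: focal length in pixels, baseline and depth in meters.
f, b, z = 1200.0, 0.065, 3.0
d = disparity(f, b, z)               # 26.0 pixels
d_mid = rescaled_disparity(d, 0.5)   # 13.0 pixels for the center view
print(d, d_mid)
```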
2. IMAGE-DOMAIN-WARPING
Automatic view synthesis technology which synthesizes new views from 2-view video is highly attractive, since such synthesis technology would be compatible with any S3D distribution infrastructure to the home. Image-domain Warping (IDW) is a view synthesis method which is able to automatically synthesize new views based on stereoscopic video input. In contrast to synthesis methods based on DIBR, which rely on dense disparity or depth maps, IDW employs only sparse disparities to synthesize a novel view. It uses the facts that our human visual system is not able to estimate absolute depth very exactly and that it is not sensitive to image distortions up to a certain level as long as images remain visually plausible, e.g. image distortions can be hidden in non-salient regions. The extracted sparse disparities are used to compute an image warp which enforces desired disparities in the final synthesized image while distortions are hidden in non-salient regions, and they determine which disparities have to be enforced in a synthesized image [Yin Zhao and Ce Zhu, 2011].
The goal is to “warp” the pixels of the image so that they appear in the correct place for a new viewpoint. An advantage of IDW is that no geometric model of the object/environment is needed; warping can be done in time proportional to the screen size and (mostly) independent of the object/environment complexity. The disadvantages are a limited resolution and the fact that excessive warping reveals visual artifacts; a linear rescaling of all previous disparities d is required to synthesize the new view.
In general, a stereo image pair does not contain sufficient information to completely describe an image captured from a slightly different camera position. IDW uses image saliency information to deal with this problem [Aliaga, 2010].
I. Image Warps: We define a warp as a function that deforms the parameter domain of an image,

w : [0, W] × [0, H] → R², (5)

where W and H are the width and height of the image, respectively. Image warps have a long history of use in computer vision and graphics problems. The goal of the IDW method is to compute a warping of each of the initial stereo images that can be used to produce an output image meeting predefined properties (e.g. scaled image disparities). To do this, we formulate a quadratic energy functional E(w). A warp w is then discretized at regular grid positions:

W[p, q] := w(Δx·p, Δy·q). (6)
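The paper gives no code for this discretization; as an illustration of Eq. (6), the following sketch (our own, with assumed image and grid sizes) stores a warp only at regular grid positions and evaluates it at arbitrary pixel coordinates by bilinear interpolation.

```python
import numpy as np

def eval_warp(W, dx, dy, x, y):
    """Bilinearly interpolate a grid-sampled warp W[p, q] = w(dx*p, dy*q)
    at a continuous image position (x, y). W has shape (P, Q, 2)."""
    p, q = x / dx, y / dy
    p0, q0 = int(np.floor(p)), int(np.floor(q))
    p1, q1 = min(p0 + 1, W.shape[0] - 1), min(q0 + 1, W.shape[1] - 1)
    a, b = p - p0, q - q0
    return ((1 - a) * (1 - b) * W[p0, q0] + a * (1 - b) * W[p1, q0]
            + (1 - a) * b * W[p0, q1] + a * b * W[p1, q1])

# Identity warp on a 9x9 grid covering an assumed 640x480 image.
dx, dy = 640 / 8, 480 / 8
P, Q = 9, 9
W = np.stack(np.meshgrid(np.arange(P) * dx, np.arange(Q) * dy,
                         indexing="ij"), axis=-1)
print(eval_warp(W, dx, dy, 100.0, 200.0))  # -> [100. 200.]
```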
Fig.3 Example of a warping function that deforms an input image.

II. View Synthesis: The IDW algorithm computes N-view video from 2-view video, where N > 2. It can be separated into four modules. In the Warp Calculator, only those warps are computed which are necessary for the synthesis of a view that is located in the middle between the two input views. These warps are calculated by minimizing their respective error functionals. The calculated warps are then used in the Warp Interpolator/Extrapolator module to interpolate or extrapolate the warps that are necessary to synthesize the N output views, as required by a particular multiview autostereoscopic display. Finally, in the Image-domain Warping module, images are warped to synthesize the output images.

Fig. 4 Block diagram of the view synthesizer which converts 2-view video to N-view video.

I. Data Extraction: First, a sparse set of disparity features is extracted. These sparse disparities are estimated in an automatic, accurate and robust way. Disparities of vertical image edges are particularly important for the stereopsis, i.e. the perceived depth. For this reason, additional features and corresponding disparities are detected such that features lie uniformly distributed on nearly vertical image edges. Disparities of detected features are estimated using the Lucas-Kanade method, as sketched below. The availability of such features is also important to prevent salient synthesis errors with IDW, like bending edges in the synthesized image.

Fig. 5 Disparities estimated at sparse feature positions.
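One plausible implementation of this extraction step, sketched with OpenCV; the detector choice, the parameter values, and the assumption of a rectified stereo pair are ours, not from the paper.

```python
import cv2

def sparse_disparities(img_left, img_right, max_features=500):
    """Track corner features from the left to the right image with the
    pyramidal Lucas-Kanade method and return per-feature disparities."""
    gray_l = cv2.cvtColor(img_left, cv2.COLOR_BGR2GRAY)
    gray_r = cv2.cvtColor(img_right, cv2.COLOR_BGR2GRAY)
    # Detect well-trackable features (corners tend to lie on image edges).
    pts_l = cv2.goodFeaturesToTrack(gray_l, max_features,
                                    qualityLevel=0.01, minDistance=7)
    # Lucas-Kanade tracking yields the corresponding right-view positions.
    pts_r, status, _ = cv2.calcOpticalFlowPyrLK(gray_l, gray_r, pts_l, None)
    ok = status.ravel() == 1
    pts_l, pts_r = pts_l.reshape(-1, 2)[ok], pts_r.reshape(-1, 2)[ok]
    # For a rectified pair the disparity is the horizontal displacement.
    disparities = pts_l[:, 0] - pts_r[:, 0]
    return pts_l, pts_r, disparities
```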
II. Warp Calculation: Two warps wL and wR are computed which warp images IL and IR, respectively, to a camera position located in the center between the two input cameras. For a given sparse set of disparity features (xL, xR) ∈ F, the disparities corresponding to the center position, i.e. (following Eq. (4) with α = 0.5)

dnew = 0.5·(xR − xL), (7)
have to be enforced. Each warp is computed as the result of an energy minimization problem. An energy functional E is defined, and minimizing it yields a warp w that creates the desired change of disparities after view synthesis. The energy functional is defined with help of the extracted data and consists of three additive terms which are related to a particular type of constraint as described below. Each term is weighted with a parameter λ:

E(w) := λd·Ed(w) + λs·Es(w) + λt·Et(w). (8)

In the warp calculation, four ingredients are used: disparity constraints, spatial smoothness constraints, temporal smoothness constraints, and energy minimization. This functional always represents an over-constrained equation system. The number of spatial and temporal smoothness constraints is dense in the number of degrees of freedom of the warp w to be solved, while the number of disparity constraints depends on the number of detected features, which is content dependent.
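Because the functional (8) is quadratic, its minimizer can be found by solving an over-determined sparse linear system in the least-squares sense. A minimal one-dimensional sketch of this idea follows; the weights, the problem size, and the restriction to disparity and spatial smoothness terms are our simplifications, not the paper's actual formulation.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

# Toy 1D analogue of Eq. (8): solve for warp positions w[0..n-1] such that
# selected nodes hit target positions (disparity constraints) while
# neighbouring nodes keep their spacing (spatial smoothness constraints).
n = 10
targets = {2: 2.5, 7: 7.8}     # node index -> desired warped position (assumed)
lam_d, lam_s = 1.0, 0.5        # assumed weights for the two energy terms

rows = len(targets) + (n - 1)  # more equations than unknowns: over-constrained
A = lil_matrix((rows, n))
b = np.zeros(rows)
r = 0
for i, t in targets.items():   # disparity constraints: w[i] should equal t
    A[r, i] = lam_d
    b[r] = lam_d * t
    r += 1
for i in range(n - 1):         # smoothness: w[i+1] - w[i] should stay 1.0
    A[r, i], A[r, i + 1] = -lam_s, lam_s
    b[r] = lam_s * 1.0
    r += 1

w = lsqr(A.tocsr(), b)[0]      # least-squares minimizer of the quadratic energy
print(np.round(w, 2))
```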
III. Warp Interpolation/Extrapolation: Multiview autostereoscopic displays require many views from different camera positions as input. Only the warps for the center position are calculated explicitly; the warps for the remaining output positions are interpolated or extrapolated from them. The main reason for this restriction is to reduce the overall computational complexity of the warp calculation. Furthermore, using this approach, the complexity of the warp computation does not depend on the number of output views required by a particular display system.

Fig.6 Cameras, camera positions, and associated warps.
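A minimal sketch of such an interpolation/extrapolation, assuming that warp displacements scale linearly with the output position α (consistent with the linear disparity rescaling of Eq. (4)); the function and variable names are ours.

```python
import numpy as np

def interpolate_warp(w_center, grid, alpha):
    """Derive a warp for output position alpha from the center warp.

    w_center maps grid positions of the left view (alpha = 0) to the
    center view (alpha = 0.5). Assuming displacements scale linearly
    with alpha, the warp for alpha is the identity plus a rescaled
    displacement field; alpha outside [0, 0.5] extrapolates."""
    displacement = w_center - grid
    return grid + (alpha / 0.5) * displacement

# Tiny demo: a 2x2 grid of 2D positions shifted 4 px to the right.
grid = np.array([[[0., 0.], [80., 0.]], [[0., 60.], [80., 60.]]])
w_center = grid + np.array([4.0, 0.0])
print(interpolate_warp(w_center, grid, 1.0))  # 8 px shift: extrapolated view
```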
IV. Image-Domain Warping: An output image at a position α is synthesized according to the closer warp, i.e. Iα is synthesized based on the input image which is closer to the desired output position. Because warps are continuous, no holes can occur in the synthesized image. In particular, open regions are implicitly inpainted by stretching non-salient texture from the neighborhood into the region. We noticed that this kind of implicit inpainting provides good synthesis results in practice as long as the synthesized views are in the range −0.5 ≤ α ≤ 1.5 (Fig.6). However, if only one image is used for the synthesis, empty regions can occur on the left or right border of the output image.
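One common way to implement the resampling step is backward mapping, sketched below with SciPy: every output pixel looks up its source coordinates and samples the input image bilinearly. Using an inverse warp for this is our simplification; the paper itself defines forward warps.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_image(image, src_x, src_y):
    """Backward-map an image: output pixel (y, x) takes the value of the
    input image at (src_y[y, x], src_x[y, x]), sampled bilinearly.
    src_x/src_y are dense per-pixel source coordinates derived from the
    (inverted) warp; because the mapping is continuous, no holes occur."""
    coords = np.stack([src_y, src_x])  # coordinate order: (row, col)
    return map_coordinates(image, coords, order=1, mode="nearest")

# Example: shift an assumed grayscale image 5 pixels to the left.
h, w = 480, 640
img = np.random.rand(h, w)
ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
shifted = warp_image(img, xs + 5.0, ys.astype(float))
```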
3. TRANSMISSION OF WARPS AS SUPPLEMENTARY DATA
To reduce the computational complexity at the receiver side, we modify the transmission system described above; the modified system is shown in Fig.7. It is proposed to shift the warp extraction and warp calculation parts to the sending side and, in addition to the multiview data, to efficiently compress and transmit the warp calculation result, i.e. a restricted set of warps.

Fig.7 Modified transmission and view synthesis system.
I. Warp Coding with a Dedicated Warp Coder: Warps of all time instances and views are encoded successively. For each view, they are encoded separately and multiplexed into a single bit stream. Without loss of generality, the coding of a warp sequence assigned to one view is described in the following. We denote the warp at time instant f as wf. Each warp wf is represented as a regular quad grid, as in Fig.3, of fixed resolution, where each node of the grid is indexed with integer coordinates i, j and has a 2D location wf[i, j] ∈ R² assigned.

Fig. 8 Block-diagram of warp coder.

I. Spatial Partitioning: A partition consists of a set of 2D locations which we call a group of locations (GOL). Each warp wf is partitioned into GOLs using a quincunx resolution pyramid.

II. Intra-Warp and Inter-Warp Prediction: Similar to video coding standards, three warp coding types and corresponding prediction modes are supported: INTRA, INTER_P and INTER_B. After prediction, residuals wf[i, j] − ŵf[i, j] are computed, uniformly quantized, and entropy coded. Quantized residuals of each GOL are entropy coded independently from other GOLs. In the INTRA prediction mode, all locations of the first GOL are scanned row-wise and predicted in a closed-loop DPCM from previously scanned spatially neighbouring locations, as shown in Fig.9. Locations of all other GOLs are predicted by the respective centroids computed from spatially neighbouring locations of previously coded GOLs, as indicated in Fig.9.
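A minimal sketch of the closed-loop DPCM idea used by the INTRA mode, reduced to one scanned row of scalar values; the quantization step size is an assumption, and the real coder predicts 2D locations within GOLs and entropy codes the residual indices.

```python
def dpcm_encode(values, step):
    """Closed-loop DPCM: predict each value from the previous
    *reconstructed* value, uniformly quantize the residual, and keep
    the reconstruction in the loop so encoder and decoder stay in sync."""
    indices = []
    recon = 0.0
    for v in values:
        residual = v - recon             # prediction error
        q = int(round(residual / step))  # uniform quantization
        indices.append(q)                # these would be entropy coded
        recon = recon + q * step         # decoder-side reconstruction
    return indices

def dpcm_decode(indices, step):
    recon, out = 0.0, []
    for q in indices:
        recon = recon + q * step
        out.append(recon)
    return out

row = [0.0, 1.1, 2.3, 3.2, 4.8]          # one scanned row of warp coordinates
idx = dpcm_encode(row, step=0.5)
print(idx, dpcm_decode(idx, step=0.5))
```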
II. Warp Coding With Help of a Video Coder: To take advantage of already existing and highly sophisticated video coding technology, we propose the warp coding system shown in Fig.9 as an alternative to the dedicated warp coder.

Fig.9. Warp coding system using a video coder.

I. Coding System: Similar to the coding with the dedicated warp coder, warps are encoded separately for each view and then multiplexed into a single bit stream. To encode a warp, first, a Warp Quantizer is used to convert each warp into an 8-bit grayscale image representation.
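The paper does not specify the Warp Quantizer here; a simple assumption is an affine mapping of each warp coordinate channel to the 8-bit range, sketched below. The min/max normalization and the function names are ours.

```python
import numpy as np

def warp_to_gray8(warp_component, lo, hi):
    """Affinely map one warp coordinate channel from [lo, hi] to 0..255
    so it can be compressed as an 8-bit grayscale image."""
    scaled = (warp_component - lo) / (hi - lo) * 255.0
    return np.clip(np.round(scaled), 0, 255).astype(np.uint8)

def gray8_to_warp(img, lo, hi):
    """Inverse mapping used at the decoder side (lo/hi sent as metadata)."""
    return img.astype(np.float64) / 255.0 * (hi - lo) + lo
```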
4. DISCUSSION
Recently, the Moving Pictures Experts Group (MPEG) issued a Call for Proposals (CfP) on 3D Video Coding technology with the goal to identify:
• a 3D video format,
• a corresponding efficient compression technology,
• a view synthesis technology which enables an efficient synthesis of new views based on the 3D video format.

Fig. 10 Transmission and view synthesis system.

This includes Data Extraction, Warp Calculation, and Warp Interpolation/Extrapolation. Please note that such a 3D format with 2 views is already supported by existing consumer and professional stereo cameras. With each proposal to the CfP, compressed bit streams, a decoder, and view synthesis software had to be provided. Bit streams had to be compressed at predefined target bit rates. Proposals were evaluated by assessing the quality of the synthesized views through formal subjective testing on both stereoscopic and multiview autostereoscopic displays. Fig.11 shows the quality assessed on a multiview autostereoscopic display. Qualitatively similar results were also assessed on a stereoscopic display; the corresponding stereo sequences are available for download [Sikora Thomas, 1997].

Fig.11. Assessed quality in the MPEG multiview autostereoscopic display test scenario.

5. CONCLUSION
A view synthesis method based on Image-domain Warping has been presented. The approach automatically synthesizes new views from stereoscopic 3D video. It relies on an automatic estimation of sparse disparities and image saliency information, and enforces target disparities in the synthesized images using an image warping framework. Image-domain Warping leads to high quality synthesis results without requiring depth map estimation and transmission. While a dedicated warp coder can have a stronger coding efficiency, the advantage of reusing existing video coding technology for warp coding lies in the reduced development and production costs, i.e. in the reuse of available video coding chips. For this reason, based on the evaluation results presented, JCT-3V plans to extend the upcoming 3D-HEVC standard by warp coding based on HEVC, which will allow receivers to perform a synthesis of new views based on Image-domain Warping. This will enable the transmission of
multi-view video plus warp data with an international standard, which will allow the use of IDW for view synthesis at the receiver side.
REFERENCES
[1] Aliaga, D. G., "Image Warping," CS635, Spring 2010.
[2] Fehn, C., "Depth-image-based rendering (DIBR), compression, and transmission for a new approach on 3D-TV," Proc. SPIE, vol. 5291, pp. 93–104, May 2004.
[3] Le Gall, D. J., "The MPEG video compression algorithm," Signal Processing: Image Communication, vol. 4, no. 4, pp. 129–140, 1992.
[4] Muller, K., Merkle, P., and Wiegand, T., "3-D video representation using depth maps," Proc. IEEE, vol. 99, no. 4, pp. 643–656, Apr. 2011.
[5] Masayuki, T., Zhao, Y., and Zhu, C., 3D-TV System with Depth-Image-Based Rendering: Architecture, Techniques and Challenges, 1st ed. New York, NY, USA: Springer-Verlag, 2012.
[6] Stefanoski, N., Wang, O., Lang, M., Greisen, P., Heinzle, S., and Smolic, A., "Automatic View Synthesis by Image-Domain-Warping," IEEE Transactions on Image Processing, vol. 22, no. 9, September 2013.
[7] Smolic, A., "3D video and free viewpoint video—From capture to display," Pattern Recognition, vol. 44, no. 9, pp. 1958–1968, Sep. 2011.
[8] Sikora, T., "The MPEG-4 Video Standard Verification Model," IEEE Transactions on Circuits and Systems for Video Technology, vol. 7, no. 1, February 1997.
[9] Zhao, Y., Zhu, C., Chen, Z., and Yu, L., "Depth No-Synthesis-Error Model for View Synthesis in 3-D Video," IEEE Transactions on Image Processing, vol. 20, no. 8, August 2011.
[10] Zilly, F., Riechert, C., Eisert, P., and Kauff, P., "Semantic kernels binarized—A feature descriptor for fast and robust matching," Nov. 2011, pp. 39–48.
More Related Content

What's hot

Tangible 3 D Hand Gesture
Tangible 3 D Hand GestureTangible 3 D Hand Gesture
Tangible 3 D Hand GestureJongHyoun
 
AN EMERGING TREND OF FEATURE EXTRACTION METHOD IN VIDEO PROCESSING
AN EMERGING TREND OF FEATURE EXTRACTION METHOD IN VIDEO PROCESSINGAN EMERGING TREND OF FEATURE EXTRACTION METHOD IN VIDEO PROCESSING
AN EMERGING TREND OF FEATURE EXTRACTION METHOD IN VIDEO PROCESSINGcscpconf
 
Image Processing
Image ProcessingImage Processing
Image ProcessingRolando
 
nternational Journal of Computational Engineering Research(IJCER)
nternational Journal of Computational Engineering Research(IJCER)nternational Journal of Computational Engineering Research(IJCER)
nternational Journal of Computational Engineering Research(IJCER)ijceronline
 
Digital image processing
Digital image processingDigital image processing
Digital image processingDeevena Dayaal
 
Sensors optimized for 3 d digitization
Sensors optimized for 3 d digitizationSensors optimized for 3 d digitization
Sensors optimized for 3 d digitizationBasavaraj Patted
 
Ijarcet vol-2-issue-3-938-941
Ijarcet vol-2-issue-3-938-941Ijarcet vol-2-issue-3-938-941
Ijarcet vol-2-issue-3-938-941Editor IJARCET
 
Sensors on 3 d digitization seminar report
Sensors on 3 d digitization seminar reportSensors on 3 d digitization seminar report
Sensors on 3 d digitization seminar reportVishnu Prasad
 
Digital image processing
Digital image processingDigital image processing
Digital image processingtushar05
 
Device for text to speech production and to braille script
Device for text to speech production and to braille scriptDevice for text to speech production and to braille script
Device for text to speech production and to braille scriptIAEME Publication
 
Retrieving Of Color Images Using SDS Technique
Retrieving Of Color Images Using SDS TechniqueRetrieving Of Color Images Using SDS Technique
Retrieving Of Color Images Using SDS TechniqueEditor IJMTER
 
An Introduction to Image Processing and Artificial Intelligence
An Introduction to Image Processing and Artificial IntelligenceAn Introduction to Image Processing and Artificial Intelligence
An Introduction to Image Processing and Artificial IntelligenceWasif Altaf
 

What's hot (20)

Tangible 3 D Hand Gesture
Tangible 3 D Hand GestureTangible 3 D Hand Gesture
Tangible 3 D Hand Gesture
 
Chapter2 DIP
Chapter2 DIP Chapter2 DIP
Chapter2 DIP
 
AN EMERGING TREND OF FEATURE EXTRACTION METHOD IN VIDEO PROCESSING
AN EMERGING TREND OF FEATURE EXTRACTION METHOD IN VIDEO PROCESSINGAN EMERGING TREND OF FEATURE EXTRACTION METHOD IN VIDEO PROCESSING
AN EMERGING TREND OF FEATURE EXTRACTION METHOD IN VIDEO PROCESSING
 
Image processing Presentation
Image processing PresentationImage processing Presentation
Image processing Presentation
 
Image Processing
Image ProcessingImage Processing
Image Processing
 
G0352039045
G0352039045G0352039045
G0352039045
 
nternational Journal of Computational Engineering Research(IJCER)
nternational Journal of Computational Engineering Research(IJCER)nternational Journal of Computational Engineering Research(IJCER)
nternational Journal of Computational Engineering Research(IJCER)
 
Digital image processing
Digital image processingDigital image processing
Digital image processing
 
Sensors optimized for 3 d digitization
Sensors optimized for 3 d digitizationSensors optimized for 3 d digitization
Sensors optimized for 3 d digitization
 
Ijarcet vol-2-issue-3-938-941
Ijarcet vol-2-issue-3-938-941Ijarcet vol-2-issue-3-938-941
Ijarcet vol-2-issue-3-938-941
 
Sensors on 3 d digitization seminar report
Sensors on 3 d digitization seminar reportSensors on 3 d digitization seminar report
Sensors on 3 d digitization seminar report
 
Digital image processing
Digital image processingDigital image processing
Digital image processing
 
Image Processing
Image ProcessingImage Processing
Image Processing
 
Digital image processing
Digital image processingDigital image processing
Digital image processing
 
Device for text to speech production and to braille script
Device for text to speech production and to braille scriptDevice for text to speech production and to braille script
Device for text to speech production and to braille script
 
Retrieving Of Color Images Using SDS Technique
Retrieving Of Color Images Using SDS TechniqueRetrieving Of Color Images Using SDS Technique
Retrieving Of Color Images Using SDS Technique
 
Tele imersion
Tele imersionTele imersion
Tele imersion
 
An Introduction to Image Processing and Artificial Intelligence
An Introduction to Image Processing and Artificial IntelligenceAn Introduction to Image Processing and Artificial Intelligence
An Introduction to Image Processing and Artificial Intelligence
 
M.sc. m hassan
M.sc. m hassanM.sc. m hassan
M.sc. m hassan
 
E04552327
E04552327E04552327
E04552327
 

Viewers also liked

Paper id 212014103
Paper id 212014103Paper id 212014103
Paper id 212014103IJRAT
 
Paper id 25201451
Paper id 25201451Paper id 25201451
Paper id 25201451IJRAT
 
Paper id 212014136
Paper id 212014136Paper id 212014136
Paper id 212014136IJRAT
 
Paper id 252014139
Paper id 252014139Paper id 252014139
Paper id 252014139IJRAT
 
Paper id 24201464
Paper id 24201464Paper id 24201464
Paper id 24201464IJRAT
 
Paper id 24201471
Paper id 24201471Paper id 24201471
Paper id 24201471IJRAT
 
Paper id 24201463
Paper id 24201463Paper id 24201463
Paper id 24201463IJRAT
 
Paper id 24201478
Paper id 24201478Paper id 24201478
Paper id 24201478IJRAT
 
Paper id 252014137
Paper id 252014137Paper id 252014137
Paper id 252014137IJRAT
 
Paper id 24201458
Paper id 24201458Paper id 24201458
Paper id 24201458IJRAT
 
Paper id 25201446
Paper id 25201446Paper id 25201446
Paper id 25201446IJRAT
 
Paper id 24201445
Paper id 24201445Paper id 24201445
Paper id 24201445IJRAT
 
Paper id 26201429
Paper id 26201429Paper id 26201429
Paper id 26201429IJRAT
 
Paper id 25201417
Paper id 25201417Paper id 25201417
Paper id 25201417IJRAT
 
Paper id 212014145
Paper id 212014145Paper id 212014145
Paper id 212014145IJRAT
 
Paper id 23201417
Paper id 23201417Paper id 23201417
Paper id 23201417IJRAT
 
Paper id 24201486
Paper id 24201486Paper id 24201486
Paper id 24201486IJRAT
 
Paper id 26201476
Paper id 26201476Paper id 26201476
Paper id 26201476IJRAT
 
Paper id 25201444
Paper id 25201444Paper id 25201444
Paper id 25201444IJRAT
 
Paper id 27201447
Paper id 27201447Paper id 27201447
Paper id 27201447IJRAT
 

Viewers also liked (20)

Paper id 212014103
Paper id 212014103Paper id 212014103
Paper id 212014103
 
Paper id 25201451
Paper id 25201451Paper id 25201451
Paper id 25201451
 
Paper id 212014136
Paper id 212014136Paper id 212014136
Paper id 212014136
 
Paper id 252014139
Paper id 252014139Paper id 252014139
Paper id 252014139
 
Paper id 24201464
Paper id 24201464Paper id 24201464
Paper id 24201464
 
Paper id 24201471
Paper id 24201471Paper id 24201471
Paper id 24201471
 
Paper id 24201463
Paper id 24201463Paper id 24201463
Paper id 24201463
 
Paper id 24201478
Paper id 24201478Paper id 24201478
Paper id 24201478
 
Paper id 252014137
Paper id 252014137Paper id 252014137
Paper id 252014137
 
Paper id 24201458
Paper id 24201458Paper id 24201458
Paper id 24201458
 
Paper id 25201446
Paper id 25201446Paper id 25201446
Paper id 25201446
 
Paper id 24201445
Paper id 24201445Paper id 24201445
Paper id 24201445
 
Paper id 26201429
Paper id 26201429Paper id 26201429
Paper id 26201429
 
Paper id 25201417
Paper id 25201417Paper id 25201417
Paper id 25201417
 
Paper id 212014145
Paper id 212014145Paper id 212014145
Paper id 212014145
 
Paper id 23201417
Paper id 23201417Paper id 23201417
Paper id 23201417
 
Paper id 24201486
Paper id 24201486Paper id 24201486
Paper id 24201486
 
Paper id 26201476
Paper id 26201476Paper id 26201476
Paper id 26201476
 
Paper id 25201444
Paper id 25201444Paper id 25201444
Paper id 25201444
 
Paper id 27201447
Paper id 27201447Paper id 27201447
Paper id 27201447
 

Similar to Paper id 21201457

Digital stereoscopic imaging (1)
Digital stereoscopic imaging (1)Digital stereoscopic imaging (1)
Digital stereoscopic imaging (1)kamsaliraviteja
 
Teleimmersion
TeleimmersionTeleimmersion
Teleimmersionstudent
 
FPGA Based Pattern Generation and Synchonization for High Speed Structured Li...
FPGA Based Pattern Generation and Synchonization for High Speed Structured Li...FPGA Based Pattern Generation and Synchonization for High Speed Structured Li...
FPGA Based Pattern Generation and Synchonization for High Speed Structured Li...TELKOMNIKA JOURNAL
 
Automatic 2D to 3D Video Conversion For 3DTV's
 Automatic 2D to 3D Video Conversion For 3DTV's Automatic 2D to 3D Video Conversion For 3DTV's
Automatic 2D to 3D Video Conversion For 3DTV'sRishikese MR
 
Stereoscopic 3D Advertising FAQs
Stereoscopic 3D Advertising FAQsStereoscopic 3D Advertising FAQs
Stereoscopic 3D Advertising FAQsMike Fuchsman
 
iMinds insights - 3D Visualization Technologies
iMinds insights - 3D Visualization TechnologiesiMinds insights - 3D Visualization Technologies
iMinds insights - 3D Visualization TechnologiesiMindsinsights
 
3-D Video Formats and Coding- A review
3-D Video Formats and Coding- A review3-D Video Formats and Coding- A review
3-D Video Formats and Coding- A reviewinventionjournals
 
3-D Video Formats and Coding- A review
3-D Video Formats and Coding- A review3-D Video Formats and Coding- A review
3-D Video Formats and Coding- A reviewinventionjournals
 
PROPOSED SYSTEM FOR MID-AIR HOLOGRAPHY PROJECTION USING CONVERSION OF 2D TO 3...
PROPOSED SYSTEM FOR MID-AIR HOLOGRAPHY PROJECTION USING CONVERSION OF 2D TO 3...PROPOSED SYSTEM FOR MID-AIR HOLOGRAPHY PROJECTION USING CONVERSION OF 2D TO 3...
PROPOSED SYSTEM FOR MID-AIR HOLOGRAPHY PROJECTION USING CONVERSION OF 2D TO 3...IAEME Publication
 
HA5 – COMPUTER ARTS BLOG ARTICLE – 3D: The Basics
HA5 – COMPUTER ARTS BLOG ARTICLE – 3D: The BasicsHA5 – COMPUTER ARTS BLOG ARTICLE – 3D: The Basics
HA5 – COMPUTER ARTS BLOG ARTICLE – 3D: The Basicshamza_123456
 
HA5 – COMPUTER ARTS BLOG ARTICLE – 3D: The Basics
HA5 – COMPUTER ARTS BLOG ARTICLE – 3D: The BasicsHA5 – COMPUTER ARTS BLOG ARTICLE – 3D: The Basics
HA5 – COMPUTER ARTS BLOG ARTICLE – 3D: The Basicshamza_123456
 
DISTRIBUTED SYSTEM FOR 3D REMOTE MONITORING USING KINECT DEPTH CAMERAS
DISTRIBUTED SYSTEM FOR 3D REMOTE MONITORING USING KINECT DEPTH CAMERASDISTRIBUTED SYSTEM FOR 3D REMOTE MONITORING USING KINECT DEPTH CAMERAS
DISTRIBUTED SYSTEM FOR 3D REMOTE MONITORING USING KINECT DEPTH CAMERAScscpconf
 
CT-SVD and Arnold Transform for Secure Color Image Watermarking
CT-SVD and Arnold Transform for Secure Color Image WatermarkingCT-SVD and Arnold Transform for Secure Color Image Watermarking
CT-SVD and Arnold Transform for Secure Color Image WatermarkingAM Publications,India
 
Preliminary study of multi view imaging for accurate
Preliminary study of multi view imaging for accuratePreliminary study of multi view imaging for accurate
Preliminary study of multi view imaging for accurateeSAT Publishing House
 
An Adaptive Remote Display Framework to Improve Power Efficiency
An Adaptive Remote Display Framework to Improve Power Efficiency An Adaptive Remote Display Framework to Improve Power Efficiency
An Adaptive Remote Display Framework to Improve Power Efficiency csandit
 

Similar to Paper id 21201457 (20)

Digital stereoscopic imaging (1)
Digital stereoscopic imaging (1)Digital stereoscopic imaging (1)
Digital stereoscopic imaging (1)
 
Teleimmersion
TeleimmersionTeleimmersion
Teleimmersion
 
3D Workshop
3D Workshop3D Workshop
3D Workshop
 
Tele-immersion
Tele-immersionTele-immersion
Tele-immersion
 
FPGA Based Pattern Generation and Synchonization for High Speed Structured Li...
FPGA Based Pattern Generation and Synchonization for High Speed Structured Li...FPGA Based Pattern Generation and Synchonization for High Speed Structured Li...
FPGA Based Pattern Generation and Synchonization for High Speed Structured Li...
 
Automatic 2D to 3D Video Conversion For 3DTV's
 Automatic 2D to 3D Video Conversion For 3DTV's Automatic 2D to 3D Video Conversion For 3DTV's
Automatic 2D to 3D Video Conversion For 3DTV's
 
Stereoscopic 3D Advertising FAQs
Stereoscopic 3D Advertising FAQsStereoscopic 3D Advertising FAQs
Stereoscopic 3D Advertising FAQs
 
iMinds insights - 3D Visualization Technologies
iMinds insights - 3D Visualization TechnologiesiMinds insights - 3D Visualization Technologies
iMinds insights - 3D Visualization Technologies
 
3-D Video Formats and Coding- A review
3-D Video Formats and Coding- A review3-D Video Formats and Coding- A review
3-D Video Formats and Coding- A review
 
3-D Video Formats and Coding- A review
3-D Video Formats and Coding- A review3-D Video Formats and Coding- A review
3-D Video Formats and Coding- A review
 
3d television
3d television3d television
3d television
 
PROPOSED SYSTEM FOR MID-AIR HOLOGRAPHY PROJECTION USING CONVERSION OF 2D TO 3...
PROPOSED SYSTEM FOR MID-AIR HOLOGRAPHY PROJECTION USING CONVERSION OF 2D TO 3...PROPOSED SYSTEM FOR MID-AIR HOLOGRAPHY PROJECTION USING CONVERSION OF 2D TO 3...
PROPOSED SYSTEM FOR MID-AIR HOLOGRAPHY PROJECTION USING CONVERSION OF 2D TO 3...
 
HA5 – COMPUTER ARTS BLOG ARTICLE – 3D: The Basics
HA5 – COMPUTER ARTS BLOG ARTICLE – 3D: The BasicsHA5 – COMPUTER ARTS BLOG ARTICLE – 3D: The Basics
HA5 – COMPUTER ARTS BLOG ARTICLE – 3D: The Basics
 
3D - The Basics
3D - The Basics 3D - The Basics
3D - The Basics
 
HA5 – COMPUTER ARTS BLOG ARTICLE – 3D: The Basics
HA5 – COMPUTER ARTS BLOG ARTICLE – 3D: The BasicsHA5 – COMPUTER ARTS BLOG ARTICLE – 3D: The Basics
HA5 – COMPUTER ARTS BLOG ARTICLE – 3D: The Basics
 
A0540106
A0540106A0540106
A0540106
 
DISTRIBUTED SYSTEM FOR 3D REMOTE MONITORING USING KINECT DEPTH CAMERAS
DISTRIBUTED SYSTEM FOR 3D REMOTE MONITORING USING KINECT DEPTH CAMERASDISTRIBUTED SYSTEM FOR 3D REMOTE MONITORING USING KINECT DEPTH CAMERAS
DISTRIBUTED SYSTEM FOR 3D REMOTE MONITORING USING KINECT DEPTH CAMERAS
 
CT-SVD and Arnold Transform for Secure Color Image Watermarking
CT-SVD and Arnold Transform for Secure Color Image WatermarkingCT-SVD and Arnold Transform for Secure Color Image Watermarking
CT-SVD and Arnold Transform for Secure Color Image Watermarking
 
Preliminary study of multi view imaging for accurate
Preliminary study of multi view imaging for accuratePreliminary study of multi view imaging for accurate
Preliminary study of multi view imaging for accurate
 
An Adaptive Remote Display Framework to Improve Power Efficiency
An Adaptive Remote Display Framework to Improve Power Efficiency An Adaptive Remote Display Framework to Improve Power Efficiency
An Adaptive Remote Display Framework to Improve Power Efficiency
 

More from IJRAT

96202108
9620210896202108
96202108IJRAT
 
97202107
9720210797202107
97202107IJRAT
 
93202101
9320210193202101
93202101IJRAT
 
92202102
9220210292202102
92202102IJRAT
 
91202104
9120210491202104
91202104IJRAT
 
87202003
8720200387202003
87202003IJRAT
 
87202001
8720200187202001
87202001IJRAT
 
86202013
8620201386202013
86202013IJRAT
 
86202008
8620200886202008
86202008IJRAT
 
86202005
8620200586202005
86202005IJRAT
 
86202004
8620200486202004
86202004IJRAT
 
85202026
8520202685202026
85202026IJRAT
 
711201940
711201940711201940
711201940IJRAT
 
711201939
711201939711201939
711201939IJRAT
 
711201935
711201935711201935
711201935IJRAT
 
711201927
711201927711201927
711201927IJRAT
 
711201905
711201905711201905
711201905IJRAT
 
710201947
710201947710201947
710201947IJRAT
 
712201907
712201907712201907
712201907IJRAT
 
712201903
712201903712201903
712201903IJRAT
 

More from IJRAT (20)

96202108
9620210896202108
96202108
 
97202107
9720210797202107
97202107
 
93202101
9320210193202101
93202101
 
92202102
9220210292202102
92202102
 
91202104
9120210491202104
91202104
 
87202003
8720200387202003
87202003
 
87202001
8720200187202001
87202001
 
86202013
8620201386202013
86202013
 
86202008
8620200886202008
86202008
 
86202005
8620200586202005
86202005
 
86202004
8620200486202004
86202004
 
85202026
8520202685202026
85202026
 
711201940
711201940711201940
711201940
 
711201939
711201939711201939
711201939
 
711201935
711201935711201935
711201935
 
711201927
711201927711201927
711201927
 
711201905
711201905711201905
711201905
 
710201947
710201947710201947
710201947
 
712201907
712201907712201907
712201907
 
712201903
712201903712201903
712201903
 

Paper id 21201457

  • 1. E-ISSN: 2321–9637 Volume 2, Issue 1, January 2014 International Journal of Research in Advent Technology Available Online at: http://www.ijrat.org i AUTOMATIC VIEW SYNTHESIS FROM STEREOSCOPIC 3D BY IMAGE DOMAIN WARPING Mrs. Mayuri T. Deshmukh Miss.Ashwini T. Sharnagat Assistant Professor ME Ist Digital Electronic S.S.B.T’s C.O.E.T. Bambhori. S.S.B.T’s C.O.E.T. Bambhori. Email id:mayuri.deshmukh08@gmail.com Email id:aashwini179@gmail.com ABSTARCT- This paper focuses on the automatic view synthesis by image domain warping is presented that automatically synthesizes new views directly from S3D video and functions completely. Nowadays, stereoscopic 3D (S3D) cinema is already conventional and almost all new display devices for the home support S3D content. S3D giving out communications to the home is already established partly in the form of 3D Blu-ray discs, video on demand services, or television channels. The need to wear glasses is, however, often considered as an obstacle, which hinders broader acceptance of this technology in the home. Multiviewautosterescopic displays make possible a glasses free perception of S3D content for several observers simultaneously and support head motion parallax in a limited range. To verify the entire system the result is being observed on 3D television, 3D cinema’s for view the video allow a glasses free perception. Index Terms—Three dimensional TV, auto stereoscopic displays, 3D Blu-ray discs, head motion parallax 1. INTRODUCTION STEREOSCOPIC 3D (S3D) cinema and television are in the process of changing the scenery of entertainment. Primarily responsible for the change is the fact that technologies ranging from 3D content creation, to data compression and transmission, to 3D display devices are progressively improving and adapted to enable a rich and higher quality 3D experience. However, the necessity to wear glasses is often regarded as a main obstacle of today’s conventional stereoscopic 3D display systems. Multi-view auto stereoscopic displays (MAD) overcome this problem. They allow glasses free stereo viewing by emitting several images at the same time. Stereoscopic 3D can expand users’ experiences beyond traditional2D-TV broadcasting by contribution programs with depth idea of the observed scenes. IN fact, 3D has been successfully commercialized as stereo movies, such as those by IMAX, for people to watch in the cinema, using special devices. Given that the popularity of 3D programs has dramatically increased, 3D-TV has been known as a possible break through for conventional TV technologies to satisfy the coming need for watching 3D programs at home. Typical MADs which are on the market today require 8-views, 9-views or even 28-views as input. Because of the different number of input views required by different MADs, no unique display format exists for such displays. According to the formats involved in the distribution chain (Fig.1.), the transmission format has to enable such a decoupling. Hence, a good transmission format has to fulfil the following requirements • An automatic conversion from production to transmission format has to be possible. For live broadcast applications also real-time conversion is required.
  • 2. Volume 2, Issue 1, January 2014 International Journal of Research in Advent Technology Available Online at: • The transmission format has to allow an automatic and rea format. • The transmission format has to be well compressible to and consumer 3D content production stereoscopic displays. It is believed t Stefanoski,2013] Fig.1. High level view on the usual format conversions Research communities and standardization bodies continue to and enable efficient generation of novel views as required by MADs. Such formats can be divided into two classes: video only formats like S3D and multiview video in general, and depth enhanced formats like si view video plus depth and multiview video plus depth. The per enhanced formats provides information on the depth structure of the 3D scene. Such depth data can be used for view synthesis by depth-image bas 2004]. However, high quality view synthesis with DIBR requires high quality depth data. There exist stereo algorithms which can automatically compute depth maps from stereo images, or depth sensors which can capture depth usually of too little content productions. Thus, today highly accurate depth maps are usually generated in a semiautomatic where stereo algorithms or depth sensors are used to estimate or capture initial depth ma improved in an interactive process. The most simple and cost efficient conversion from S3D As a format to a transmission format consists of conducting only low level conversion steps (like colour space, bit depth, frame-rate, or image resolution conversions). Such a transmission format can be compatible to the existing S3D distribution and stereoscopic display infrastructure. Thus, using a transmission format without supplementary depth data prevents an i presented synthesis method can be used at the decoder side to synthesize new views directly from transmitted S3D content. for real-time view synthesis is presented and analysed. Fig.2 Stereo pinhole camera model illustrating the projection of a It shows a bird’s-eye view of a pin cameras are located at positions CL E Volume 2, Issue 1, January 2014 International Journal of Research in Advent Technology Available Online at: http://www.ijrat.org The transmission format has to allow an automatic and real-time conversion into any particular • The transmission format has to be well compressible to save transmission band width. nowadays and consumer 3D content production is dominated by S3D content, i.e. 2-view content which is stereoscopic displays. It is believed that S3D as aproduction format will dominate over years 1. High level view on the usual format conversions required from content Production to display Research communities and standardization bodies continue to investigate formats which are well compressible and enable efficient generation of novel views as required by MADs. Such formats can be divided into two classes: video only formats like S3D and multiview video in general, and depth enhanced formats like si view video plus depth and multiview video plus depth. The per-pixel depth information included in the depth enhanced formats provides information on the depth structure of the 3D scene. Such depth data can be used for e based rendering (DIBR) [Masayuki T, Zhao Y, and C. Zhu However, high quality view synthesis with DIBR requires high quality depth data. There exist stereo algorithms which can automatically compute depth maps from stereo images, or depth sensors which can accuracy to allow a high quality synthesis as required e.g. 
Fig.2. Stereo pinhole camera model illustrating the projection of a 3D point P into the image planes of both cameras.

It shows a bird's-eye view of a pinhole-model, parallel stereo camera setup. The projection centers of both cameras are located at positions C_L and C_R at a baseline distance of b, and both cameras have a focal length of f.
Projecting a 3D point P, which is located at a distance Z from the projection centers, into the respective projection planes gives the projected points p_L and p_R. Hence, the projected points have an image disparity of

d = (p_L − c_L) − (p_R − c_R) = f·b/Z.  (1)

Obviously, a synthesis of a new view at a position

C_new = (1 − α)·C_L + α·C_R  (2)

corresponds to having a baseline distance

b_new = α·b  (3)

with respect to the left view. Hence, the new view would have disparities

d_new = α·d,  (4)

i.e. a linear rescaling of all previous disparities d is required to synthesize the new view.

2. IMAGE-DOMAIN-WARPING

Automatic view synthesis technology which synthesizes new views directly from 2-view video is highly attractive, since such synthesis technology is compatible with any S3D distribution infrastructure to the home. Image-Domain Warping (IDW) is a view synthesis method which is able to automatically synthesize new views based on stereoscopic video input. In contrast to synthesis methods based on DIBR, which rely on dense disparity or depth maps, IDW employs only sparse disparities to synthesize a novel view. It exploits the facts that the human visual system cannot estimate absolute depth very exactly and that it is not sensitive to image distortions up to a certain level, as long as images remain visually plausible; e.g., image distortions can be hidden in non-salient regions. Sparse disparities and saliency information are used to compute an image warp which enforces the desired sparse disparities in the final synthesized image while distortions are hidden in non-salient regions [Yin Zhao and Ce Zhu, 2011].

The goal is to "warp" the pixels of the image so that they appear in the correct place for a new viewpoint. An advantage of IDW is that no geometric model of the object or environment is needed; warping can be done in time proportional to the screen size and (mostly) independent of the scene complexity. Its disadvantages are comparatively minor: the resolution is limited, and excessive warping reveals visual artifacts. In general, a stereo image pair does not contain sufficient information to completely describe an image captured from a slightly different camera position. IDW uses image saliency information to deal with this problem [Aliaga, 2010].

I. Image Warps: We define a warp as a function that deforms the parameter domain of an image,

w : [0, W] × [0, H] → R²,  (5)

where W and H are the width and height of the image, respectively. Image warps have a long history of use in computer vision and graphics. The goal of the IDW method is to compute a warp for each of the initial stereo images that can be used to produce an output image meeting predefined properties (e.g. scaled image disparities). To do this, we formulate a quadratic energy functional E(w). A warp w is then represented discretely by its values at regular grid positions,

W[p, q] := w(Δx·p, Δy·q).  (6)
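As a quick worked example of Eqs. (1)-(4), with assumed camera values (all numbers are ours, not from the paper):

```python
f, b = 1200.0, 0.065   # assumed focal length [px] and baseline [m]
Z = 3.0                # assumed depth of a scene point [m]

d = f * b / Z          # Eq. (1): d = 26.0 px in the original pair
alpha = 0.5            # virtual camera halfway between C_L and C_R
b_new = alpha * b      # Eq. (3): 0.0325 m effective baseline
d_new = alpha * d      # Eq. (4): 13.0 px, a linear rescaling of d
```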
Fig.3. Example of a warping function that deforms an input image.

II. View Synthesis: The IDW algorithm computes N-view video from 2-view video, where N > 2. It can be separated into four modules. In the Warp Calculator, only those warps are computed which are necessary for the synthesis of a view that is located in the middle between the two input views. These warps are calculated by minimizing their respective error functionals. The calculated warps are then used in the Warp Interpolator/Extrapolator module to interpolate or extrapolate the warps that are necessary to synthesize the N output views, as required by a particular multiview autostereoscopic display. Finally, in the Image-Domain Warper module, images are warped to synthesize the output images.

Fig.4. Block diagram of the view synthesizer which converts 2-view video to N-view video.

I. Data Extraction: First, a sparse set of disparity features is extracted. These sparse disparities are estimated in an automatic, accurate and robust way. Disparities of vertical image edges are particularly important for stereopsis, i.e. the perceived depth. For this reason, additional features and corresponding disparities are detected such that features lie uniformly distributed on nearly vertical image edges. Disparities of detected features are estimated using the Lucas-Kanade method. The availability of such features is also important to prevent salient synthesis errors with IDW, like bending edges in the synthesized image.

Fig.5. Disparities estimated at sparse feature positions.
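The Data Extraction step can be pictured with standard tools. Below is a minimal sketch, not the paper's implementation: it assumes rectified 8-bit grayscale inputs, uses generic corner features (cv2.goodFeaturesToTrack) instead of the edge-aligned features described above, and tracks them into the right view with OpenCV's pyramidal Lucas-Kanade tracker.

```python
import cv2
import numpy as np

def sparse_disparities(left_gray, right_gray, max_features=500):
    """Estimate sparse feature disparities (x_L, x_R): detect features in
    the left view, track them into the right view with Lucas-Kanade."""
    pts_l = cv2.goodFeaturesToTrack(left_gray, maxCorners=max_features,
                                    qualityLevel=0.01, minDistance=8)
    # For this sketch we assume at least some features were found.
    pts_r, status, _ = cv2.calcOpticalFlowPyrLK(left_gray, right_gray,
                                                pts_l, None)
    ok = status.ravel() == 1
    pts_l, pts_r = pts_l[ok].reshape(-1, 2), pts_r[ok].reshape(-1, 2)
    # Rectified views imply near-horizontal correspondences; reject
    # tracks with large vertical drift.
    horiz = np.abs(pts_l[:, 1] - pts_r[:, 1]) < 1.0
    pts_l, pts_r = pts_l[horiz], pts_r[horiz]
    d = pts_r[:, 0] - pts_l[:, 0]    # per-feature disparity x_R - x_L
    return pts_l, pts_r, d
```

The vertical-drift check exploits the rectified-setup assumption that correspondences are nearly horizontal.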
II. Warp Calculation: Two warps w_L and w_R are computed which warp the images I_L and I_R, respectively, to a camera position located in the center between the two input cameras. For a given sparse set of disparity features (x_L, x_R) ∈ F, the rescaled disparities

d_new = α·(x_R − x_L)  (7)

have to be enforced.
Each warp is computed as the result of an energy minimization problem. An energy functional E is defined, and minimizing it yields a warp w that creates the desired change of disparities after view synthesis. The energy functional is defined with the help of the extracted data and consists of three additive terms, each related to a particular type of constraint as described below. Each term is weighted with a parameter λ:

E(w) := λ_d·E_d(w) + λ_s·E_s(w) + λ_t·E_t(w).  (8)

Warp calculation thus involves disparity constraints, spatial smoothness constraints, temporal smoothness constraints, and energy minimization. This functional always represents an over-constrained equation system: the number of spatial and temporal smoothness constraints is dense in the number of degrees of freedom of the warp W to be solved, while the number of disparity constraints depends on the number of detected features, which is content dependent.
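To make the minimization concrete, here is a toy one-dimensional analogue of Eq. (8), assuming only disparity and spatial smoothness terms (the temporal term is omitted) and our own weighting convention; the real system solves a sparse 2D problem over the whole warp grid.

```python
import numpy as np

def solve_warp_1d(n_nodes, targets, lam_d=1.0, lam_s=10.0):
    """Solve an over-constrained least-squares system for 1D warp node
    positions w[0..n-1]: disparity constraints pull selected nodes to
    target positions, second-difference terms keep the warp smooth."""
    rows, rhs = [], []
    for i, t in targets.items():            # disparity constraints
        r = np.zeros(n_nodes)
        r[i] = lam_d
        rows.append(r)
        rhs.append(lam_d * t)
    for i in range(1, n_nodes - 1):         # spatial smoothness constraints
        r = np.zeros(n_nodes)
        r[i - 1], r[i], r[i + 1] = lam_s, -2.0 * lam_s, lam_s
        rows.append(r)
        rhs.append(0.0)
    w, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
    return w

# Pin the borders to the identity warp and pull the middle node 2 px to
# the right; the smoothness terms spread the deformation over neighbours.
print(solve_warp_1d(11, {0: 0.0, 10: 10.0, 5: 7.0}))
```

The toy only illustrates how disparity and smoothness terms compete under the weights λ; in the real system the unknowns are the 2D grid locations W[p, q].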
III. Warp Interpolation/Extrapolation: Multiview autostereoscopic displays require many views from different camera positions as input, yet warps are calculated only for the central camera position. The main reason for this restriction is to reduce the overall computational complexity of the warp calculation; the warps needed for the remaining output views are interpolated or extrapolated from the calculated ones. Furthermore, using this approach, the complexity of the warp computation does not depend on the number of output views required by a particular display system.

Fig.6. Cameras, camera positions, and associated warps.

IV. Image-Domain Warping: An output image I_α at a position α is synthesized based on the input image which is closer to the desired output position (a minimal sketch of this warping step is given after Fig.7). Because warps are continuous, no holes can occur in the synthesized image. In particular, open regions are implicitly inpainted by stretching existing texture from the neighborhood into the region. We noticed that this kind of implicit inpainting provides good synthesis results in practice as long as the synthesized views lie in the range −0.5 ≤ α ≤ 1.5 (Fig.6). However, if only one image is used for the synthesis, empty regions can occur on the left or right border of the output image.

3. TRANSMISSION OF WARPS AS SUPPLEMENTARY DATA

To reduce the computational complexity at the receiver side, we modify the transmission system; the modified system is shown in Fig.7. It is proposed to shift the warp extraction and warp calculation part to the sending side and, in addition to the multiview data, to efficiently compress and transmit the warp calculation result, i.e. a restricted set of warps.

Fig.7. Modified transmission and view synthesis system.
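As promised above, a sketch of the warping step itself: a coarse warp grid W[p, q] is densified to per-pixel maps and applied with OpenCV's remap. Note that cv2.remap expects a backward map (for each output pixel, the source position to sample), so the grid is assumed here to already be in that form; this is our simplification, not necessarily the paper's implementation.

```python
import cv2
import numpy as np

def warp_image(image, grid):
    """Apply a coarse warp grid of shape (Gy, Gx, 2), holding per-node
    source coordinates (x, y), to a full-resolution image."""
    h, w = image.shape[:2]
    # Densify the coarse grid to per-pixel maps by bilinear interpolation.
    map_x = cv2.resize(grid[..., 0].astype(np.float32), (w, h),
                       interpolation=cv2.INTER_LINEAR)
    map_y = cv2.resize(grid[..., 1].astype(np.float32), (w, h),
                       interpolation=cv2.INTER_LINEAR)
    # The interpolated maps are continuous, so the output contains no
    # holes; stretched texture implicitly fills disoccluded regions.
    return cv2.remap(image, map_x, map_y, cv2.INTER_LINEAR)
```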
I. Warp Coding with a Dedicated Warp Coder: Warps of all time instances and views are encoded successively. For each view, they are encoded separately and multiplexed into a single bit stream. Without loss of generality, the coding of a warp sequence assigned to one view is described in the following. We denote the warp at time instant f as w_f. Each warp w_f is represented as a regular quad grid (Fig.3) of fixed resolution, where each node of the grid is indexed with integer coordinates i, j and has a 2D location w_f[i, j] ∈ R² assigned.

Fig.8. Block diagram of the warp coder.

I. Spatial Partitioning: A partition consists of a set of 2D locations which we call a group of locations (GOL). Each warp w_f is partitioned into GOLs using a quincunx resolution pyramid.

II. Intra-Warp and Inter-Warp Prediction: Similar to video coding standards, three warp coding types and corresponding prediction modes are supported: INTRA, INTER_P and INTER_B. After prediction, the residuals w_f[i, j] − ŵ_f[i, j] are computed, uniformly quantized, and entropy coded. Quantized residuals of each GOL are entropy coded independently from other GOLs. In the INTRA prediction mode, all locations of the first GOL are scanned row-wise and predicted in a closed-loop DPCM from previously scanned, spatially neighbouring locations, as shown in Fig.9. Locations of all other GOLs are predicted by the respective centroids computed from spatially neighbouring locations in previously decoded GOLs, as indicated in Fig.9.
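A toy version of the closed-loop DPCM idea just described, simplified to prediction from the left neighbour only and a single coordinate channel (the paper predicts from several spatially neighbouring locations and codes GOLs independently); the quantizer step size is our choice.

```python
import numpy as np

def dpcm_encode_row(row, step=0.25):
    """Closed-loop DPCM over one row of warp grid locations: predict each
    location from the previously *reconstructed* neighbour and uniformly
    quantize the residual (the symbols would go to an entropy coder)."""
    recon = np.empty_like(row)
    symbols = []
    prev = 0.0                              # fixed predictor for node 0
    for i, x in enumerate(row):
        q = int(round((x - prev) / step))   # quantized residual symbol
        symbols.append(q)
        recon[i] = prev + q * step          # decoder-side reconstruction
        prev = recon[i]                     # closed loop: predict from recon
    return symbols, recon

row = np.array([0.0, 1.1, 2.05, 3.2, 4.1])  # x-coordinates of one grid row
symbols, recon = dpcm_encode_row(row)
print(symbols, np.abs(recon - row).max())    # error bounded by step / 2
```

Predicting from reconstructed rather than original values keeps encoder and decoder in sync, which is the point of the closed loop.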
II. Warp Coding with the Help of a Video Coder: To take advantage of already existing and highly sophisticated video coding technology, we propose the warp coding system shown in Fig.10 as an alternative to the dedicated warp coder.

I. Coding System: Similar to the coding with the dedicated warp coder, warps are encoded separately for each view and then multiplexed into a single bit stream. To encode a warp, first, a Warp Quantizer is used to convert each warp into an 8-bit grayscale image representation.

Fig.9. Warp coding system using a video coder.
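A plausible form of the Warp Quantizer just mentioned, assuming a simple affine mapping of one warp coordinate channel to 8 bits with the value range sent as side information; the paper does not spell out the mapping, so this is an illustrative guess.

```python
import numpy as np

def quantize_warp_to_gray(warp, lo=None, hi=None):
    """Map one warp coordinate channel into an 8-bit grayscale image so it
    can be fed to a standard video encoder; assumes hi > lo. The range
    (lo, hi) must be transmitted so the receiver can invert the mapping."""
    lo = warp.min() if lo is None else lo
    hi = warp.max() if hi is None else hi
    gray = np.clip((warp - lo) / (hi - lo), 0.0, 1.0)
    return (gray * 255.0 + 0.5).astype(np.uint8), (lo, hi)

def dequantize_gray_to_warp(gray, lo, hi):
    """Receiver-side inverse of the quantizer above."""
    return gray.astype(np.float32) / 255.0 * (hi - lo) + lo
```

After this mapping, warp sequences can be fed to any standard video encoder such as HEVC, which is exactly the motivation given above for reusing existing video coding technology.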
4. DISCUSSION

Recently, the Moving Pictures Experts Group (MPEG) issued a Call for Proposals (CfP) on 3D Video Coding technology with the goal to identify
• a 3D video format,
• a corresponding efficient compression technology,
• a view synthesis technology which enables an efficient synthesis of new views based on the 3D video format.

Fig.10. Transmission and view synthesis system.

Fig.11. Assessed quality of the MPEG proposals in the multiview autostereoscopic display test scenario.

The proposed view synthesis comprises Data Extraction, Warp Calculation, and Warp Interpolation/Extrapolation. Please note that such a 3D format with 2 views is already supported by existing consumer and professional stereo cameras. With each proposal to the CfP, compressed bit streams, a decoder, and view synthesis software had to be provided. Bit streams had to be compressed at predefined target bit rates. Proposals were evaluated by assessing the quality of the synthesized views through formal subjective testing on both stereoscopic and multiview autostereoscopic displays. Fig.11 shows the quality assessed on a multiview autostereoscopic display; qualitatively similar results were also assessed on a stereoscopic display [Sikora Thomas, 1997].

5. CONCLUSION

A view synthesis method based on Image-Domain-Warping has been presented. The approach automatically synthesizes new views from stereoscopic 3D video. It relies on an automatic estimation of sparse disparities and image saliency information, and enforces target disparities in the synthesized images using an image warping framework. Image-Domain Warping leads to high-quality synthesis results without requiring depth map estimation and transmission. While a dedicated warp coder can achieve a stronger coding efficiency, the advantage of reusing existing video coding technology for warp coding lies in the reduced development and production costs, i.e. in the reuse of available video coding chips. For this reason, and based on the evaluation results presented here, JCT-3V plans to extend the upcoming 3D-HEVC standard by warp coding based on HEVC. This will enable the transmission of
multi-view video plus warp data with an international standard, which will allow the use of IDW for view synthesis at the receiver side.

REFERENCES

[1] Aliaga D. G., "Image Warping," CS635 lecture notes, Spring 2010.
[2] Fehn C., "Depth-image-based rendering (DIBR), compression, and transmission for a new approach on 3D-TV," Proc. SPIE, vol. 5291, pp. 93–104, May 2004.
[3] Le Gall D. J., "The MPEG video compression algorithm," Signal Processing: Image Communication, vol. 4, no. 4, pp. 129–140, 1992.
[4] Muller K., Merkle P., and Wiegand T., "3-D video representation using depth maps," Proc. IEEE, vol. 99, no. 4, pp. 643–656, Apr. 2011.
[5] Masayuki T., Zhao Y., and Zhu C., 3D-TV System with Depth-Image-Based Rendering: Architecture, Techniques and Challenges, 1st ed. New York, NY, USA: Springer-Verlag, 2012.
[6] Stefanoski N., Wang O., Lang M., Greisen P., Heinzle S., and Smolic A., "Automatic view synthesis by Image-Domain-Warping," IEEE Transactions on Image Processing, vol. 22, no. 9, Sep. 2013.
[7] Smolic A., "3D video and free viewpoint video—From capture to display," Pattern Recognition, vol. 44, no. 9, pp. 1958–1968, Sep. 2011.
[8] Sikora T., "The MPEG-4 video standard verification model," IEEE Transactions on Circuits and Systems for Video Technology, vol. 7, no. 1, Feb. 1997.
[9] Zhao Y., Zhu C., Chen Z., and Yu L., "Depth no-synthesis-error model for view synthesis in 3-D video," IEEE Transactions on Image Processing, vol. 20, no. 8, Aug. 2011.
[10] Zilly F., Riechert C., Eisert P., and Kauff P., "Semantic kernels binarized—A feature descriptor for fast and robust matching," in Proc. Conference on Visual Media Production (CVMP), Nov. 2011, pp. 39–48.