TO GET THIS PROJECT COMPLETE SOURCE ON SUPPORT WITH EXECUTION PLEASE CALL BELOW CONTACT DETAILS
MOBILE: 9791938249, 0413-2211159, WEB: WWW.NEXGENPROJECT.COM,WWW.FINALYEAR-IEEEPROJECTS.COM, EMAIL:Praveen@nexgenproject.com
NEXGEN TECHNOLOGY provides total software solutions to its customers. It works closely with customers to identify their business processes for computerization and helps them implement state-of-the-art solutions. By identifying and enhancing these processes through information technology, NEXGEN TECHNOLOGY helps its customers use their resources optimally.
A DCT-BASED TOTAL JND PROFILE FOR SPATIO-TEMPORAL AND FOVEATED MASKING EFFECTS
ABSTRACT
In image and video processing fields, DCT-based just noticeable difference
(JND) profiles have effectively been utilized to remove perceptual
redundancies in pictures for compression. In this paper, we solve two
problems that are often intrinsic to the conventional DCT-based JND profiles:
(i) no foveated masking (FM) JND model has been incorporated in modeling
the DCT-based JND profiles; and (ii) the conventional temporal masking (TM)
JND models assume that all moving objects in frames can be well tracked by
the eyes and that they are projected on the fovea regions of the eyes, which is
not a realistic assumption and may result in poor estimation of JND values for
untracked moving objects (or image regions). To solve these two problems, we
first propose a generalized JND model for joint effects between TM and FM
effects. With this model, called the temporal-foveated masking (TFM) JND
model, JND thresholds for any tracked/untracked and moving/still image
regions can be elaborately estimated. Finally, the TFM-JND model is
incorporated into a total DCT-based JND profile with a spatial contrast
sensitivity function, luminance masking, and contrast masking JND models. In
addition, we propose a JND adjustment method for our total JND profile to
avoid overestimation of JND values for image blocks of fixed sizes with various
image characteristics. To validate the effectiveness of the total JND profile, an
experiment involving a subjective distortionvisibility assessment has been
2. CONTACT: PRAVEEN KUMAR. L (, +91 – 9791938249)
MAIL ID: sunsid1989@gmail.com, praveen@nexgenproject.com
Web: www.nexgenproject.com, www.finalyear-ieeeprojects.com
conducted. The experiment results show that the proposed total DCT-based
JND profile yields significant performance improvement with much higher
capability of distortion concealment (average 5.6 dB lower PSNR) compared to
state-of-the-art JND profiles. The MATLAB source code of the proposed total
DCT-based JND profile is publicly available online at
https://sites.google.com/site/sunghobaecv/jnd
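As a rough illustration of how a DCT-based total JND profile of the kind described above is typically assembled, the sketch below multiplies a base spatial-CSF threshold per DCT coefficient by luminance-masking, contrast-masking, and temporal-foveated masking elevation factors. All values and factor names here are hypothetical placeholders, not the paper's actual model (which is in the MATLAB code linked above).

```python
import numpy as np

def total_jnd_block(csf_base, lum_factor, cm_factor, tfm_factor):
    """Per-coefficient JND thresholds for one 8x8 DCT block,
    combined multiplicatively as is common in DCT-based JND profiles."""
    return csf_base * lum_factor * cm_factor * tfm_factor

# Illustrative 8x8 base thresholds: low-frequency coefficients are more
# visible to the eye, so their thresholds are smaller.
u, v = np.meshgrid(np.arange(8), np.arange(8))
csf_base = 1.0 + 0.5 * (u + v)             # grows with spatial frequency
lum_factor = 1.2                           # bright/dark blocks mask more
cm_factor = np.where(u + v > 0, 1.5, 1.0)  # AC terms masked by texture
tfm_factor = 2.0                           # untracked motion raises JND

jnd = total_jnd_block(csf_base, lum_factor, cm_factor, tfm_factor)

# Distortion injected at or below these thresholds (random sign, magnitude
# equal to the threshold) should remain invisible at the target detection
# probability.
rng = np.random.default_rng(0)
noise = rng.choice([-1.0, 1.0], size=(8, 8)) * jnd
```

The multiplicative combination reflects the usual assumption that the masking effects elevate the base visibility threshold independently; the paper's contribution is precisely a joint (non-independent) TFM factor.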
CONCLUSION
In this paper, we proposed a novel DCT-based total JND profile with a new TFM-
JND model and a JND adjustment for perceptually inhomogeneous blocks. The
proposed TFM-JND model shows remarkable performance in the image and video
domains, tolerating much higher distortions that remain perceptually invisible while satisfying
a standard level of JND for a predefined detection probability (DP = 0.5). Also,
the proposed JND adjustment method for perceptually inhomogeneous blocks
solves both overestimation and underestimation problems of JND for the
proposed total JND profile. The proposed novel DCT-based total JND profile is
compared with state-of-the-art JND profiles and gives superior results in
objective and subjective tests, yielding, on average, 5.6 dB lower PSNR with
very small DMOS values close to zero. As future work, we plan to apply our
DCT-based total JND profile for HEVC (High Efficiency Video Coding)-based
perceptual video coding (PVC) where perceptual redundancy is effectively
removed, thus improving the coding efficiency of HEVC-based PVC encoders.
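The 5.6 dB PSNR margin quoted above is computed with the standard peak signal-to-noise ratio definition; a minimal sketch follows, assuming 8-bit imagery (peak value 255). Lower PSNR with near-zero DMOS means more distortion was injected without becoming visible.

```python
import numpy as np

def psnr(ref, dist, peak=255.0):
    """Peak signal-to-noise ratio in dB (standard definition, 8-bit peak)."""
    mse = np.mean((np.asarray(ref, dtype=np.float64)
                   - np.asarray(dist, dtype=np.float64)) ** 2)
    if mse == 0:
        return float("inf")       # identical images
    return 10.0 * np.log10(peak * peak / mse)

# A JND-guided codec can lower PSNR (inject more distortion) while staying
# perceptually transparent; e.g. a uniform error of 16 gray levels gives
# roughly 24 dB.
example = psnr(np.zeros((8, 8)), np.full((8, 8), 16.0))
```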