A Blur Classification Approach using
Convolution Neural Network
Shamik Tiwari
Department of Virtualization, School of Computer Science
University of Petroleum and Energy Studies, Dehradun, India
shamik.tiwari@ddn.upes.ac.in
Contents
• Introduction
• Literature Review
• Convolution Neural Network
• Results
• Conclusion
Image Degradation
Images are usually prone to degradation for a number of reasons, such as:
• Noise
• Focus that is not adjusted, causing 'defocus blur'
• Camera movement during image acquisition, causing 'motion blur'
• A dirty lens
• Low resolution of the images
• Poor illumination, etc.
Image Degradation/Restoration Model
Spatial domain:   g(x, y) = h(x, y) * f(x, y) + η(x, y)
Frequency domain: G(u, v) = H(u, v) F(u, v) + N(u, v)
Here f(x, y) is the original image, h(x, y) the blurring function (point spread function), η(x, y) additive noise, and f̂(x, y) denotes the restored estimate of f(x, y).
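As a small illustration of the spatial-domain model above (this example is not part of the original slides), the following Python sketch applies a blur kernel and additive Gaussian noise to an image; the function name and parameters are illustrative only.

import numpy as np
from scipy.signal import convolve2d

def degrade(f, h, noise_sigma=0.0, rng=None):
    """Apply the degradation model g = h * f + eta (spatial domain).

    f: 2-D float array, the original image.
    h: 2-D float array, the blur kernel (point spread function).
    noise_sigma: standard deviation of the additive Gaussian noise eta.
    """
    rng = np.random.default_rng() if rng is None else rng
    g = convolve2d(f, h, mode="same", boundary="symm")
    if noise_sigma > 0:
        g = g + rng.normal(0.0, noise_sigma, size=f.shape)
    return g

# In the noise-free case the equivalent frequency-domain operation is G = H * F,
# with H the kernel zero-padded to the image size and Fourier transformed.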
Image Blur
Blurring degrades image quality by reducing the high-frequency components of an image.
A few common blur models are:
• Motion Blur
• Defocus Blur
• Gaussian Blur
• Box Blur
Image Restoration
The objective of restoration is to obtain an estimate f̂(x, y) of the original image f(x, y).
Why is image restoration important?
Image Restoration
• Non-blind Restoration
  Given: Degraded image g(x, y) and blurring function h(x, y)
  Design: A restoration scheme such that the distortion between f(x, y) and f̂(x, y) is minimized
• Blind Restoration
  Given: Degraded image g(x, y)
  Design: (i) Estimate the blurring function h(x, y)
          (ii) A restoration scheme such that the distortion between f(x, y) and f̂(x, y) is minimized
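To make the non-blind case concrete (an illustration, not taken from the slides), a minimal frequency-domain Wiener-style restoration is sketched below; the regularization constant k is a placeholder for the noise-to-signal ratio.

import numpy as np

def wiener_restore(g, h, k=0.01):
    """Non-blind restoration sketch: estimate f from g given the PSF h.

    Implements the Wiener-style filter F_hat = conj(H) / (|H|^2 + k) * G,
    where k is a small constant standing in for the noise-to-signal ratio.
    """
    H = np.fft.fft2(h, s=g.shape)   # PSF zero-padded to the image size
    G = np.fft.fft2(g)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + k) * G
    return np.real(np.fft.ifft2(F_hat))

The blind case would first estimate h(x, y), for example from the spectral pattern of g(x, y), and then apply the same restoration step.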
Blur Models
• Motion Blur
  When the scene to be recorded translates relative to the camera, the resulting uniform motion blur can be described as a point spread function that averages the image along a line of length L in the direction of motion: h(x, y) = 1/L on that line segment and 0 elsewhere.
• Defocus Blur
  Defocus blur, caused by an imaging system with a circular aperture, can be modeled as a uniform disk with radius R: h(x, y) = 1/(πR²) for x² + y² ≤ R² and 0 otherwise.
• Gaussian Blur
  The blur function is usually approximated by a Gaussian for a large range of devices such as optical cameras, microscopes, telescopes, etc.:
  h_G(x, y) = (1 / (2πσ²)) e^(−(x² + y²) / (2σ²))
• Box Blur
  Box blur is a mean filter in which each pixel of the output image is the mean of its neighbouring pixels in some specified m × n region S_uv of the input image:
  g(x, y) = (1 / (m·n)) Σ_(s,t)∈S_uv f(s, t)
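For illustration (not taken from the slides), the four blur kernels can be generated as follows; the kernel sizes and parameter values are placeholders rather than the ones used in the experiments.

import numpy as np

def motion_psf(length=9, angle_deg=0.0, size=15):
    """Uniform linear motion blur: a normalized line of given length and angle."""
    psf = np.zeros((size, size))
    c = size // 2
    theta = np.deg2rad(angle_deg)
    for t in np.linspace(-length / 2, length / 2, num=8 * length):
        x = int(round(c + t * np.cos(theta)))
        y = int(round(c + t * np.sin(theta)))
        if 0 <= x < size and 0 <= y < size:
            psf[y, x] = 1.0
    return psf / psf.sum()

def defocus_psf(radius=4, size=15):
    """Uniform disk of radius R (circular-aperture defocus model)."""
    y, x = np.mgrid[-(size // 2):size // 2 + 1, -(size // 2):size // 2 + 1]
    psf = (x ** 2 + y ** 2 <= radius ** 2).astype(float)
    return psf / psf.sum()

def gaussian_psf(sigma=2.0, size=15):
    """2-D Gaussian h_G(x, y) proportional to exp(-(x^2 + y^2) / (2 sigma^2)), normalized."""
    y, x = np.mgrid[-(size // 2):size // 2 + 1, -(size // 2):size // 2 + 1]
    psf = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

def box_psf(m=5, n=5):
    """m x n mean (box) filter."""
    return np.ones((m, n)) / (m * n)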
Blur Type Recognition in Spatial Domain?
Figure: A sharp image and its blurred versions with three types of blur
Blur Patterns in Frequency Domain
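Blur types leave distinct signatures in the frequency domain (for example, parallel dark bands for motion blur and concentric rings for defocus blur). As a brief sketch of how such spectra can be computed, though not necessarily the paper's exact preprocessing:

import numpy as np

def log_magnitude_spectrum(image):
    """Centered log-magnitude spectrum, normalized to [0, 1] so it can be
    fed to a classifier as an input image."""
    F = np.fft.fftshift(np.fft.fft2(image))
    spectrum = np.log1p(np.abs(F))
    return (spectrum - spectrum.min()) / (spectrum.max() - spectrum.min() + 1e-8)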
Literature Review
Bolan et al. [16] offered a blurred-image classification and blurred-region recognition technique; this work considers only two classes of blur.
Tiwari et al. [18] performed blur classification into motion, defocus, and combined blur using statistical texture features and a neural network classifier.
Tiwari et al. [19] performed blur classification into the same three classes using wavelet features and a neural network classifier.
Tiwari et al. [20] performed blur classification into the same three classes using ridgelet texture features and a neural network classifier.
Tiwari et al. [21] performed blur classification into the same three classes using curvelet features and a neural network classifier.
Blur Classification Framework
Image dataset
The proposed method is evaluated on the Triesch gesture database [33]. This database consists of 720 images: 10 hand gestures performed by 24 different persons against 3 different backgrounds. To create the evaluation database, every image is blurred separately with each class of blur, namely motion, defocus, box, and Gaussian blur, so the blurred image database contains 2880 images.
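Below is an illustrative sketch (not from the slides) of how the 720 sharp gesture images could be expanded into the 2880-image blurred database; it reuses the PSF helpers sketched earlier, and the blur parameters are placeholders rather than the values used in the paper.

from scipy.signal import convolve2d

BLURS = {
    "motion":   motion_psf(length=9, angle_deg=45.0),
    "defocus":  defocus_psf(radius=4),
    "gaussian": gaussian_psf(sigma=2.0),
    "box":      box_psf(5, 5),
}

def build_blurred_dataset(sharp_images):
    """Return (blurred_image, label) pairs: four blur classes per sharp image."""
    samples = []
    for img in sharp_images:
        for label, psf in BLURS.items():
            blurred = convolve2d(img, psf, mode="same", boundary="symm")
            samples.append((blurred, label))
    return samples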
Experiment 1: Multilayer Perceptron for Blur
Classification
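The slides do not give the MLP's exact architecture or training settings; the sketch below shows a plausible minimal Keras setup for 4-way blur classification on flattened 64x64 log-spectrum images. The layer sizes, input resolution, and optimizer are assumptions.

import tensorflow as tf

# Minimal MLP sketch: flattened log-magnitude spectrum -> 4 blur classes.
mlp = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64 * 64,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),  # motion / defocus / box / Gaussian
])
mlp.compile(optimizer="adam",
            loss="sparse_categorical_crossentropy",
            metrics=["accuracy"])
# mlp.fit(x_train, y_train, epochs=30, validation_split=0.2)  # x_train: spectra, y_train: integer labels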
Results
Experiment 2: Convolution Neural Network for
Blur Classification
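Again, the exact CNN architecture is not specified on the slides; the following is a minimal Keras sketch taking the 2-D log-magnitude spectrum as a single-channel image. Filter counts, kernel sizes, and the 64x64 input resolution are assumptions.

import tensorflow as tf

# Minimal CNN sketch: 64x64x1 spectrum image -> 4 blur classes.
cnn = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),
    tf.keras.layers.Conv2D(16, (3, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),
])
cnn.compile(optimizer="adam",
            loss="sparse_categorical_crossentropy",
            metrics=["accuracy"])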
Result Analysis
The average accuracies achieved by the MLP and CNN models are 93.0% and 97.0%, respectively. The synthesized database is also tested with curvelet-transform-based energy features and a feed-forward neural network classification model (Tiwari et al., 2014); the average accuracy of this model is 95.7%.
Conclusion and Future scope
This paper has presented two different classification models, namely a multi-layer perceptron and a convolution neural network, to classify blur into one of four categories: motion, defocus, box, and Gaussian blur. The frequency spectra of blurred images are used as inputs to these models, since blur features are more easily noticeable in the frequency domain. From the experimental results, it is evident that the convolution neural network is the more suitable classification model. This blur classification work can be extended by introducing more finely tuned models to improve accuracy in both the presence and absence of noise.
References
[1] Mitra, S., & Acharya, T. (2007). Gesture recognition: A survey. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 37(3), 311-324.
[2] Rautaray, S. S., & Agrawal, A. (2015). Vision based hand gesture recognition for human computer interaction: a survey. Artificial Intelligence Review, 43(1), 1-54.
[3] Nicholas, C. G., Marti, L. M., van der Merwe, R., & Kassebaum, J. (2017). U.S. Patent No. 9,679,414. Washington, DC: U.S. Patent and Trademark Office.
[4] Eisaku, O., Hiroshi, H., Lim, A. H. (2004). Barcode readers using the camera device in mobile phones. In Proc. of Internet. Conf. on Cyber worlds, pp. 260–265.
[5] Thielemann, J. T., Schumann-Olsen, H., Schulerud, H., and Kirkhus, T. (2004). Handheld PC with camera used for reading information dense barcodes. In Proc. IEEE Int. Conf. on
Computer Vision and Pattern Recognition, Demonstration Program, Washington, DC, pp. 102-112.
[6] Joseph, E., and Pavlidis, T. (1994). Bar code waveform recognition using peak locations. Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 16(6), pp. 630-640.
[7] Selim, E. (2004). Blind deconvolution of barcode signals. Inverse Prob., vol. 20(1), pp. 121– 135.
[8] Tong, H., Li, M., Zhang, H., and Zhang, C. (2004). Blur detection for digital images using wavelet transform. In proceedings of IEEE international conference on Multimedia and Expo,
vol. 1, pp. 17-20.
[9] Yang, Q., Yi, X., and Yang, X. (2013). No-reference image blur assessment based on gradient profile sharpness. IEEE International Symposium
on Broadband Multimedia Systems and Broadcasting (BMSB), pp.1-4.
[10] Rugna, J. D., and Konik, H. (2006). Blur identification in image processing. International Joint Conference on Neural Networks (IJCNN '06.),
pp. 2536-2541.
[11] Crete, F., Dolmiere, T., Ladret, P., and Nicolas, M. (2007). The blur effect: Perception and estimation with a new no-reference perceptual
blur metric. In SPIE Human Vision & Electronic Imaging, vol. 6492, pp. 1-11.
[12] Chi, Z. (2008). An unsupervised approach to determination of main subject regions in images with low depth of field. IEEE 10th Workshop
on Multimedia Signal Processing, pp. 650-653.
[13] Chong, R.M., and Tanaka, T. (2008). Image extrema analysis and blur detection with identification. IEEE International Conference on Signal
Image Technology and Internet Based Systems (SITIS '08), pp. 320-326.
[14] Liu, R., Li, Z., and Jia, J. (2008). Image partial blur detection and classification. In Proc. CVPR, pp. 23–28.
[15] Aizenberg, I., Paliy, D. V., Zurada, J. M., and Astola, J. T. (2008). Blur identification by multilayer neural network based on multivalued
neurons. IEEE Transactions on Neural Networks, vol. 19(5), pp.883-898.
[16] Bolan S., Lu, S., and Tan, C. (2011). Blurred image region detection and classification. In Proc. ACM Multimedia, pp.1397–1400.
[17] Yan, R., and Shao, L. (2013). Image Blur classification and parameter identification using two-stage deep belief
networks. British Machine Vision Conference (BMVC), Bristol, UK, pp. 1-11.
[18] Tiwari, S., Shukla, V. P., Biradar, S., & Singh, A. (2013). Texture features based blur classification in barcode
images. International Journal of Information Engineering and Electronic Business, 5(5), 34.
[19] Tiwari, S., Shukla, V. P., Biradar, S. R., & Singh, A. K. (2014). Blur Classification Using Wavelet Transform and Feed
Forward Neural Network. International Journal of Modern Education and Computer Science, 6(4), 16.
[20] Tiwari, S. (2017). A Pattern Classification Based approach for Blur Classification. Indonesian Journal of Electrical
Engineering and Informatics (IJEEI), 5(2).
[21] Tiwari, S., Shukla, V. P., Biradar, S. R., & Singh, A. K. (2014). Blur classification using ridgelet transform and feed
forward neural network. International Journal of Image, Graphics and Signal Processing, 6(9), 47.
[22] Pan, H., Feng, X. F., & Daly, S. (2005, September). LCD motion blur modeling and analysis. In Image Processing,
2005. ICIP 2005. IEEE International Conference on (Vol. 2, pp. II-21). IEEE.
[23] Dobes, M., Machala, L., and Fürst, T. (2010). Blurred image restoration: A fast method of finding the motion
length and angle. Digital Signal Processing, vol. 20(6), pp. 1677–1686.
[24] Sakano, M., Suetake, N., and Uchino, E. (2007). A robust point spread function estimation for out-of-focus
blurred and noisy images based on a distribution of gradient vectors on the polar plane. Journal of Optical Society of
Japan, vol. 14(5), pp. 297-303.
[25] Tiwari, S., Shukla, V. P., Biradar, S. R., & Singh, A. K. (2014). Blur parameters identification for simultaneous defocus and motion blur. CSI transactions on
ICT, 2(1), 11-22.
[26] Gardner, M. W., & Dorling, S. R. (1998). Artificial neural networks (the multilayer perceptron)—a review of applications in the atmospheric sciences.
Atmospheric environment, 32(14-15), 2627-2636.
[27] Tang, J., Deng, C., & Huang, G. B. (2016). Extreme learning machine for multilayer perceptron. IEEE transactions on neural networks and learning systems,
27(4), 809-821.
[28] Chaudhuri, B. B., & Bhattacharya, U. (2000). Efficient training and improved performance of multilayer perceptron in pattern classification.
Neurocomputing, 34(1-4), 11-27.
[29] Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. In Advances in neural information
processing systems (pp. 1097-1105).
[30] Sahiner, B., Chan, H. P., Petrick, N., Wei, D., Helvie, M. A., Adler, D. D., & Goodsitt, M. M. (1996). Classification of mass and normal breast tissue: a
convolution neural network classifier with spatial domain and texture images. IEEE transactions on Medical Imaging, 15(5), 598-610.
[31] Howard, A. G. (2013). Some improvements on deep convolutional neural network based image classification. arXiv preprint arXiv:1312.5402.
[32] Oquab, M., Bottou, L., Laptev, I., & Sivic, J. (2014). Learning and transferring mid-level image representations using convolutional neural
networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1717-1724).
[33] Triesch, J., & Von Der Malsburg, C. (1996). Robust classification of hand postures against complex backgrounds. In Automatic Face and
Gesture Recognition, Proceedings of the Second International Conference on (pp. 170-175). IEEE.
[34] Billsus, D., & Pazzani, M. J. (1998, July). Learning Collaborative Information Filters. In Icml (Vol. 98, pp. 46-54).
[35] Fawcett, T. (2006). An introduction to ROC analysis. Pattern recognition letters, 27(8), 861-874.
[36] Ferri, C., Hernández-Orallo, J., & Salido, M. A. (2003, September). Volume under the ROC surface for multi-class problems. In European
Conference on Machine Learning (pp. 108-120). Springer, Berlin, Heidelberg.
Thanks