Recent articles published in
Signal & Image Processing: An International Journal (SIPIJ)
ISSN: 0976-710X (Online); 2229-3922 (Print)
http://www.airccse.org/journal/sipij/index.html
FACIAL EXPRESSION DETECTION FOR VIDEO SEQUENCES
USING LOCAL FEATURE EXTRACTION ALGORITHMS
Kennedy Chengeta and Serestina Viriri, University of KwaZulu-Natal, Westville Campus, South Africa
ABSTRACT
Facial expression image analysis can take the form of either static image analysis or dynamic
temporal 3D image or video analysis. The former involves static images taken of an individual at a
specific point in time, in 2-dimensional format. The latter involves dynamic texture extraction
from video sequences extended in the temporal domain. Dynamic texture analysis covers short-term
facial expression movements in 3D in a temporal or spatial domain. Two classes of feature extraction
algorithms are used in 3D facial expression analysis, namely holistic and local algorithms. Holistic
algorithms analyze the whole face, whilst local algorithms analyze a facial image in small components,
namely the nose, mouth, cheek and forehead. The paper uses a popular local feature extraction
algorithm called LBP-TOP, which derives dynamic image features from video sequences in the temporal
domain. Volume Local Binary Patterns combine texture, motion and appearance. VLBP and LBP-TOP
outperformed other approaches by including local facial feature extraction algorithms, which are
robust to gray-scale modifications and computationally simple. It is also crucial to note that, these
emotions being natural reactions, feature selection and edge detection on the video sequences can
increase accuracy and reduce the error rate. This can be achieved by removing unimportant information
from the facial images. The results showed a better percentage recognition rate with local facial
extraction algorithms like local binary patterns and local directional patterns than with holistic
algorithms like GLCM and Linear Discriminant Analysis. The study proposes the local binary pattern
variant LBP-TOP, local directional patterns and support vector machines aided by genetic algorithms
for feature selection. The study was based on the Facial Expressions and Emotions (FEED) and CK+
image sources.
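To make the descriptor concrete, below is a minimal Python sketch of LBP-TOP. It is an illustration
only, under simplifying assumptions: radius 1 with 8 neighbours per plane, and histograms computed
from the central XY, XT and YT slices of the volume rather than averaged over all slices as in the
full descriptor.

```python
import numpy as np

def lbp_image(img):
    """Basic 3x3 LBP codes for a 2-D array (borders excluded)."""
    c = img[1:-1, 1:-1]
    codes = np.zeros_like(c, dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]    # 8 neighbours, clockwise
    for bit, (dy, dx) in enumerate(offsets):
        n = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= ((n >= c).astype(np.uint8) << bit)  # threshold against centre
    return codes

def lbp_top(volume, bins=256):
    """Concatenate LBP histograms from the XY, XT and YT planes."""
    t, h, w = volume.shape
    planes = [volume[t // 2],        # XY plane: appearance
              volume[:, h // 2, :],  # XT plane: horizontal motion
              volume[:, :, w // 2]]  # YT plane: vertical motion
    hists = [np.histogram(lbp_image(p), bins=bins, range=(0, bins))[0]
             for p in planes]
    return np.concatenate(hists).astype(float)

# Toy usage: a random 30-frame clip standing in for a face video sequence.
clip = np.random.randint(0, 256, (30, 64, 64), dtype=np.uint8)
feature = lbp_top(clip)   # 3 x 256 = 768-dimensional descriptor
```

The concatenated histogram is the kind of feature vector a classifier such as an SVM would consume.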
KEYWORDS
Local Binary Patterns on Three Orthogonal Planes (LBP-TOP) · Volume Local Binary Patterns (VLBP)
Full Text : https://aircconline.com/sipij/V10N1/10119sipij03.pdf
Signal & Image Processing: An International Journal (SIPIJ)
http://www.airccse.org/journal/sipij/vol10.html
REFERENCES
1. Y. Wang, J. See, R. C.-W. Phan, Y.-H. Oh, LBP with six intersection points: Reducing redundant
information in LBP-TOP for micro-expression recognition, in: Computer Vision — ACCV 2014,
Springer, Singapore, 2014, pp. 525–537.
2. Y. Wang, J. See, R. C.-W. Phan, Y.-H. Oh, Efficient spatio-temporal local binary patterns for
spontaneous facial micro-expression recognition, PloS One 10 (5) (2015).
3. M. S. Aung, S. Kaltwang, B. Romera-Paredes, B. Martinez, A. Singh, M. Cella, M. Valstar,
H. Meng, A. Kemp, M. Shafizadeh, et al.: “The automatic detection of chronic pain-related
expression: requirements, challenges and a multimodal dataset,” Transactions on Affective
Computing, 2015.
4. P. Pavithra and A. B. Ganesh: “Detection of human facial behavioral expression using
image processing,”
5. K. Nurzynska and B. Smolka, “Smiling and neutral facial display recognition with the
local binary patterns operator,” Journal of Medical Imaging and Health Informatics, vol. 5,
no. 6, pp. 1374–1382, 2015.
6. Rupali S Chavan et al, International Journal of Computer Science and Mobile Computing
Vol.2 Issue. 6, June- 2013, pg. 233-238
7. P. Lemaire, B. Ben Amor, M. Ardabilian, L. Chen, and M. Daoudi, “Fully automatic facial
expression recognition using a region-based approach,” in Proceedings of
the 2011 Joint ACM Workshop on Human Gesture and Behavior Understanding, J-HGBU
’11, (New York, NY, USA), pp. 53–58, ACM, 2011.
8. C. Padgett and G. W. Cottrell, “Representing face images for emotion classification,”
Advances in Neural Information Processing Systems, pp. 894–900, 1997.
9. P. Viola and M. J. Jones: “Robust real-time face detection,” Int. J. Comput. Vision,vol. 57,
pp. 137–154, May 2004.
10. Yandan Wang , John See, Raphael C.-W. Phan, Yee-Hui Oh, Spatio-Temporal Local
Binary Patterns for Spontaneous Facial Micro-Expression Recognition, May
19, 2015, https://doi.org/10.1371/journal.pone.0124674
11. A. Sanin, C. Sanderson, M. T. Harandi, and B. C. Lovell, “Spatio-temporal covariance
descriptors for action and gesture recognition,” in Proc. IEEE Workshop
on Applications of Computer Vision (Clearwater, 2013), pp. 103–110.
12. K. Chengeta and S. Viriri, ”A survey on facial recognition based on local directional and
local binary patterns,” 2018 Conference on Information Communications
Technology and Society (ICTAS), Durban, 2018, pp. 1-6.
13. S. Jain, C. Hu, and J. K. Aggarwal, “Facial expression recognition with temporal
modeling of shapes,” in Proc. IEEE Int. Computer Vision Workshops (ICCV Workshops)
(Barcelona, 2011), pp. 1642–1649.
14. X. Huang, G. Zhao, M. Pietikainen, and W. Zheng, “Dynamic facial expression
recognition using boosted component-based spatiotemporal features and multiclassifier
fusion,” in Advanced Concepts for Intelligent Vision Systems (Springer, 2010), pp. 312–322.
15. R. Mattivi and L. Shao, “Human action recognition using LBP-TOP as sparse spatio-
temporal feature descriptor,” in Computer Analysis of Images and Patterns
(Springer, 2009), pp. 740–747.
16. A. S. Spizhevoy, Robust dynamic facial expressions recognition using Lbp-Top
descriptors and Bag-of-Words classification model
17. B. Jiang, M. Valstar, B. Martinez, M. Pantic, ”A dynamic appearance descriptor approach
to facial actions temporal modelling”, IEEE Transaction on Cybernetics,
vol. 44, no. 2, pp. 161-174, 2014.
18. Y. Wang, Hui Yu, B. Stevens and Honghai Liu, ”Dynamic facial expression recognition
using local patch and LBP-TOP,” 2015 8th International Conference on Human System
Interaction (HSI), Warsaw, 2015, pp. 362-367. doi: 10.1109/HSI.2015.7170694
19. Aggarwal, Charu C., Data Mining Concepts, ISBN 978-3-319-14141-1, 2015, XXIX, 734
p. 180 illus., 173 illus. in color.
20. Pietikäinen M, Hadid A, Zhao G, Ahonen T (2011) Computer vision using local binary
patterns. Springer, New York. https://doi.org/10.1007/978-0-85729-748-8
21. Ravi Kumar Y B and C. N. Ravi Kumar, ”Local binary pattern: An improved LBP to
extract nonuniform LBP patterns with Gabor filter to increase the rate of
face similarity,” 2016 Second International Conference on Cognitive Computing and
Information Processing (CCIP), Mysore, 2016, pp. 1-5.
22. Arana-Daniel N, Gallegos AA, López-Franco C, Alanís AY, Morales J, López-Franco
A. Support Vector Machines Trained with Evolutionary Algorithms Employing Kernel
Adatron for Large Scale Classification of Protein Structures. Evol Bioinform Online.
2016;12:285-302. Published 2016 Dec 4. doi:10.4137/EBO.S40912
23. K. Chengeta and S. Viriri, "A survey on facial recognition based on local directional and
local binary patterns," 2018 Conference on Information Communications Technology and
Society (ICTAS), Durban, 2018, pp. 1-6. doi: 10.1109/ICTAS.2018.8368757
CHARACTERIZING HUMAN BEHAVIOURS USING STATISTICAL
MOTION DESCRIPTOR
Eissa Jaber Alreshidi, University of Hail, Saudi Arabia and Mohammad Bilal, Comsats University, Pakistan
ABSTRACT
Identifying human behaviors is a challenging research problem due to the complexity and
variation of appearances and postures, the variation of camera settings, and view angles. In
this paper, we address the problem of human behavior identification by introducing a
novel motion descriptor based on statistical features. The method first divides the video into N
temporal segments. For each segment, we compute dense optical flow, which provides
instantaneous velocity information for all the pixels. We then compute a Histogram of
Optical Flow (HOOF) weighted by the flow norm and quantized into 32 bins, and compute
statistical features from the obtained HOOF, forming a 192-dimensional descriptor vector.
We then train a non-linear multi-class SVM that classifies different human behaviors with an
accuracy of 72.1%. We evaluate our method on a publicly available human action dataset.
Experimental results show that our proposed method outperforms state-of-the-art
methods.
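A hedged sketch of the per-segment descriptor follows. The 32-bin magnitude-weighted HOOF matches
the description above; the particular six statistics (mean, SD, min, max, median, range) giving
6 × 32 = 192 dimensions are an assumption for illustration, not necessarily the authors' exact
choice, and in practice the flow fields would come from a dense optical flow routine such as
OpenCV's calcOpticalFlowFarneback.

```python
import numpy as np

def hoof(u, v, bins=32):
    """Histogram of optical flow orientations, weighted by the flow norm."""
    ang = np.arctan2(v, u)          # orientation in [-pi, pi]
    mag = np.hypot(u, v)            # per-pixel flow magnitude (the weight)
    hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi), weights=mag)
    total = hist.sum()
    return hist / total if total > 0 else hist

def segment_descriptor(flows, bins=32):
    """Stack statistics of the per-frame HOOFs of one temporal segment."""
    h = np.array([hoof(u, v, bins) for u, v in flows])      # (frames, 32)
    stats = [h.mean(0), h.std(0), h.min(0), h.max(0),
             np.median(h, 0), h.max(0) - h.min(0)]          # 6 stats -> 192-D
    return np.concatenate(stats)

# Toy usage: 10 frames of random flow standing in for one video segment.
flows = [(np.random.randn(120, 160), np.random.randn(120, 160))
         for _ in range(10)]
x = segment_descriptor(flows)       # 192-dimensional descriptor
```

One such descriptor per segment would then be fed to a non-linear multi-class SVM (for example
scikit-learn's SVC with an RBF kernel).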
KEYWORDS
Support vector machine, motion descriptor, features, human behaviours
Full Text : https://aircconline.com/sipij/V10N1/10119sipij02.pdf
Signal & Image Processing: An International Journal (SIPIJ)
http://www.airccse.org/journal/sipij/vol10.html
REFERENCES
[1] Wang, Limin, Yuanjun Xiong, Zhe Wang, Yu Qiao, Dahua Lin, Xiaoou Tang, and Luc Van Gool.
"Temporal segment networks: Towards good practices for deep action recognition." In European
Conference on Computer Vision, pp. 20-36. Springer, Cham, 2016.
[2] Feichtenhofer, Christoph, Axel Pinz, and Richard P. Wildes. "Spatiotemporal multiplier networks
for video action recognition." In 2017 IEEE Conference on Computer Vision and Pattern Recognition
(CVPR), pp. 7445-7454. IEEE, 2017.
[3] Kong, Yu, Shangqian Gao, Bin Sun, and Yun Fu. "Action Prediction From Videos via
Memorizing Hard-to-Predict Samples." In AAAI. 2018.
[4] Ma, Shugao, Leonid Sigal, and Stan Sclaroff. "Learning activity progression in lstms for activity
detection and early detection." In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, pp. 1942-1950. 2016.
[5] Hu, Weiming, Dan Xie, Zhouyu Fu, Wenrong Zeng, and Steve Maybank. "Semantic-based
surveillance video retrieval." IEEE Transactions on image processing 16, no. 4 (2007): 1168-1181
[6] Ben-Arie, Jezekiel, Zhiqian Wang, Purvin Pandit, and Shyamsundar Rajaram. "Human activity
recognition using multidimensional indexing." IEEE Transactions on Pattern Analysis & Machine
Intelligence 8 (2002): 1091-1104.
[7] Saqib, Muhammad, Sultan Daud Khan, and Michael Blumenstein. "Texture-based feature mining
for crowd density estimation: A study." In Image and Vision Computing New Zealand (IVCNZ),
2016 International Conference on, pp. 1-6. IEEE, 2016.
[8] Cutler, Ross, and Larry S. Davis. "Robust real-time periodic motion detection, analysis, and
applications." IEEE Transactions on Pattern Analysis and Machine Intelligence 22, no. 8 (2000): 781-
796.
[9] Efros, Alexei A., Alexander C. Berg, Greg Mori, and Jitendra Malik. "Recognizing action at a
distance." In Proceedings of the IEEE International Conference on Computer Vision, p. 726. IEEE, 2003.
[10] Fathi, Alireza, and Greg Mori. "Action recognition by learning mid-level motion features." In
Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on, pp. 1-8. IEEE,
2008.
[11] Chaudhry, Rizwan, Avinash Ravichandran, Gregory Hager, and René Vidal. "Histograms of
oriented optical flow and binet-cauchy kernels on nonlinear dynamical systems for the recognition of
human actions." In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference
on, pp. 1932-1939. IEEE, 2009.
[12] Ullah, H., Altamimi, A. B., Uzair, M., & Ullah, M. (2018). Anomalous entities detection and
localization in pedestrian flows. Neurocomputing, 290, 74-86.
[13] Khan, Wilayat, Habib Ullah, Aakash Ahmad, Khalid Sultan, Abdullah J. Alzahrani, Sultan Daud
Khan, Mohammad Alhumaid, and Sultan Abdulaziz. "CrashSafe: a formal model for proving
crashsafety of Android applications." Human-centric Computing and Information Sciences 8, no. 1
(2018): 21.
[14] Ullah, H., Ullah, M., & Uzair, M. (2018). A hybrid social influence model for pedestrian motion
segmentation. Neural Computing and Applications, 1-17.
[15] Ahmad, F., Khan, A., Islam, I. U., Uzair, M., & Ullah, H. (2017). Illumination normalization
using independent component analysis and filtering. The Imaging Science Journal, 65(5), 308-313
[16] Ullah, H., Uzair, M., Ullah, M., Khan, A., Ahmad, A., & Khan, W. (2017). Density independent
hydrodynamics model for crowd coherency detection. Neurocomputing, 242, 28-39.
[17] Khan, Sultan Daud, Muhammad Tayyab, Muhammad Khurram Amin, Akram Nour, Anas
Basalamah, Saleh Basalamah, and Sohaib Ahmad Khan. "Towards a Crowd Analytic Framework For
Crowd Management in Majid-al-Haram." arXiv preprint arXiv:1709.05952 (2017).
[18] Saqib, Muhammad, Sultan Daud Khan, Nabin Sharma, and Michael Blumenstein. "Extracting
descriptive motion information from crowd scenes." In 2017 International Conference on Image and
Vision Computing New Zealand (IVCNZ), pp. 1-6. IEEE, 2017.
[19] Ullah, M., Ullah, H., Conci, N., & De Natale, F. G. (2016, September). Crowd behavior
identification. In Image Processing (ICIP), 2016 IEEE International Conference on (pp. 1195-1199).
IEEE.
[20] Khan, S. "Automatic Detection and Computer Vision Analysis of Flow Dynamics and
Social Groups in Pedestrian Crowds." (2016).
[21] Arif, Muhammad, Sultan Daud, and Saleh Basalamah. "Counting of people in the extremely
dense crowd using genetic algorithm and blobs counting." IAES International Journal of Artificial
Intelligence 2, no. 2 (2013): 51.
[22] Ullah, H., Ullah, M., Afridi, H., Conci, N., & De Natale, F. G. (2015, September). Traffic
accident detection through a hydrodynamic lens. In Image Processing (ICIP), 2015 IEEE International
Conference on (pp. 2470-2474). IEEE.
[23] Ullah, H. (2015). Crowd Motion Analysis: Segmentation, Anomaly Detection, and Behavior
Classification (Doctoral dissertation, University of Trento).
[24] Khan, Sultan D., Stefania Bandini, Saleh Basalamah, and Giuseppe Vizzari. "Analyzing crowd
behavior in naturalistic conditions: Identifying sources and sinks and characterizing main flows."
Neurocomputing 177 (2016): 543-563.
[25] Shimura, Kenichiro, Sultan Daud Khan, Stefania Bandini, and Katsuhiro Nishinari. "Simulation
and Evaluation of Spiral Movement of Pedestrians: Towards the Tawaf Simulator." Journal of
Cellular Automata 11, no. 4 (2016).
[26] Khan, Sultan Daud, Giuseppe Vizzari, and Stefania Bandini. "A Computer Vision Tool Set for
Innovative Elder Pedestrians Aware Crowd Management Support Systems." In AI* AAL@ AI* IA,
pp. 75-91. 2016.
Compression Algorithm Selection for Multispectral Mastcam Images
Chiman Kwan, Jude Larkin, Bence Budavari, and Bryan Chou,
Applied Research, LLC, USA
ABSTRACT:
The two mast cameras (Mastcam) onboard the Mars rover, Curiosity, are multispectral imagers with
nine bands in each camera. Currently, the images are compressed losslessly using JPEG, which can
achieve only two to three times compression. We present a two-step approach to compressing
multispectral Mastcam images. First, we propose to apply principal component analysis (PCA) to
compress the nine bands into three or six bands. This step optimally compresses the 9-band images
through spectral correlation between the bands. Second, several well-known image compression
codecs, such as JPEG, JPEG-2000 (J2K), X264, and X265, in the literature are applied to compress
the 3-band or 6-band images coming out of PCA. The performance of different algorithms was
assessed using four well-known performance metrics. Extensive experiments using actual Mastcam
images have been performed to demonstrate the proposed framework. We observed that perceptually
lossless compression can be achieved at a 10:1 compression ratio. In particular, at a 10:1
compression ratio, the combination of PCA and X265 in our proposed approach gains at least 5 dB in
peak signal-to-noise ratio (PSNR) over JPEG.
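A minimal Python sketch of the spectral PCA step follows, assuming a 9-band cube stored as a float
array of shape (H, W, 9); a standard codec (JPEG, J2K, X264 or X265) would then be applied to the
principal-component bands, and the inverse transform applied after decoding.

```python
import numpy as np

def spectral_pca(cube, n_components=3):
    """Project a (H, W, B) cube onto its top spectral principal components."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    comps = Vt[:n_components]                  # (n_components, B) spectral basis
    scores = (X - mean) @ comps.T              # per-pixel PC scores
    return scores.reshape(h, w, n_components), comps, mean

def spectral_inverse(scores, comps, mean):
    """Decoder-side reconstruction of the B-band cube from the PC bands."""
    h, w, k = scores.shape
    X = scores.reshape(-1, k) @ comps + mean
    return X.reshape(h, w, comps.shape[1])

cube = np.random.rand(64, 64, 9)               # stand-in for a Mastcam cube
low, comps, mean = spectral_pca(cube, 3)       # 9 bands -> 3 PC bands
recon = spectral_inverse(low, comps, mean)     # approximate 9-band cube
```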
KEYWORDS:
Perceptually lossless compression; Mastcam images; multispectral images; JPEG; JPEG-2000; X264;
X265
Full Text: https://aircconline.com/sipij/V10N1/10119sipij01.pdf
Signal & Image Processing: An International Journal (SIPIJ)
http://www.airccse.org/journal/sipij/vol10.html
REFERENCES:
[1] Bell III, J. F. et al, (2017) “The Mars Science Laboratory Curiosity Rover Mast Camera
(Mastcam) Instruments: Pre-Flight and In-Flight Calibration, Validation, and Data Archiving”, AGU
Journal Earth and Space Science.
[2] Ayhan, B & Kwan, C & Vance, S, (2015) “On the Use of a Linear Spectral Unmixing Technique
for Concentration Estimation of APXS Spectrum”, J. Multidisciplinary Engineering Science and
Technology, 2, 2469-2474.
[3] Wang, W., Li, S., Qi, H., Ayhan, B., Kwan, C., Vance, S., (2014), “Revisiting the Preprocessing
Procedures for Elemental Concentration Estimation based on CHEMCAM LIBS on MARS Rover”,
6th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing
(WHISPERS)
[4] Wang, W., Ayhan, B., Kwan, C., Qi, H., Vance, S., (2014), “A Novel and Effective Multivariate
Method for Compositional Analysis using Laser Induced Breakdown Spectroscopy”, 35th
International Symposium on Remote Sensing of Environment
[5] Ayhan, B.; Dao, M.; Kwan, C.; Chen, H.; Bell, J.; Kidd, R., (2017), “A Novel Utilization of Image
Registration Techniques to Process Mastcam Images in Mars Rover with Applications to Image
Fusion, Pixel Clustering, and Anomaly Detection”, IEEE Journal of Selected Topics in Applied Earth
Observations and Remote Sensing,
[6] Kwan, C.; Dao, M.; Chou, B.; Kwan, L. M.; Ayhan, B., (2017), “Mastcam Image Enhancement
Using Estimated Point Spread Functions”, IEEE Ubiquitous Computing, Electronics & Mobile
Communication Conference, New York.
[7] Kwan, C.; Chou, B. and Ayhan B., (2018), “Enhancing Stereo Image Formation and Depth Map
Estimation for Mastcam Images”, IEEE Ubiquitous Computing, Electronics & Mobile
Communication Conference, New York.
[8] Kwan, C.; Larkin, J., (2017), “Perceptually Lossless Compression for Mastcam Images”, IEEE
Ubiquitous Computing, Electronics & Mobile Communication Conference, New York.
[9] Haines, R. F.; Chuang, S. L., (1992), “The effects of video compression on acceptability of images
for monitoring life sciences experiments”, NASA-TP-3239.
[10] Garrett-Glaser, J., (2010). “Patent skullduggery: Tandberg rips off x264 algorithm,” online
https://lwn.net/Articles/417562/.
[11] Hruska, J., (2013), “H.265 benchmarked: Does the next-generation video codec live up to
expectations?” ExtremeTech.
[12] International Organization for Standardization, “ISO/IEC 15444-1:2016 - Information
technology -- JPEG 2000 image coding system: Core coding system”, retrieved 2017-10-19.
[13] Ayhan, B.; Kwan, C. and Zhou, J., (2018), “A New Nonlinear Change Detection Approach
Based on Band Ratioing”, Algorithms and Technologies for Multispectral, Hyperspectral, and
Ultraspectral Imagery XXIV.
[14] Glaser, F., (2010), “First Look: H.264 and VP8 Compared”, Diary of An x264 Developer.
[15] Converse, A., (2015), “New video compression techniques under consideration for VP10”,
presentation at the VideoLAN Dev Days.
[16] Haykin, S., (1993), “Neural Networks and Learning Machines”, Pearson Education.
[17] Wu, J.; Liang, Q. and Kwan, C., (2012), “A Novel and Comprehensive Compressive Sensing
based System for Data Compression”, IEEE Globecom.
[18] Blanes, I., Magli, E., and Serra-Sagrista, J., (2014), “A tutorial on image compression for optical
space imaging systems”, Geoscience and Remote Sensing Magazine, IEEE, vol. 2, no. 3, pp. 8–26.
[19] Du, Q. and Fowler, J. E., (2007), “Hyperspectral image compression using JPEG2000 and
principal component analysis”, Geoscience and Remote Sensing Letters, IEEE, vol. 4, no. 2, pp. 201–
205.
[20] Zhou, J. and Kwan, C., (2018), “A Hybrid Approach for Wind Tunnel Data Compression”, Data
Compression Conference, Snowbird, Utah, USA.
[21] Kwan, C. and Luk, Y., (2018), “Hybrid sensor network data compression with error resiliency”,
Compression Conference, Snowbird, Utah, USA.
[22] Strang, G. and Nguyen, T, (1997), “Wavelets and filter banks”, Wellesley-Cambridge Press.
[23] Kwan, C.; Li, B.; Xu, R.; Tran, T. and Nguyen, T., (2001), “Very Low-Bit-Rate Video
Compression Using Wavelets”, Wavelet Applications VIII, 4391, 176-180.
[24] Kwan, C.; Li, B.; Xu, R.; Tran, T. and Nguyen, T., (2001), “SAR Image Compression Using
Wavelets”, Wavelet Applications VIII, 4391, 349-357.
[25] Kwan, C.; Li, B.; Xu, R.; Li, X.; Tran, T. and Nguyen, T. Q., (2006), “A Complete Image
Compression Codec Based on Overlapped Block Transform”, Eurosip Journal of Applied Signal
Processing, 1-15.
[26] Ponomarenko, N.; Silvestri, F.; Egiazarian, K.; Carli, M.; Astola, J. and Lukin, V., (2007), “On
between-coefficient contrast masking of DCT basis functions”, Proc. Third International Workshop
on Video Processing and Quality Metrics for Consumer Electronics, Scottsdale, AZ, USA.
[27] Kwan, C.; Shang, E. and Tran, T., (2018), “Perceptually lossless image compression with error
recovery”, 2nd International Conference on Vision, Image and Signal Processing, Las Vegas, NV,
USA.
[28] Kwan, C., Shang, E. and Tran, T., (2018), “Perceptually lossless video compression with error
concealment”, 2nd International Conference on Vision, Image and Signal Processing, Las Vegas, NV,
USA.
Perceptually Lossless Compression with Error Concealment for Periscope
and Sonar Videos
Chiman Kwan, Jude Larkin, Bence Budavari and Eric Shang, Applied Research LLC, USA, and
Trac D. Tran, The Johns Hopkins University, USA
ABSTRACT:
We present a video compression framework that has two key features. First, we aim at achieving
perceptually lossless compression for low frame rate videos (6 fps). Four well-known video codecs in
the literature have been evaluated and the performance was assessed using four well-known
performance metrics. Second, we investigated the impact of error concealment algorithms for
handling corrupted pixels due to transmission errors in communication channels. Extensive
experiments using actual videos have been performed to demonstrate the proposed framework.
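As an illustration of the evaluation side, here is PSNR, one of the standard image-quality metrics
typically used in such comparisons (the paper's exact four metrics are not reproduced here); 8-bit
frames are assumed.

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio between two same-size frames, in dB."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Toy usage: a frame and a mildly distorted copy of it.
a = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
noise = np.random.randint(-3, 4, a.shape)
b = np.clip(a.astype(int) + noise, 0, 255).astype(np.uint8)
print(round(psnr(a, b), 1))   # roughly 40 dB for this mild distortion
```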
KEYWORDS:
Perceptually lossless compression; error recovery; maritime and sonar videos
Full Text: https://aircconline.com/sipij/V10N2/10219sipij01.pdf
Signal & Image Processing: An International Journal (SIPIJ)
http://www.airccse.org/journal/sipij/vol10.html
REFERENCES:
[1] Strang, G. and Nguyen, T, (1997), “Wavelets and filter banks”, Wellesley-Cambridge Press.
[2] Kwan, C.; Li, B.; Xu, R.; Tran, T. and Nguyen, T., (2001), “Very Low-Bit-Rate Video
Compression Using Wavelets”, Wavelet Applications VIII, 4391, 176-180.
[3] Kwan, C., Larkin, J., Budavari, B. and Chou, B., (2019), “Compression algorithm selection for
multispectral Mastcam images,” Signal & Image Processing: An International Journal.
[4] Kwan, C. and Larkin, J., (2018), “Perceptually Lossless Compression for Mastcam Images,” IEEE
Ubiquitous Computing, Electronics & Mobile Communication Conference, New York City,
[5] Pennebaker, W. B. and Mitchell, J. L., (1993), JPEG–Still image data compression standard, Van
Nostrand Reinhold.
[6] Marpe, D., George, G., Cycon, H. L., and Barthel, K. U., (2004) “Performance evaluation of
MotionJPEG2000 in comparison with H.264/AVC operated in pure intracoding mode,” Proc. SPIE
5266, Wavelet Applications in Industrial Processing.
[7] Kwan, C.,Li, B., Xu, R., Tran, T., and Nguyen, T., (2001), “SAR image compression using
wavelets,” Wavelet Applications VIII, Proc. SPIE (vol. 4391).
[8] Tran, T. D., Liang, J., Tu, C., (2003)“Lapped transform via time-domain pre-and post-filtering,”
IEEE Transactions on Signal Processing.
[9] Valin, J.-M. and Terriberry, T. B., (2015), “Perceptual Vector Quantization for Video Coding,”
Proceedings of SPIE Visual Information Processing and Communication Conference.
[10] Kwan, C., Shi, E., Um, Y.,(2018),“High performance video codec with error concealment”, Data
Compression Conference.
[11] Kwan, C., Larkin, J., Budavari, B., Chou, B., Shang, E., Tran, T. D., (2019), “A Comparison of
Compression Codecs for Maritime and Sonar Images in Bandwidth Constrained Applications,”
Computers.
[12] Kwan, C., Shang, E. and Tran, T., (2018), “Perceptually lossless video compression with error
concealment”, 2nd International Conference on Vision, Image and Signal Processing, Las Vegas, NV,
USA.
[13] Ozer, J., (2010), “VP8 vs. H.264,” Available online.
[14] Ozer, J., (2016), “What is VP9,” Available online.
[15] Dogan, S., Sadka, A. H., Kongoz, A. M., (2005), “Error Resilient Techniques for Video
Transmission Over Wireless Channels,” Center for Communications System Research, U. Surrey,
UK.
[16] Nguyen, D., Dao, M., Tran, T. D., (2011), “Error concealment via 3-mode tensor
approximation”, IEEE Int. Conf. on Image Processing (ICIP), Brussels, Sep. 2011.
[17] Kwan, C., Budavari, B., Dao, M., Zhou, J.,(2017),“New Sparsity Based Pansharpening
Algorithm for Hyperspectral Images,”IEEE Ubiquitous Computing, Electronics & Mobile
Communication Conference, p 88-93.
[18] Dao, M., Kwan, C., Ayhan, B., Tran, T.,(2016),“Burn Scar Detection Using Cloudy MODIS
Images via Low-rank and Sparsity-based Models,”IEEE Global Conference on Signal and
Information Processing, p 177 – 181.
[19] Wang, W., Li, S., Qi, H., Ayhan, B., Kwan, C., Vance, S., (2015), “Identify Anomaly Component
by Sparsity and Low Rank”, IEEE Workshop on Hyperspectral Image and Signal Processing:
Evolution in Remote Sensing (WHISPERS).
[20] Wang, W., Li, S., Qi, H., Ayhan, B., Kwan, C., Vance, S., (2014), “Revisiting the Preprocessing
Procedures for Elemental Concentration Estimation based on CHEMCAM LIBS on MARS Rover”,
6th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing
(WHISPERS).
[21] Zhou, J., Kwan, C.,(2018),“High Performance Image Completion using Sparsity based
Algorithms”, SPIE Commercial + Scientific Sensing and Imaging Conference.
[22] Zhou, J., Ayhan B., Kwan, C., Tran, T.,(2018),“ATR Performance Improvement Using Images
with Corrupted or Missing Pixels”, SPIE Defense + Security Conference.
[23] Kwan, C., Luk, Y.,(2018),“Hybrid sensor network data compression with error resiliency,”Data
Compression Conference.
[24] Zhou, J., Kwan, C., (2018),“Missing Link Prediction in Social Networks,”15th International
Symposium on Neural Networks.
[25] Kwan, C., Zhou, J.,(2015), Method for Image Denoising, Patent #9,159,121.
[26] Elad, M.,(2010), Sparse and Redundant Representations, Springer New York.
[27] Chen, Y., Hu, Y., Au, O. C., Li, H., Chen, C. W.,(2008), “Video error concealment using
spatiotemporal boundary matching and partial differential equation,” IEEE Trans. on Multimedia, vol.
10, no. 1, pp. 2-15, 2008.
[28] Ponomarenko, N., Silvestri, F., Egiazarian, K., Carli, M., Astola, J., Lukin, V., (2007), “On
between-coefficient contrast masking of DCT basis functions”, Proc. Third International Workshop on
Video Processing and Quality Metrics for Consumer Electronics, Scottsdale, AZ, USA.
Application of A Computer Vision Method for Soiling Recognition in
Photovoltaic Modules for Autonomous Cleaning Robots
Tatiani Pivem, Federal University of Mato Grosso do Sul - UFMS, Brazil, and Felipe de Oliveira de
Araujo, Laura de Oliveira de Araujo and Gustavo Spontoni de Oliveira, Nexsolar Energy Solutions, Brazil
ABSTRACT:
It is well known that soiling can reduce the generation efficiency of PV systems; according to the
literature, the loss of energy production in photovoltaic systems can in some cases reach up to 50%.
In industry there are various types of cleaning robots that can substitute for human action, reduce
cleaning costs, be used in places where access is difficult, and significantly increase the yield of
the systems. In this paper we present an application of a computer vision method for soiling
recognition in photovoltaic modules for autonomous cleaning robots. Our method extends classic
computer vision algorithms such as Region Growing and the Hough transform. Additionally, we adopt a
pre-processing technique based on Top-Hat and edge detection filters. We have performed a set of
experiments to test and validate this method. The article concludes that the developed method can
bring more intelligence to photovoltaic cleaning robots.
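To illustrate the kind of pre-processing pipeline named above, here is a hedged OpenCV sketch
combining a Top-Hat filter, edge detection and a Hough transform; the file name and all thresholds
are hypothetical, and the Region Growing stage of the full method is not shown.

```python
import cv2
import numpy as np

# Hypothetical grayscale photo of a PV module.
img = cv2.imread('panel.png', cv2.IMREAD_GRAYSCALE)

# Top-Hat filtering emphasizes bright soiling spots smaller than the
# structuring element.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 15))
tophat = cv2.morphologyEx(img, cv2.MORPH_TOPHAT, kernel)

# Edge detection plus a probabilistic Hough transform to locate the
# module's straight borders.
edges = cv2.Canny(img, 50, 150)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=100, maxLineGap=10)

# Crude soiling candidates: strong Top-Hat responses (illustrative threshold);
# a region-growing step would refine these into connected soiled regions.
_, soiling = cv2.threshold(tophat, 40, 255, cv2.THRESH_BINARY)
```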
KEYWORDS:
Solar Panel, Soiling Identification, Cartesian Robots, Autonomous Robots, Computer Vision
Full Text: https://aircconline.com/sipij/V10N3/10319sipij05.pdf
Signal & Image Processing: An International Journal (SIPIJ)
http://www.airccse.org/journal/sipij/vol10.html
REFERENCES:
[1] Kannan N, Vakeesan D. Solar energy for future world: – a review. Renew Sustain Energy Rev
2016;62:1092–105. http://dx.doi.org/10.1016/j.rser.2016.05.022
[2] Ehsanul Kabir, Pawan Kumarb, Sandeep Kumarc, Adedeji A. Adelodund, Ki-Hyun Kime “Solar
energy: Potential and future prospects”. Renewable and Sustainable Energy Reviews 82 (2018) 894–
900
[3] Darwish, Z.A., Kazem, H.A., Sopian, K., Al-Goul, M.A. and Alawadhi, H., 2015. Effect of dust
pollutant type on photovoltaic performance. Renewable and Sustainable Energy Reviews, 41, pp.735-
744
[4] Mekhilef S, Saidur R, Kamalisarvestani M. Effect of dust, humidity and air velocity on efficiency
of photovoltaic cells. Renew Sustain Energy Rev2012;16:2920–5
[5] Menendez, O., Auat Cheein, F. A., Perez, M., & Kouro, S. (2017). Robotics in Power Systems:
Enabling a More Reliable and Safe Grid. IEEE Industrial Electronics Magazine, 11(2), 22–34.
doi:10.1109/mie.2017.2686458
[6] Yfantis E (2017) An Intelligent Robots-Server System for Solar Panel Cleaning and Electric
Power Output Optimization. Int Rob Auto J 3(5):00066. DOI: 10.15406/iratj.2017.03.00066
[7] El-Amiri, A., Saifi, A., Obbadi, A., Errami, Y., Sahnoun, S., & Elhassnaoui, A. (2018). Defects
Detection in Bi-Facial Photovoltaic Modules PV Using Pulsed Thermography. 2018 Renewable
Energies, Power Systems & Green Inclusive Economy (REPS-GIE).
doi:10.1109/repsgie.2018.8488833
[8] Denio, H. (2012). Aerial solar Thermography and condition monitoring of photovoltaic systems.
2012 38th IEEE Photovoltaic Specialists Conference. doi:10.1109/pvsc.2012.6317686
[9] F.P.G. Márquez, I. Segovia, Condition Monitoring System for Solar Power Plants with
Radiometric and Thermographic Sensors Embedded in Unmanned Aerial Vehicles, Measurement
(2019), doi: https://doi.org/10.1016/j.measurement.2019.02.045
[10] Deitsch, Sergiu, et al. "Automatic classification of defective photovoltaic module cells in
electroluminescence images." Solar Energy 185 (2019): 455-468.
[11] ECOPPIA. Empowering Solar. 2019. Available online: https://www.ecoppia.com/ (accessed on
20 May 2019).
[12] GEKKO. GEKKO Solar Robot. Available online: https://www.serbot.ch/en/solar-
panelscleaning/gekko-solar-robot (accessed on 20 May 2019).
[13] SMP Robotics. S5 PTZ Security Robot—Rapid Deployment Surveillance System. 2016.
Available online:https://smprobotics.com/security_robot/security-
patrolrobot/rapid_deployment_surveillance_system/ (accessed on 20 May 2019).
[14] Maurtua, I., Susperregi, L., Fernández, A., Tubío, C., Perez, C., Rodríguez, J., Ghrissi, M.
(2014). MAINBOT – Mobile Robots for Inspection and Maintenance in Extensive Industrial Plants.
Energy Procedia, 49, 1810–1819. doi:10.1016/j.egypro.2014.03.192
[15] Felsch, T., Strauss, G., Perez, C., Rego, J., Maurtua, I., Susperregi, L., & Rodríguez, J. (2015).
Robotized Inspection of Vertical Structures of a Solar Power Plant Using NDT Techniques. Robotics,
4(2), 103–119. doi:10.3390/robotics4020103
[16] Kim, K.A.; Seo, G.S.; Cho, B.H.; Krein, P.T. Photovoltaic Hot-Spot Detection for Solar Panel
Substrings Using AC Parameter Characterization. IEEE Trans. Power Electron. 2016, 31, 1121–1130.
DOI 10.1109/TPEL.2015.2417548
[17] Samani L, Mirzaei R, Model Predictive Control Method to Achieve Maximum Power Point
Tracking Without Additional Sensors in Stand-Alone Renewable Energy Systems, Optik (2019),
https://doi.org/10.1016/j.ijleo.2019.04.067
[18] Daniel Riley and Jay Johnson, “Photovoltaic Prognostics and Heath Management using Learning
Algorithms”, Photovoltaic Specialists Conference (PVSC), 2012 38th IEEE, DOI:
10.1109/PVSC.2012.6317887
[19] Zapata, J.W.; Perez, M.A.; Kouro, S.; Lensu, A.; Suuronen, A. Design of a Cleaning Program for
a PV Plant Based on Analysis of Energy Losses. IEEE J. Photovolt. 2015, 5, 1748–1756.
[20] Mohammad Hammouda,, Bassel Shokra, Ali Assia, Jaafar Hallala, Paul Khouryb . “Effect of
dust cleaning on the enhancement of the power generation of a coastal PV-power plant at Zahrani
Lebanon”, Solar Energy 184 (2019) 195–201. DOI: https://doi.org/10.1016/j.solener.2019.04.005
[21] Sonick Suri, Anjali Jain, Neelam Verma, Nopporn Prasertpoj. “SCARA Industrial Automation
Robot”, 2018 International Conference on Power Energy, Environment and Intelligent Control
(PEEIC) G. L. Bajaj Inst. of Technology and Management Greater Noida, U. P., India, Apr 13-14,
2018
[22] Biryukov, S.,Faiman, D., Goldfeld, A.: 'An optical system for the quantitative study of particulate
contamination on solar collector surfaces' Solar Energy Vol. 66, No. 5, pp. 371–378, 1999
[23] Atten P., Pang H.L., Reboud J.L., D.: ' Study of dust removal by standing wave electric curtain
for application to solar cells on mars '. IEEE Transactions on Industry Applications Vol.45, France,
Jan 2009, pp. 75–86
[24] Mehta, Sachin & P. Azad, Amar & Chemmengath, Saneem & Raykar, Vikas & Kalyanraman,
Shivkumar.: 'DeepSolarEye: Power Loss Prediction and Weakly Supervised Soiling Localization via
Fully Convolutional Networks for Solar Panels '. IEEE Winter Conference on Applications of
Computer Vision (WACV), Lake Tahoe, NV, 2018, pp. 333-342.
[25] WK Yap, R Galet, KC Yeo.: 'Quantitative analysis of dust and soiling on solar pv panels in the
tropics utilizing image-processing methods'. Asia-Pacific Solar Research Conference, 2015
[26] Gonzales, R., Woods, R.: Digital Image Processing'. Vol.2
[27] Philipe A. Dias., Henry Medeiros.: 'Semantic Segmentation Refinement by Monte Carlo Region
Growing of High Confidence Detections'. Cornell University Library, 2018, available at:
https://arxiv.org/abs/1802.07789
A Novel Data Dictionary Learning for Leaf Recognition
Shaimaa Ibrahem, Higher Institute for Computer Sciences and Information System, Egypt, and
Yasser M. Abd El-Latif and Naglaa M. Reda, Ain Shams University, Egypt
ABSTRACT
Automatic leaf recognition via image processing has become greatly important for a number of
professionals, such as botanical taxonomists, environmental protectors, and foresters. Learning an
over-complete leaf dictionary is an essential step in leaf image recognition, but large leaf image
dimensions and large numbers of training images stand in the way of fast and complete construction
of the leaf data dictionary. In this work an efficient approach is applied to construct an
over-complete leaf data dictionary for a set of large-dimension images based on sparse
representation. In the proposed method a new cropped-contour method is used to crop the training
images. The experiments are evaluated using the correlation between the sparse representation and
the data dictionary, with a focus on computing time.
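For orientation, a short sketch of the online dictionary-learning step using scikit-learn's
MiniBatchDictionaryLearning, which implements the Mairal et al. algorithm cited as reference [12]
below; the patch size, atom count and the stand-in image are illustrative, and the paper's
cropped-contour preprocessing is not shown.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

leaf = np.random.rand(128, 128)                    # stand-in leaf image
patches = extract_patches_2d(leaf, (8, 8), max_patches=2000)
X = patches.reshape(len(patches), -1)
X -= X.mean(axis=1, keepdims=True)                 # per-patch mean removal

# 128 atoms for 64-dimensional patches: an over-complete dictionary.
dico = MiniBatchDictionaryLearning(n_components=128, alpha=1.0,
                                   batch_size=64, random_state=0)
codes = dico.fit_transform(X)                      # sparse representation
D = dico.components_                               # learned dictionary atoms
```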
KEYWORDS
Leaf image recognition, Dictionary learning, Sparse representation, Online Dictionary Learning
Full Text : https://aircconline.com/sipij/V10N3/10319sipij04.pdf
Signal & Image Processing: An International Journal (SIPIJ)
http://www.airccse.org/journal/sipij/vol10.html
REFERENCES
[1] Jou-Ken Hsiao, Li-Wei Kang, “Learning-Based Leaf Image Recognition Frameworks”, Springer
International Publishing Switzerland 2015.
[2] C. Yang, H. Wei, and Q. Yu, “Multiscale Triangular Centroid Distance for Shape-Based Plant
Leaf Recognition,” in European Conf. on Artificial Intelligence, 2016, pp. 269–276.
[3] Wu, S.G., Bao, F.S., Xu, E.Y., Wang, Y.-X., Chang, Y.-F., Xiang, Q.-L.” A leaf recognition
algorithm for plant classification using probabilistic neural network.”, In: Proceedings of IEEE
International Symposium on Signal Processing and Information Technology, pp. 11–16,Giza, Egypt
Dec 2007.
[4] Du, J.-X., Wang, X.-F., Zhang, G.-J. ” Leaf shape based plant species recognition. “ Appl.
Math.Comput. 185(2), 883–893 (2007).
[5] Sari, C., Akgul, C.B., Sankur, B., “ Combination of gross shape features, fourier descriptors and
multiscale distance matrix for leaf recognition.” In: Proceedings of International Symposium on
ELMAR, pp. 23–26, Zadar, Croatia, Sept 2013.
[6] O. Mzoughi, I. Yahiaoui, N. Boujemaa, and E. Zagrouba, “Semanticbased automatic structuring of
leaf images for advanced plant species identification,” Multimedia Tools and Applications, vol. 75,
no. 3, pp. 1615–1646, 2016.
[7] Aakif , M.F. Khan ,” Automatic classification of plants based on their leaves”, Biosyst. Eng. 139
(2015) 66–75 .
[8] Kadir, A., Nugroho, L.E., Susanto, A., Santosa, P.I., ” Leaf classification using shape, color,and
texture features.” Int. J. Comput. Trends Technol. 1(3), 225–230 (2011).
[9] A. Olsen , S. Han , B. Calvert , P. Ridd , O. Kenny , “In situ leaf classification using histograms of
oriented gradients”, in: International Conference on Digital Image Computing, 2015, pp. 1–8 .
[10] Z. Tang , Y. Su , M.J. Er , F. Qi , L. Zhang , J. Zhou , “A local binary pattern based texture
descriptors for classification of tea leaves”, Neurocomputing 168 (2015) 1011–1023 .
[11] G.L. Grinblat , L.C. Uzal , M.G. Larese , P.M. Granitto ,” Deep learning for plant identification
using vein morphological patterns”, Comput. Electron. Agric. 127 (2016) 418–424 .
[12] Mairal, J., Bach, F., Ponce, J., Sapiro, G.: “Online learning for matrix factorization and sparse
coding”. J. Mach. Learn. Res 11, 19–60 (2010).
[13] Aharon, M., Elad, M., Bruckstein, A.M.,” The K-SVD: an algorithm for designing of
overcomplete dictionaries for sparse representation”. IEEE Trans. Sig. Process. 54(11), 4311–4322
(2006).
[14] Beleites, C. and Neugebauer, U. and Bocklitz, T. and Krafft, C. and Popp, J.: “Sample size
planning for classification models”. Anal. Chim. Acta, 2013, 760, 25-33. DOI:
10.1016/j.aca.2012.11.007; accepted manuscript on arXiv: 1211.1323.
[15] R. Rubinstein, M. Zibulevsky, and M. Elad, "Learning Sparse Dictionaries for Sparse Signal
Approximation", Technical Report - CS, Technion, June 2009.
[16] Rodgers, J. L.; Nicewander, W. A. (1988). "Thirteen ways to look at the correlation coefficient".
The American Statistician. 42 (1): 59–66. doi:10.1080/00031305.1988.10475524. JSTOR 2685263.
[17] The leaf image dataset available from http://sourceforge.net/projects/flavia/files/.
[18] J. Sulam, B. Ophir, M. Zibulevsky and M. Elad, "Trainlets: Dictionary Learning in High
Dimensions", IEEE Transactions on Signal Processing, Volume: 64, Issue: 12, June15, 2016.
Rain Streaks Elimination Using Image Processing Algorithms
Dinesh Kadam and S. V. Bonde, SGGSIET, India, and Amol R. Madane and Krishnan Kutty, Tata
Consultancy Services Ltd., India
ABSTRACT
The paper addresses the problem of rain streak removal from videos. While rain streak removal
from a scene is important and there is a lot of research in this area, robust and real-time
algorithms are unavailable in the market. Difficulties in rain streak removal arise due to low
visibility, low illumination, and the presence of moving cameras and objects. The challenge that
plagues rain streak recovery algorithms is detecting rain streaks and replacing them with original
values to recover the scene. In this paper, we discuss the use of photometric and chromatic
properties for rain detection. An updated Gaussian Mixture Model (updated GMM) is used to detect
moving objects. The rain streak removal algorithm detects rain streaks in videos and replaces them
with estimated values equivalent to the original values; the spatial and temporal properties are
used for this replacement.
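As a rough illustration of the detection-and-replacement loop, the OpenCV sketch below uses the
MOG2 background subtractor (Zivkovic's improved adaptive GMM, reference [15] below) together with a
simple photometric cue; the video name and thresholds are hypothetical, and the chromatic checks
and moving-object handling of the full algorithm are omitted.

```python
import cv2

cap = cv2.VideoCapture('rain.mp4')    # hypothetical input video
mog2 = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16,
                                          detectShadows=False)
prev = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    fg = mog2.apply(gray)             # moving pixels: objects plus rain
    if prev is not None:
        # Photometric cue: rain streaks cause brief positive brightness spikes.
        spikes = cv2.threshold(cv2.subtract(gray, prev), 10, 255,
                               cv2.THRESH_BINARY)[1]
        candidates = cv2.bitwise_and(fg, spikes)
        # Temporal repair: take the previous frame's values at rain pixels.
        restored = gray.copy()
        restored[candidates > 0] = prev[candidates > 0]
    prev = gray
cap.release()
```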
KEYWORDS
Dynamic Scene, Edge Filters, Gaussian Mixture Model (GMM), Rain Streaks Removal, Scene
Recovery, Video Deraining
Full Text: https://aircconline.com/sipij/V10N3/10319sipij03.pdf
Signal & Image Processing: An International Journal (SIPIJ)
http://www.airccse.org/journal/sipij/vol10.html
REFERENCES
[1] Astha Modak, Samruddhi Paradkar, Shruti Manwatkar, Amol R. Madane, Ashwini M. Deshpande,
“Human Head Pose and Eye State Based Driver Distraction Monitoring System”, 3rd Computer
Vision and Image Processing (CVIP) 2018, Indian Institute of Information Technology, Design and
Manufacturing, Jabalpur (IIITDMJ), India.
[2] A. K. Tripathi, S. Mukhopadhyay, “Video Post Processing: Low latency Spatiotemporal Approach
for Detection and Removal of Rain”, IET Image Processing Journal, Vol. 6, no. 2, pp. 181-196,
March 2012.
[3] Kshitiz Garg and Shree K. Nayar, “Detection and Removal of Rain From Videos”, IEEE
Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), vol. 1, pp. 528-
535, July 2004.
[4] Kshitiz Garg and Shree K. Nayar, “When Does a Camera See Rain?”, Tenth IEEE International
Conference on Computer Vision, ICCV 2005, 17-21 Oct. 2005
[5] A. K. Tripathi, S. Mukhopadhyay, “Video Post Processing: Low latency Spatiotemporal Approach
for Detection and Removal of Rain”, IET Image Processing Journal, Vol. 6, no. 2, pp. 181-196,
March 2012.
[6] Jie Chen, Lap-Pui Chau, “A Rain Pixel Recovery Algorithm for Videos with Highly Dynamic
Scene”, IEEE Transactions on Image Processing, vol. 23, no. 3, pp. 1097-1104, March 2014.
[7] Jin Hwan Kim, Jae Young Sim, Chang Su Kim, “Video Deraining and Denoising Using Temporal
Correlation and Low Rank Matrix Completion”, IEEE Transactions on Image Processing, vol 24, no.
9, September 2015
[8] A. K. Tripathi, S. Mukhopadhyay, “Meteorological Approach for Detection and Removal of Rain
From Video”, IET Computer Vision 2013, Vol 7, no. 1, pp. 36-47, May 2013
[9] Jing Xu, Wei Zhao, Peng Liu, Xianglong Tang, “An improved guidance image based method to
remove rain and snow in a single image”, Computer and Information Science, vol. 5, no. 3, May 2012
[10] Kshitiz Garg and S. K. Nayar, “Vision and rain”, International Journal on Computer Vision, vol.
75, no. 1, pp. 3–27, 2007.
[11] X. Zhang, H. Li, Y. Qi, “Rain Removal in Video by Combining Temporal and Chromatic
Properties”, in Proc. IEEE International Conference Multimedia and Expo., 2006, pp. 461-464.
[12] Duan Yu Chen, Chien Cheng Chen and Li Wei, “Visual Depth Guided Color Image Rain
Streaks Removal Using Sparse Coding”, IEEE Transactions on Circuits and Systems for Video
Technology, vol. 24, no. 8, August 2014.
[13] C. Stauffer, W.E.L. Grimson, “Adaptive background mixture models for real-time tracking”,
Computer Vision and Pattern Recognition IEEE Computer Society Conference , vol. 2, pp. 252-259,
23-25 June 1999
[14] KaewTraKulPong P, Bowden R., “An improved adaptive background mixture model for real-
time tracking with shadow detection,” Proceedings 2nd European Workshop on Advanced Video
Based Surveillance Systems (AVBS 2001) , Kingston, UK, September 2001.
[15] Zivkovic Z., “Improved adaptive Gaussian mixture model for background subtraction,” Int Conf
Pattern Recognition (ICPR 2004), 2004, 2: 28-31.
[16] Zang Q, Klette R., “Evaluation of an adaptive composite gaussian model in video surveillance,”
CITR Technical Report 114, Auckland University, August 2002.
[17] White B, Shah M., “Automatically tuning background subtraction parameters using particle
swarm optimization,” IEEE Int Conf on Multimedia & Expo (ICME 2007), Beijing, China, 2007;
1826-1829
[18] Grimson Wel, Stauffer C. Romano R. Lee L., “Using adaptive tracking to classify and monitor
activities in a site,” 1998 IEEE Computer Society Conference on Computer Vision and Pattern
Recognition (Cat. No.98CB36231). IEEE Comput. Soc. 1998. 1998.
[19] Stauffer C, Grimson W. E. L., “Learning patterns of activity using real-time tracking,” IEEE
Transactions on Pattern Analysis & Machine Intelligence, 2000. 22(8): p. 747-57.
[20] Pushkar Gorur, Bharadwaj Amrutur, “Speeded up Gaussian Mixture Model Algorithm for
Background Subtraction,” 8th IEEE Int Conf on Advanced Video and Signal-Based Surveillance,
2011.
[21] Thierry Bouwmans, Fida El Baf, Bertrand Vachon, “Background Modeling using Mixture of
Gaussians for Foreground Detection - A Survey,” Recent Patents on Computer Science, Bentham
Science Publishers, 2008, 1 (3), pp.219-237.
[22] L. Li, W. Huang, Q. Tian, “Statistical Modelling of Complex Background for Foreground Object
Detection,” IEEE Transactions on Image Processing, 13(11):1459-1472, 2004.
Method for the Detection of Mixed QPSK Signals Based on the
Calculation of Fourth-Order Cumulants
Vasyl Semenov, Pavel Omelchenko and Oleh Kruhlyk, Delta SPE LLC, Ukraine
ABSTRACT
In this paper we propose a method for the detection of Carrier-in-Carrier signals using QPSK
modulation. The method is based on the calculation of fourth-order cumulants. In accordance with
the methodology based on the Receiver Operating Characteristic (ROC) curve, a threshold value for
the decision rule is established. It was found that the proposed method provides the correct detection
of the sum of QPSK signals for a wide range of signal-to-noise ratios and also for the different
bandwidths of mixed signals. The obtained results indicate the high efficiency of the proposed
detection method. The advantage of the proposed detection method over the “radiuses” method is also
shown.
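To show the principle, here is a sketch of the normalized fourth-order cumulant C42, a standard
statistic in cumulant-based PSK detection: it is approximately -1 for a clean QPSK signal and 0 for
Gaussian noise. The paper's exact statistic for mixed Carrier-in-Carrier signals and its
ROC-derived threshold are not reproduced here.

```python
import numpy as np

def c42(x):
    """Normalized fourth-order cumulant C42 of a complex baseband signal."""
    x = x - x.mean()
    power = np.mean(np.abs(x) ** 2)
    m42 = np.mean(np.abs(x) ** 4)              # fourth-order moment E|x|^4
    m20 = np.mean(x ** 2)                      # second-order moment E[x^2]
    return (m42 - np.abs(m20) ** 2 - 2 * power ** 2) / power ** 2

rng = np.random.default_rng(0)
n = 10_000
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, n)))
noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
print(round(c42(qpsk), 2))    # about -1.0: QPSK present
print(round(c42(noise), 2))   # about  0.0: Gaussian noise only
```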
KEYWORDS
Carrier-in-Carrier, Cumulants, QPSK, Receiver Operating Curve
Full Text: https://aircconline.com/sipij/V10N3/10319sipij02.pdf
Signal & Image Processing: An International Journal (SIPIJ)
http://www.airccse.org/journal/sipij/vol10.html
REFERENCES
[1] Agne, Craig & Cornell, Billy & Dale, Mark & Keams, Ronald & Lee, Frank, (2010)
“Sharedspectrum bandwidth efficient satellite communications”, Proceedings of the IEEE Military
Communications Conference (MILCOM' 10), pp341-346.
[2] Gouldieff, Vincent & Palicot, Jacques, (2015) “MISO Estimation of Asynchronously Mixed
BPSK Sources”, Proc. IEEE Conf. EUSIPCO, pp369-373.
[3] Semenov, Vasyl, (2018) “Method of Iterative Single-Channel Blind Separation for QPSK
Signals”, Mathematical and computer modelling, Vol. 17, No. 2, pp108-116.
[4] Feng, Hao & Gao, Yong, (2016) “High-Speed Parallel Particle Filter for PCMA Signal Blind
Separation”, Radioelectronics and Communications Systems, Vol.59, No.10, pp305-313.
[5] Meyer-Bäse, Anke & Gruber, Peter & Theis, Fabian & Foo, Simon, (2006) “Blind source
separation based on self-organizing neural network”, Eng. Appl. Artificial Intelligence, Vol. 19,
pp305-311.
[6] Fernandes, Carlos Estevao R. & Comon, Pierre & Favier, Gerard, (2010) “Blind identification of
MISO-FIR channels”, Signal Processing, Vol. 90, pp490–503.
[7] Swami, Ananthram & Sadler, Brian M., (2000) “Hierarchical digital modulation
classification using cumulants,” IEEE Trans. Commun., Vol. 48, pp416-429.
[8] Wunderlich, Adam & Goossens, Bart & Abbey, Craig K. “Optimal Joint Detection and Estimation
That Maximizes ROC-Type Curves” (2016) IEEE Transactions on Medical Imaging, Vol. 35, No.9,
pp2164– 2173.
Machine-Learning Estimation of Body Posture and Physical Activity by
Wearable Acceleration and Heartbeat Sensors
Yutaka Yoshida(2), Emi Yuda(3,1), Kento Yamamoto(4), Yutaka Miura(5) and Junichiro Hayano(1);
(1) Nagoya City University Graduate School of Medical Science, Japan; (2) Nagoya City University
Graduate School of Design and Architecture, Japan; (3) Tohoku University Graduate School of
Engineering, Japan; (4) University of Tsukuba Graduate School of Comprehensive Human Sciences,
Japan; (5) Shigakkan University, Japan
ABSTRACT
We aimed to develop a method for estimating body posture and physical activity from the
acceleration signals of a Holter electrocardiographic (ECG) recorder with a built-in accelerometer.
In healthy young subjects, triaxial acceleration and ECG signals were recorded with the Holter ECG
recorder attached to the chest wall. During the recording, subjects randomly took eight postures,
including supine, prone, left and right recumbent, standing, sitting in a reclining chair, and
sitting in chairs with and without a backrest, and performed slow walking and fast walking. Machine
learning (Random Forest) was performed on acceleration and ECG variables. The best discrimination
model was obtained when the maximum values and standard deviations of the accelerations in the
three axes and the mean R-R interval were used as feature values. The overall discrimination
accuracy was 79.2% (62.6-90.9%). Supine, prone, left recumbent, and slow and fast walking were
discriminated with >80% accuracy, although sitting and standing positions were not discriminated
by this method.
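A compact sketch of the feature construction and classifier described above, using scikit-learn and
synthetic stand-in data: per-epoch maxima and standard deviations of the three acceleration axes
plus the mean R-R interval (seven features) feed a Random Forest.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def epoch_features(acc, rr):
    """acc: (n_samples, 3) tri-axial epoch; rr: R-R intervals (s) in the epoch."""
    return np.concatenate([acc.max(axis=0), acc.std(axis=0), [rr.mean()]])

rng = np.random.default_rng(1)
# Synthetic stand-ins for labelled epochs (8 postures + 2 walking speeds).
X = np.array([epoch_features(rng.standard_normal((100, 3)),
                             rng.uniform(0.6, 1.2, 60)) for _ in range(200)])
y = rng.integers(0, 10, 200)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
posture = clf.predict(X[:1])    # predicted class for one epoch
```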
KEYWORDS
Accelerometer, Holter ECG, Posture, Activity, Machine learning, Random Forest, R-R interval
Full Text: https://aircconline.com/sipij/V10N3/10319sipij01.pdf
Signal & Image Processing: An International Journal (SIPIJ)
http://www.airccse.org/journal/sipij/vol10.html
REFERENCES
[1] World Health Organization, Global recommendations on Physical Activity for Health. Geneva:
World Health Organization; 2010.
[2] Sofi, F., Valecchi, D., Bacci, D., Abbate, R., Gensini, G. F., Casini, A., Macchi, C. (2011)
"Physical activity and risk of cognitive decline: a meta-analysis of prospective studies", J. Intern.
Med., Vol. 269, No. 1, 107-117.
[3] Yeoh, W. S., Pek, I., Yong, Y. H., Chen, X., Waluyo, A. B. (2008) "Ambulatory monitoring of
human posture and walking speed using wearable accelerometer sensors", Conf Proc IEEE Eng Med
Biol Soc, Vol. 2008, No., 5184-5187.
[4] Godfrey, A., Bourke, A. K., Olaighin, G. M., van de Ven, P., Nelson, J. (2011) "Activity
classification using a single chest mounted tri-axial accelerometer", Med. Eng. Phys., Vol. 33, No. 9,
1127-1135.
[5] Fulk, G. D., Sazonov, E. (2011) "Using sensors to measure activity in people with stroke", Top
Stroke Rehabil, Vol. 18, No. 6, 746-757.
[6] Palmerini, L., Rocchi, L., Mellone, S., Valzania, F., Chiari, L. (2011) "Feature selection for
accelerometer-based posture analysis in Parkinson's disease", IEEE Trans Inf Technol Biomed, Vol.
15, No. 3, 481-490.
[7] Doulah, A., Shen, X., Sazonov, E. (2017) "Early Detection of the Initiation of Sit-to-Stand Posture
Transitions Using Orthosis-Mounted Sensors", Sensors, Vol. 17, No. 12.
[8] Vaha-Ypya, H., Husu, P., Suni, J., Vasankari, T., Sievanen, H. (2018) "Reliable recognition of
lying, sitting, and standing with a hip-worn accelerometer", Scand. J. Med. Sci. Sports, Vol. 28, No. 3,
1092-1102.
[9] Fanchamps, M. H. J., Horemans, H. L. D., Ribbers, G. M., Stam, H. J., Bussmann, J. B. J. (2018)
"The Accuracy of the Detection of Body Postures and Movements Using a Physical Activity Monitor
in People after a Stroke", Sensors, Vol. 18, No. 7.
[10] Kerr, J., Carlson, J., Godbole, S., Cadmus-Bertram, L., Bellettiere, J., Hartman, S. (2018)
"Improving Hip-Worn Accelerometer Estimates of Sitting Using Machine Learning Methods", Med.
Sci. Sports Exerc., Vol. 50, No. 7, 1518-1524.
[11] Farrahi, V., Niemela, M., Kangas, M., Korpelainen, R., Jamsa, T. (2019) "Calibration and
validation of accelerometer-based activity monitors: A systematic review of machine-learning
approaches", Gait Posture, Vol. 68, No., 285-299.
[12] Olufsen, M. S., Tran, H. T., Ottesen, J. T., Research Experiences for Undergraduates, P., Lipsitz,
L. A., Novak, V. (2006) "Modeling baroreflex regulation of heart rate during orthostatic stress", Am J
Physiol Regul Integr Comp Physiol, Vol. 291, No. 5, R1355-1368.
[13] Hayano, J., Mukai, S., Fukuta, H., Sakata, S., Ohte, N., Kimura, G. (2001) "Postural response of
lowfrequency component of heart rate variability is an increased risk for mortality in patients with
coronary artery disease", Chest, Vol. 120, No., 1942-1952.
[14] Yoshida, Y., Furukawa, Y., Ogasawara, H., Yuda, E., Hayano, J. Longer lying position causes
lower LF/HF of heart rate variability during ambulatory monitoring. Paper presented at: 2016 IEEE
5th Global Conference on Consumer Electronics (GCCE); 11-14 Oct 2016, 2016; Kyoto, Japan.
Ransac Based Motion Compensated Restoration for Colonoscopy
Images
Nidhal Azawi and John Gauch, University of Arkansas, USA
ABSTRACT
Colonoscopy is a procedure that has been used widely to detect abnormalities in the colon.
Colonoscopy images suffer from many problems that make it hard for the doctor to investigate and
understand a patient's colon. Unfortunately, with current technology, there is no way for doctors
to know whether the whole colon surface has been investigated or not. We have developed a method
that utilizes RANSAC-based image registration to align sequences of any length in the colonoscopy
video and restores each frame of the video using information from these aligned images. We propose
two methods. The first method used a deep neural network to classify informative and
non-informative images; the classification result was used as preprocessing for the alignment
method, and we also proposed a visualization structure for the classification results. The second
method used the alignment itself to classify alignments as good or bad using two factors: the
accumulated error, and three checking steps that examine the pairwise alignment error together with
the status of the geometric transform. The second method was able to align long sequences.
KEYWORDS
Visualization, RANSAC, sequence length, geometry transform, classification, Colonoscopy.
Full Text: https://aircconline.com/sipij/V10N4/10419sipij02.pdf
Signal & Image Processing: An International Journal (SIPIJ)
http://www.airccse.org/journal/sipij/vol10.html
REFERENCES
[1] N. Azawi, J. Gauch, “Automatic Method for Classification of Informative and Noninformative
Images in Colonoscopy Video”, Int. Conf. on Medical Image Processing and Analysis (ICMIPA),
Vancouver, Canada, August 2018.
[2] N. Azawi and J. Gauch, “Motion Compensated Restoration of Colonoscopy Video,” pp. 243–256,
2019.
[3] L. Dung, C. Huang, and Y. Wu, “Implementation of RANSAC Algorithm for Feature-Based
Image Registration,” Journal of Computer and Communications, pp. 46–50, 2013.
[4] F.P.M. Oliveira, J.M.R.S. Tavares. Medical Image Registration: a Review. Computer Methods in
Biomechanics and Biomedical Engineering 17(2):73-93, 2014.
[5] S. B. Kang, M. Uyttendaele, S. Winder, and R. Szeliski, “High dynamic range video,” ACM
Trans. Graph., vol. 23, no. 3, pp. 319–325, 2003.
[6] F. M. Candocia, “On the Featureless Registration of Differently Exposed Images,” in Proc. Int.
Conf. Imaging Science, Systems & Technology, Las Vegas, NV, USA, Jun. 2003, vol. I, pp. 163–169.
[7] Hossain and B. K. Gunturk, “High Dynamic Range Imaging of Non-Static Scenes,” in Proc.SPIE
Digital Photography VII, 2011, vol. vol. 7876.
[8] H. Q. Luong, B. Goossens, A. Pizurica, and W. Philips, “Joint photometric and geometric image
registration in the total least square sense,” Pattern. Recognition. Lett., vol. 32, no. 15, pp. 2061–
2067, 2011.
[9] O. El Meslouhi, M. Kardouchi, H. Allali, T. Gadi, and Y. A. Benkaddour, “Automatic detection
and inpainting of specular reflections for colposcopic images,” Open Comput. Sci., vol. 1, no. 3, pp.
341– 354, 2011.
[10] D. G. Lowe, "Object recognition from local scale-invariant features," Proc. of the Int. Conf. on
Computer Vision, pp. 1150–1157, 1999.
[11] B. Zitova and J. Flusser, “Image registration methods: A survey,” Image Vis. Computer, vol. 21,
pp. 977–1000, 2003.
[12] S. Oldridge, G. Miller, and S. Fels, “Mapping the problem space of image registration,” in Proc.
Can. Conf. Computer and Robot Vision, St. John’s, NF, Canada, May 2011, pp. 309–315.
[13] M. Tico and K. Pulli, “Robust image registration for multi-frame mobile applications,” in Proc.
Asilomar Conf. Signals, Systems & Computers, Pacific Grove, CA, USA, 2010, pp. 860–864.
[14] S. Wu, Z. Li, J. Zheng, and Z. Zhu, “Exposure-robust alignment of differently exposed images,”
IEEE Signal Process. Lett., vol. 21, no. 7, pp. 885–889, 2014.
[15] S. Wu, Z. Li, J. Zheng, and Z. Zhu, “Exposure-robust alignment of differently exposed images,”
IEEE Signal Process. Lett., vol. 21, no. 7, pp. 885–889, 2014.
[16] S. Wei and S. Lai, “Robust and efficient image alignment based on relative gradient matching,”
IEEE Trans. image Process., vol. 15, no. 10, pp. 2936–43, 2006.
[17] C. Wu, B. Clipp, X. Li, J. M. Frahm, and M. Pollefeys, “3D model matching with
viewpointinvariant patches (VIP),” 26th IEEE Conf. Comput. Vis. Pattern Recognition, CVPR, pp. 1–
8, 2008.
[18] P. A. Freeborough and N. C. Fox, “Modelling Brain Deformations in Alzheimer Disease by Fluid
Registration of Serial 3D MR Images”, vol. 22. 1998.
[19] D. Leow, A. D. Klunder, C. R. Jack, A. W. Toga, A. M. Dale, M. A. Bernstein, P. J. Britson, J.
L. Gunter, C. P. Ward, J. L. Whitwell, B. J. Borowski, A. S. Fleisher, N. C. Fox, D. Harvey, J.
Kornak, N. Schuff, C. Studholme, G. E. Alexander, M. W. Weiner, and P. M. Thompson,
“Longitudinal stability of MRI for mapping brain change using tensor-based morphometry,”
Neuroimage, vol. 31, no. 2, pp. 627–640, 2006.
[20] K. A. Ganser, H. Dickhaus, R. Metzner, and C. R. Wirtz, “A deformable digital brain atlas
system according to Talaicrach and Tournoux,” Med. Image Anal., vol. 8, no. 1, pp. 3–22, 2004.
[21] X. Huang, J. Ren, G. Guiraudon, D. Boughner and T. M. Peters, "Rapid Dynamic Image
Registration of the Beating Heart for Diagnosis and Surgical Navigation," in IEEE Transactions on
Medical Imaging, vol. 28, no. 11, pp. 1802-1814, Nov. 2009. doi:10.1109/TMI.2009.2024684.
[22] R. Redzuwan, N. A. M. Radzi, N. M. Din, and I. S. Mustafa, “Affine versus projective
transformation for SIFT and RANSAC image matching methods,” 2015 IEEE Int. Conf. Signal Image
Process. Appl., pp. 447–451, 2015.
The Study on Electromagnetic Scattering Characteristics of JONSWAP Spectrum Sea Surface
Xiaolin Mi, Xiaobing Wang, Xinyi He and Fei Dai, Science and Technology on Electromagnetic
Scattering Laboratory, China
ABSTRACT
The JONSWAP spectrum sea surface is determined mainly by parameters such as the wind speed, the
fetch length and the peak enhancement factor, so studying the electromagnetic scattering from a
JONSWAP spectrum sea surface requires these parameters to be determined. In this paper we use the
double summation model to generate a multi-directional, irregular, rough JONSWAP sea surface and
analyse the influence of the distribution concentration parameter and the peak enhancement factor on
the rough-surface model. We then use the physical optics method to analyse how the average backward
scattering coefficient of the JONSWAP spectrum sea surface changes with different distribution
concentration parameters and peak enhancement factors. The simulation results show that the influence
of the peak enhancement factor on the average backward scattering coefficient of the sea surface is
less than 1 dB, whereas the influence of the distribution concentration parameter is more than 5 dB.
Therefore, when studying the electromagnetic scattering of the JONSWAP spectrum sea surface, the peak
enhancement factor can be taken at its mean value, but the distribution concentration parameter has to
be determined by the wave growth state.
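
For readers who want to experiment with the spectral model, the following is a minimal Python sketch
of the one-sided JONSWAP frequency spectrum under the standard fetch-limited parameterization of
Hasselmann et al. [1]. The wind speed, fetch and gamma defaults are illustrative assumptions; this is
not the authors' double summation code, which would additionally sample this spectrum over frequencies
and directions with a spreading function to synthesize the 3-D surface.

import numpy as np

def jonswap(omega, U10=10.0, fetch=80e3, gamma=3.3, g=9.81):
    # Empirical fetch-limited parameters (Hasselmann et al., 1973)
    alpha = 0.076 * (U10**2 / (fetch * g))**0.22          # Phillips constant
    omega_p = 22.0 * (g**2 / (U10 * fetch))**(1.0 / 3.0)  # peak angular frequency
    sigma = np.where(omega <= omega_p, 0.07, 0.09)        # peak-width parameter
    r = np.exp(-(omega - omega_p)**2 / (2 * sigma**2 * omega_p**2))
    pm = alpha * g**2 / omega**5 * np.exp(-1.25 * (omega_p / omega)**4)
    return pm * gamma**r  # gamma is the peak enhancement factor

omega = np.linspace(0.3, 3.0, 500)   # rad/s
S = jonswap(omega)                   # spectral density

In the double summation model, wave amplitudes drawn from this spectrum are combined with a
directional spreading function whose concentration parameter controls how multi-directional the
resulting surface is.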
KEYWORDS
JONSWAP spectrum, multidirectional wave, wave pool, the peak enhancement factor,
electromagnetic scattering
Full Text: https://aircconline.com/sipij/V10N4/10419sipij01.pdf
Signal & Image Processing: An International Journal (SIPIJ)
http://www.airccse.org/journal/sipij/vol10.html
REFERENCES
[1] Hasselmann K, Barnett T P, Bouws E, et al. Measurements of wind-wave growth and swell decay
during the Joint North Sea Wave Project (JONSWAP)[J]. Ergänzungsheft zur Deutschen
Hydrographischen Zeitschrift, Reihe A8 (Suppl.), 1973, 12: 95.
[2] Estimation of JONSWAP Spectral Parameters by Using Measured Wave Data[J].China Ocean
Engineering,1995(03):275-282.
[3] Annalisa Calini,Constance M. Schober. Characterizing JONSWAP rogue waves and their statistics
via inverse spectral data[J]. Wave Motion,2016.
[4] YU Yu-xiu, LIU Shu-xue.Random Wave and Its Applications to Engineering[M],Dalian:Dalian
University of Technology Press,2016.
[5] ZHAO Ke, LI Mao-hua, ZHENG JIAN-li, TIAN Guan-nan. 3-D simulation of random ocean
wave based on spectrum of ocean wave[J]. Ship Science and Technology,2014,36(02):37-39.
[6] Mitsuyasu H, et al. Observation of the directional wave spectra of ocean waves using a cloverleaf
buoy[J]. Journal of Physical Oceanography, 1975, 5: 750-760.
[7] Si Liu,Shu-xue Liu,Jin-xuan Li,Zhong-bin Sun. Physical simulation of multidirectional irregular
wave groups[J]. China Ocean Engineering,2012,26(3)
[8] Hong Sik Lee,Sung Duk Kim. A three-dimensional numerical modeling of multidirectional
random wave diffraction by rectangular submarine pits[J]. KSCE Journal of Civil
Engineering,2004,8(4).
[9] MI Xiao-lin, WANG Xiao-bing, HE Xin-yi , XUE Zheng-guo. Simulation and Measurement
Technology of 3-D Sea surface in Laboratory Based on Double Summation
Model[J].GUIDANCE&FUZE,2016,37(02):19-23.
[10] WEI Ying-yi, WU Zhen-sen, LU Yue. Electromagnetic scattering simulation of Kelvin wake in
rough sea surface[J],CHINESE JOURNAL OF RADIO SCIENCE.,2016,(3):438-442.
[11] Biglary, H.,Dehmollaian, M.. RCS of a target above a random rough surface with impedance
boundaries using GO and PO methods[P]. Antennas and Propagation Society International
Symposium (APSURSI), 2012 IEEE,2012.
[12] Joon--Tae Hwang. Radar Cross Section Analysis Using Physical Optics and Its Applications to
Marine Targets[A]. Scientific Research Publishing.Proceedings of 2015 Workshop 2[C].Scientific
Research Publishing,2015:6.
[13] YANG Peng-ju, WU Rui, ZHAO Ye, REN Xin-cheng. Doppler spectrum of low-flying small
target above time-varying sea surface[J]. Journal of Terahertz Science and Electronic Information
Technology,2018,16(04):614-618.
[14] MEISSNER T. WENTZ F J. The complex dielectric constant of pure and sea water from
microwave satellite observations[J]. IEEE Transactions on Geoscience and Remote Sensing,
2004,42(9):1836- 1849
Improvements of the Analysis of Human Activity Using Acceleration
Record of Electrocardiographs
Itaru Kaneko1, Yutaka Yoshida2 and Emi Yuda3, 1&2Nagoya City University, Japan and 3Tohoku
University, Japan
ABSTRACT
The use of the Holter electrocardiograph (Holter ECG) is spreading rapidly. It is a wearable
electrocardiograph that records 24-hour electrocardiograms in built-in flash memory, making it
possible to detect atrial fibrillation (AF) across all-day activities. It is also useful for screening
for diseases other than atrial fibrillation and for improving health. More useful information can be
obtained by combining the electrocardiogram with an analysis of physical activity; for that purpose,
the Holter electrocardiograph is equipped with a heart rate sensor and acceleration sensors. If the
acceleration data are analysed, we can estimate activities of daily life, such as getting up, eating,
walking, using transportation, and sitting. Combined with such activity status, electrocardiographic
data can be expected to be even more useful.
In this study, we investigate the estimation of physical activity. For better analysis, we evaluated
activity estimation using machine learning together with several different feature extractions. In
this report, we show several feature extraction methods and the results of human body activity
analysis using machine learning.
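
As a rough illustration of this pipeline, the sketch below computes simple per-window statistics from
tri-axial acceleration and trains an off-the-shelf classifier. The sampling rate, window length and
feature set are assumptions for illustration, not the feature extractions evaluated in the paper.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def windowed_features(acc, fs=32.0, win_s=10.0):
    """acc: (N, 3) tri-axial acceleration; returns one feature row per window."""
    win = int(fs * win_s)
    rows = []
    for i in range(len(acc) // win):
        seg = acc[i * win:(i + 1) * win]
        mag = np.linalg.norm(seg, axis=1)                    # acceleration magnitude
        rows.append([mag.mean(), mag.std(),                  # overall activity level
                     *seg.mean(axis=0), *seg.std(axis=0)])   # per-axis posture/motion
    return np.array(rows)

# Hypothetical labelled data: X = windowed_features(acc); y = activity labels
# clf = RandomForestClassifier(n_estimators=100).fit(X, y)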
KEYWORDS
Wearable, Biomedical Sensors, Body Activity, Machine Learning
Full Text: https://aircconline.com/sipij/V10N5/10519sipij04.pdf
Signal & Image Processing: An International Journal (SIPIJ)
http://www.airccse.org/journal/sipij/vol10.html
REFERENCES
[1] Yuda E, Hayano J, Menstrual Cycles of Autonomic Functions and Physical Activities, 2018 9th
International Conference on Awareness Science and Technology (iCAST 2018), September 19- 21,
(2018)
[2] Hayano J, Introduction to heart rate variability. In: Iwase S, Hayano J, Orimo S, eds Clinical
assessment of the autonomic nervous system. Japan.
[3] Yuda E, Furukawa Y, Yoshida Y, Hayano J, ALLSTAR Research Group, Association between
Regional Difference in Heart Rate Variability and Inter-prefecture Ranking of Healthy Life
Expectancy: ALLSTAR Big Data Project in Japan, Proceedings of the 7th EAI International
Conference on Big Data Technologies and Applications (BDTA), Chung-ang University, Seoul,
South Korea, November 17-18 (2016)
[4] YOSHIHARA Hiroyuki, gEHR Project: Nation - wide EHR Implementation in JAPAN, Kyoto
Smart city Expo, https://expo.smartcity.kyoto/2016/doc/ksce2016_doc_yoshihara.pdf (captured on
2016)
[5] J. Jaybhay and R. Shastri, A study of speckle noise reduction filters, Signal & Image Processing:
An International Journal (SIPIJ), Vol. 6, 2015.
[6] V. Radhika and G. Padmavathi, Performance of various order statistics filters in impulse and
mixed noise removal for RS images, SIPIJ, Vol. 1, No. 2, December 2010.
Robust Image Watermarking Method using Wavelet Transform
Omar Adwan, The University of Jordan, Jordan
ABSTRACT
In this paper a robust watermarking method operating in the wavelet domain for grayscale digital
images is developed. The method first computes the differences between the watermark and the HH1
sub-band values of the cover image and then embeds these differences in one of the frequency
sub-bands. The results show that embedding the watermark in the LH1 sub-band gave the best results.
The results were evaluated using the RMSE and the PSNR of both the original and the watermarked
image. Although the watermark was recovered perfectly in the ideal case, the addition of Gaussian
noise, or compression of the image using JPEG at a quality below 100, destroys the embedded
watermark. Different experiments were carried out to test the performance of the proposed method
and good results were obtained.
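
A minimal sketch of the embedding step with PyWavelets is given below, assuming a watermark the same
size as the first-level sub-bands and an embedding strength alpha that the abstract does not specify;
sub-band naming conventions vary between libraries, and this is an illustration rather than the
author's implementation.

import numpy as np
import pywt

def embed(cover, watermark, alpha=0.1, wavelet='haar'):
    # First-level 2-D DWT: approximation cA and detail sub-bands (cH, cV, cD);
    # cD is the diagonal (HH1) band, cH/cV the horizontal/vertical detail bands.
    cA, (cH, cV, cD) = pywt.dwt2(cover.astype(float), wavelet)
    diff = watermark.astype(float) - cD    # differences w.r.t. the HH1 band
    cH_marked = cH + alpha * diff          # embed in LH1 (best band in the paper)
    return pywt.idwt2((cA, (cH_marked, cV, cD)), wavelet)

In this non-blind form, extraction recovers the differences as (cH_marked - cH) / alpha using the
original cover image, which is consistent with perfect recovery in the ideal, distortion-free case.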
KEYWORDS
Watermarking, data hiding, wavelet transform, frequency domain
Full Text: https://aircconline.com/sipij/V10N5/10519sipij03.pdf
Signal & Image Processing: An International Journal (SIPIJ)
http://www.airccse.org/journal/sipij/vol10.html
REFERENCES
[1] J. Dugelay and S. Roche, "A survey of current watermarking techniques", in S. Katzenbeisser and
F. Petitcolas (eds), Information hiding techniques for steganography and digital watermarking, Artech
House, USA, pp. 121-148, 2000.
[2] I. Cox, M. Miller, J. Bloom, J. Fridrich and T. Kalker “Digital watermarking and steganography”,
Morgan Kaufman, 2008.
[3] R. Gonzalez, R. Woods, Digital Image Processing, 3rd ed., Prentice Hall, 2008.
[4] M. Kutter and F. Hartung, "Introduction to Watermarking Techniques", in S. Katzenbeisser and F.
Petitcolas (eds), Information hiding techniques for steganography and digital watermarking, Artech
House, USA, pp. 97-120, 2000.
[5] S. Lai and F. Buonaiuti, "Copyright on the internet and watermarking", in S. Katzenbeisser and F.
Petitcolas (eds), Information hiding techniques for steganography and digital watermarking, Artech
House, USA, pp. 191-213, 2000.
[6] I. Cox, M.L. Miller, J.M.G. Linnartz, T. Kalker, “A Review of Watermarking Principles and
Practices” in Digital Signal Processing for Multimedia Systems, K.K. Parhi, T. Nishitani, eds., New
York, New York, Marcel Dekker, Inc., 1999, pp. 461-482.
[7] U. Qidwai and C. Chen, Digital image processing: An algorithmic approach with Matlab, CRC
Press, 2010.
[8] I. Cox, J. Kilian, F. Leighton and T. Shamoon, "Secure spread spectrum watermarking for
multimedia", IEEE Transactions on Image Processing, Vol. 6, No. 12, pp. 1673-1687, 1997.
[9] N. Johnson and S. Katzenbeisser, “A survey of steganographic techniques,” in S. Katzenbeisser
and F. Petitcolas (eds), Information hiding techniques for steganography and digital watermarking,
Artech House, USA, pp. 43-78, 2000.
[10] A.H.M. Jaffar Iqbal Barbhuiya, K. Hemachandran (2013), "Wavelet Transformations & Its
Major Applications In Digital Image Processing", International Journal of Engineering Research &
Technology (IJERT), Vol. 2, Issue 3, March 2013, ISSN: 2278-0181.
[11] Khan, Asifullah; Mirza, Anwar M. (October 2007). "Genetic perceptual shaping: Utilizing cover
image and conceivable attack information during watermark embedding". Information Fusion. 8 (4):
354-365. doi:10.1016/j.inffus.2005.09.007.
[12] C. Shoemaker, Hidden Bits: "A Survey of Techniques for Digital Watermarking",
http://www.vu.union.edu/~shoemakc/watermarking/, 2002. Last access: June, 2012.
[13] M. Weeks, "Digital signal processing using Matlab and Wavelets, 2nd ed.", Jones and Bartlett
publisher, 2011.
[14] D. Kundur and D. Hatzinakos, "A robust digital watermarking method using wavelet-based
fusion", in Proceeding of the International conference on image processing, Santa Barbara, pp. 544-
547, 1997.
[15] X. Xia, C. Boncelet and G. Arce, "Wavelet transform based watermark for digital images",
Optics Express, Vol. 3, No. 12, pp. 497-511, 1998.
[16] O. Adwan, et al., "Simple Image Watermarking Method using Wavelet Transform", Journal of
Basic and Applied Science, Vol. 8, No. 17, pp. 98-101, 2014.
[17] B. Gunjal and S. Mali, "Secured color image watermarking technique in DWT-DCT domain",
International journal of computer science, engineering and information technology, Vol. 1, No. 3, pp.
36-44, 2011.
[18] P. Reddy, M. Prasad and D. Rao, "Robust digital watermarking of images using wavelets",
International journal of computer and electrical engineering, Vol. 1, No. 2, pp. 111-116, 2011.
[19] G. Langelaar, I. Setyawan, R.L. Lagendijk, “Watermarking Digital Image and Video Data”, in
IEEE Signal Processing Magazine, Vol. 17, pp. 20-43, 2000.
[20] Tanya Koohpayeh Araghi, Azizah B T Abdul Manaf (2017), “Evaluation of Digital Image
Watermarking Techniques“, International Conference of Reliable Information and Communication
Technology, IRICT 2017: Recent Trends in Information and Communication Technology pp 361-
368.
[21] A. S. Kapse, Sharayu Belokar, Yogita Gorde, Radha Rane, Shrutika Yewtkar, (2018) "Digital
Image Security Using Digital Watermarking", International Research Journal of Engineering and
Technology (IRJET), Volume: 05, Issue: 03, Mar-2018.
Test-cost-sensitive Convolutional Neural Networks with Expert Branches
Mahdi Naghibi1, Reza Anvari1, Ali Forghani1 and Behrouz Minaei2, 1Malek-Ashtar University of
Technology, Iran and 2Iran University of Science and Technology, Iran
ABSTRACT
It has been proven that deeper convolutional neural networks (CNNs) can achieve better accuracy on
many problems, but this accuracy comes with a high computational cost. Moreover, input instances do
not all have the same difficulty. As a solution to the accuracy-versus-computational-cost dilemma, we
introduce a new test-cost-sensitive method for convolutional neural networks. This method trains a
CNN with a set of auxiliary outputs and expert branches in some middle layers of the network. Based
on the difficulty of the input instance, the expert branches decide whether to use a shallower part
of the network or to go deeper to the end. Each expert branch learns to determine whether the current
network prediction is wrong and whether passing the given instance to deeper layers of the network
would produce the right output; if not, the expert branch stops the computation process. Experimental
results on the standard CIFAR-10 dataset show that the proposed method can train models with lower
test-cost and competitive accuracy in comparison with the basic models.
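
The following PyTorch sketch shows the general early-exit structure the abstract describes. Here a
simple confidence threshold tau stands in for the learned expert-branch decision, and the layer sizes
are arbitrary assumptions; it is an illustration, not the paper's architecture.

import torch
import torch.nn as nn

class BranchyCNN(nn.Module):
    """Toy CNN with an auxiliary exit after the first block."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.block1 = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.exit1 = nn.Sequential(       # shallow auxiliary output
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes))
        self.block2 = nn.Sequential(
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.exit2 = nn.Sequential(       # final, deeper output
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, n_classes))

    def forward(self, x, tau=0.9):
        h = self.block1(x)
        p1 = self.exit1(h).softmax(dim=1)
        if p1.max() >= tau:               # confident enough: stop computation early
            return p1                     # (per-sample gating; batch size 1 here)
        return self.exit2(self.block2(h)).softmax(dim=1)

At training time both exits would be supervised jointly; at test time easy instances leave through
the first exit and only hard ones pay for the deeper block.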
KEYWORDS
Test-Cost-Sensitive Learning; Deep Learning; CNN with Expert Branches; Instance-Based Cost
Full Text: https://aircconline.com/sipij/V10N5/10519sipij02.pdf
Signal & Image Processing: An International Journal (SIPIJ)
http://www.airccse.org/journal/sipij/vol10.html
REFERENCES
[1] S. P. S. Gurjar, S. Gupta, and R. Srivastava, “Automatic Image Annotation Model Using LSTM
Approach,” Signal Image Process. An Int. J., vol. 8, no. 4, pp. 25–37, Aug. 2017.
[2] S. Maity, M. Abdel-Mottaleb, and S. S. As, “Multimodal Biometrics Recognition from Facial
Video via Deep Learning,” in Computer Science & Information Technology (CS & IT), 2017, pp. 67–
75.
[3] K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition,” arXiv
Prepr. arXiv1512.03385, 2015.
[4] D. Kadam, A. R. Madane, K. Kutty, and B. S.V, “Rain Streaks Elimination Using Image
Processing Algorithms,” Signal Image Process. An Int. J., vol. 10, no. 03, pp. 21–32, Jun. 2019.
[5] A. Massaro, V. Vitti, and A. Galiano, “Automatic Image Processing Engine Oriented on Quality
Control of Electronic Boards,” Signal Image Process. An Int. J., vol. 9, no. 2, pp. 01–14, Apr. 2018.
[6] X. Li, Z. Liu, P. Luo, C. Change Loy, and X. Tang, “Not all pixels are equal: Difficulty-aware
semantic segmentation via deep layer cascade,” in Proceedings of the IEEE conference on computer
vision and pattern recognition, 2017, pp. 3193–3202.
[7] M. Naghibi, R. Anvari, A. Forghani, and B. Minaei, “Cost-Sensitive Topical Data Acquisition
from the Web,” Int. J. Data Min. Knowl. Manag. Process, vol. 09, no. 03, pp. 39–56, May 2019.
[8] A. Polyak and L. Wolf, “Channel-Level Acceleration of Deep Face Representations,” Access,
IEEE, vol. 3, pp. 2163–2175, 2015.
[9] A. Lavin and S. Gray, “Fast Algorithms for Convolutional Neural Networks,” in 2016 IEEE
Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 4013–4021.
[10] J. Ba and R. Caruana, “Do deep nets really need to be deep?,” in Advances in neural information
processing systems, 2014, pp. 2654–2662.
[11] A. Romero, N. Ballas, S. E. Kahou, A. Chassang, C. Gatta, and Y. Bengio, “Fitnets: Hints for
thin deep nets,” arXiv Prepr. arXiv1412.6550, 2014.
[12] X. Zhang, J. Zou, K. He, and J. Sun, “Accelerating very deep convolutional networks for
classification and detection,” 2015.
[13] E. L. Denton, W. Zaremba, J. Bruna, Y. LeCun, and R. Fergus, “Exploiting linear structure
within convolutional networks for efficient evaluation,” in Advances in Neural Information
Processing Systems, 2014, pp. 1269–1277.
[14] M. Jaderberg, A. Vedaldi, and A. Zisserman, “Speeding up convolutional neural networks with
low rank expansions,” arXiv Prepr. arXiv1405.3866, 2014.
[15] N. Ström, “Sparse connection and pruning in large dynamic artificial neural networks.,” in
EUROSPEECH, 1997.
[16] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov, “Improving
neural networks by preventing co-adaptation of feature detectors,” arXiv Prepr. arXiv1207.0580,
2012.
[17] N. Vasilache, J. Johnson, M. Mathieu, S. Chintala, S. Piantino, and Y. LeCun, “Fast
convolutional nets with fbfft: A GPU performance evaluation,” arXiv Prepr. arXiv1412.7580, 2014.
[18] M. Mathieu, M. Henaff, and Y. LeCun, “Fast training of convolutional networks through FFTs,”
arXiv Prepr. arXiv1312.5851, 2013.
[19] V. N. Murthy, V. Singh, T. Chen, R. Manmatha, and D. Comaniciu, “Deep decision network for
multi-class image classification,” in Proceedings of the IEEE conference on computer vision and
pattern recognition, 2016, pp. 2240–2248.
[20] V. Vanhoucke, A. Senior, and M. Z. Mao, “Improving the speed of neural networks on CPUs,” in
Proc. Deep Learning and Unsupervised Feature Learning NIPS Workshop, 2011, vol. 1.
[21] A. Toshev and C. Szegedy, “Deeppose: Human pose estimation via deep neural networks,” in
Proceedings of the IEEE conference on computer vision and pattern recognition, 2014, pp. 1653–
1660.
[22] A. Krizhevsky, G. Hinton, and others, “Learning multiple layers of features from tiny images,”
2009.
[23] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking the inception
architecture for computer vision,” in Proceedings of the IEEE conference on computer vision and
pattern recognition, 2016, pp. 2818–2826.
[24] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J.
Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L.
Kaiser, M. Kudlur, J. Levenberg, D. Mane, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J.
Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viegas, O.
Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng, “TensorFlow: LargeScale
Machine Learning on Heterogeneous Distributed Systems,” Mar. 2016.
Free-Reference Image Quality Assessment Framework Using Metrics Fusion and Dimensionality Reduction
Besma Sadou1, Atidel Lahoulou2, Toufik Bouden1, Anderson R. Avila3, Tiago H. Falk3 and Zahid
Akhtar4, 1Non Destructive Testing Laboratory, University of Jijel, Algeria, 2LAOTI laboratory,
University of Jijel, Algeria, 3University of Québec, Canada and 4University of Memphis, USA
ABSTRACT
This paper focuses on no-reference image quality assessment (NR-IQA) metrics. In the literature, a
wide range of algorithms have been proposed to automatically estimate the perceived quality of visual
data. However, most of them are not able to effectively quantify the various degradations and
artifacts that an image may undergo. Merging diverse metrics operating in different information
domains is therefore expected to yield better performance, which is the main theme of the proposed
work. In particular, the metric proposed in this paper builds on three well-known objective NR-IQA
metrics that rely on natural scene statistical attributes from three different domains to extract a
vector of image features. A Singular Value Decomposition (SVD) based dominant eigenvectors method is
then used to select the most relevant image quality attributes. The latter are used as input to a
Relevance Vector Machine (RVM) to derive the overall quality index. Validation experiments are
divided into two groups: in the first group, the learning process (training and test phases) is
applied to a single image quality database, whereas in the second group training and test phases are
separated across two distinct datasets. The obtained results demonstrate that the proposed metric
performs very well in terms of correlation, monotonicity and accuracy in both scenarios.
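
A minimal sketch of the dimensionality-reduction idea is shown below: features are ranked by their
loadings on the dominant right singular vector of the centred feature matrix. The single-eigenvector
scoring and the use of scikit-learn's SVR as a stand-in for the Relevance Vector Machine (which
scikit-learn does not provide) are simplifying assumptions, not the paper's exact procedure.

import numpy as np
from sklearn.svm import SVR

def dominant_eigenvector_selection(X, k=10):
    """Rank columns of X by |loading| on the first right singular vector."""
    Xc = X - X.mean(axis=0)                      # centre the feature matrix
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = np.abs(Vt[0])                       # dominant eigenvector loadings
    return np.argsort(scores)[::-1][:k]          # indices of the top-k features

# Hypothetical usage with feature matrix X and subjective scores (MOS):
# idx = dominant_eigenvector_selection(X_train)
# model = SVR().fit(X_train[:, idx], mos_train)  # SVR stands in for RVM here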
KEYWORDS
Image quality assessment, metrics fusion, Singular Value Decomposition (SVD), dominant
eigenvectors, dimensionality reduction, Relevance Vector Machine (RVM)
Full Text: https://aircconline.com/sipij/V10N5/10519sipij01.pdf
Signal & Image Processing: An International Journal (SIPIJ)
http://www.airccse.org/journal/sipij/vol10.html
REFERENCES
[1] D. Zhang, Y. Ding , N. Zheng, “Nature scene statistics approach based on ICA for no-reference
image quality assessment”, Proceedings of International Workshop on Information and Electronics
Engineering (IWIEE), 29 (2012), 3589- 3593.
[2] A. K. Moorthy, A. C. Bovik, A two-step framework for constructing blind image quality
indices[J], IEEE Signal Process. Lett., 17 (2010), 513-516.
[3] L. Zhang, L. Zhang, A.C. Bovik, A Feature-Enriched Completely Blind Image Quality Evaluator,
IEEE Transactions on Image Processing, 24(8) (2015), 2579- 2591.
[4] M.A. Saad, A.C. Bovik, C. Charrier, A DCT statistics-based blind image quality index, Signal
Process. Lett. 17 (2010) 583–586.
[5] M. A. Saad, A. C. Bovik, C. Charrier, Blind image quality assessment: A natural scene statistics
approach in the DCT domain, IEEE Trans. Image Process., 21 (2012), 3339-3352.
[6] A. Mittal, A.K. Moorthy, A.C. Bovik, No-reference image quality assessment in the spatial
domain, IEEE Trans. Image Process. 21 (2012), 4695 - 4708.
[7] A. Mittal, R. Soundararajan, A. C. Bovik, Making a completely blind image quality analyzer,
IEEE Signal Process. Lett., 20 (2013), 209-212.
[8] N. Kruger, P. Janssen, S. Kalkan, M. Lappe, A. Leonardis, J. Piater, A. Rodriguez-Sanchez, L.
Wiskott, “Deep hierarchies in the primate visual cortex: What can we learn for computer vision?”,
IEEE Trans. Pattern Anal. Mach. Intell., 35 (2013), 1847–1871.
[9] D. J. Felleman, D. C. Van Essen, "Distributed hierarchical processing in the primate cerebral
cortex," Cerebral Cortex, 1 (1991), 1-47.
[10] B. Sadou, A. Lahoulou, T. Bouden, A New No-reference Color Image Quality Assessment
Metric in Wavelet and Gradient Domains, 6th International Conference on Control Engineering and
Information Technologies, Istanbul, Turkey, 25-27 October (2018), 954-959.
[11] Q. Wu, H. Li, F. Meng, K. N. Ngan, S. Zhu, No reference image quality assessment metric via
multidomain structural information and piecewise regression. J. Vis. Commun. Image R., 32(2015),
205– 216.
[12] X. Shang, X. Zhao, Y. Ding, Image quality assessment based on joint quality-aware
representation construction in multiple domains, Journal of Engineering 4 (2018), 1-12.
[13] B. Sadou, A.Lahoulou, T.Bouden, A.R. Avila, T.H. Falk, Z. Akhtar, "Blind Image Quality
Assessment Using Singular Value Decomposition Based Dominant Eigenvectors for Feature
Selection", 5th Int. Conf. on Signal and Image Processing (SIPRO’19), Toronto, Canada, pp. 233-242,
2019.
[14] H. R. Sheikh, Z. Wang, L. Cormack, A. C. Bovik, LIVE Image Quality Assessment Database
Release 2 (2005), http://live.ece.utexas.edu/research/quality
[15] E. Larson, D. M. Chandler, Categorical image quality assessment (CSIQ)
database.http://vision.okstate.edu/?loc=csiq
[16] M. W. Mahoney, P. Drineas, “CUR matrix decompositions for improved data analysis,” in Proc.
the National Academy of Sciences, February 2009.
[17] M.E. Tipping. The relevance vector machine. In Advances in Neural Information Processing
Systems 12, Solla SA, Leen TK, Muller K-R (eds). MIT Press: Cambridge, MA (2000), 652-658.
[18] D. Basak, S. Pal, D.C. Patranabis, Support vector regression, Neural Information Processing –
Letters and Reviews, 11 (2007).
[19] B. Schölkopf, A.J. Smola, Learning with Kernels. MIT press, Cambridge, (2002).
[20] Final VQEG report on the validation of objective quality metrics for video quality assessment:
http://www.its.bldrdoc.gov/vqeg/projects/frtv_phaseI/
[21] H. R. Sheikh, M. F. Sabir, A. C. Bovik, A statistical evaluation of recent full reference image
quality assessment algorithms, IEEE Trans. Image Process., 15 (2006), 3440–3451.
Textons of Irregular Shape to Identify Patterns in the Human Parasite
Eggs
Roxana Flores-Quispe and Yuber Velazco-Paredes, Universidad Nacional de San Agustín de
Arequipa, Perú
ABSTRACT
This paper proposes a method based on the Multitexton Histogram (MTH) descriptor to identify patterns
in images of human parasite eggs of the following species: Ascaris, Uncinarias, Trichuris,
Hymenolepis Nana, Dyphillobothrium-Pacificum, Taenia-Solium, Fasciola Hepática and Enterobius-
Vermicularis. These patterns are represented by textons of irregular shape in the microscopic images.
The proposed method could be used for the diagnosis of parasitic disease and can be especially
helpful in remote places. The method comprises two stages. In the first, a feature extraction
mechanism integrates the advantages of the co-occurrence matrix and histograms to identify irregular
morphological structures in the biological images through textons of irregular shape. In the second
stage, a Support Vector Machine (SVM) is used to classify the different human parasite eggs. The
results were obtained using a dataset of 2053 human parasite egg images, achieving a success rate of
96.82% in the classification. In addition, this research shows that the proposed method also works
with natural images.
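
To make the texton idea concrete, here is a simplified Python sketch in the spirit of the multi-texton
histogram: the image is quantized and 2x2 neighbourhoods are matched against equal-value texton
templates. The full MTH descriptor additionally encodes colour and edge-orientation co-occurrence, so
this is an illustration, not the authors' descriptor.

import numpy as np

def texton_histogram(gray, levels=8):
    """gray: 2-D uint8 image. Counts 2x2 texton patterns per quantized level."""
    q = (gray.astype(int) * levels) // 256       # quantize to `levels` bins
    hist = np.zeros(4 * levels)
    H, W = q.shape
    for y in range(H - 1):
        for x in range(W - 1):
            b = q[y:y + 2, x:x + 2]
            # four templates: equal values along a row, column, diagonal, anti-diagonal
            pairs = [(b[0, 0], b[0, 1]), (b[0, 0], b[1, 0]),
                     (b[0, 0], b[1, 1]), (b[0, 1], b[1, 0])]
            for t, (u, v) in enumerate(pairs):
                if u == v:
                    hist[t * levels + u] += 1
    return hist / max(hist.sum(), 1)             # normalized descriptor

# The resulting vectors would then be classified with an SVM, e.g.
# from sklearn.svm import SVC; clf = SVC(kernel='rbf').fit(X, y)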
KEYWORDS
Patterns, Human Parasite Eggs, Multitexton Histogram descriptor, Textons.
Full Text: https://aircconline.com/sipij/V10N6/10619sipij03.pdf
Signal & Image Processing: An International Journal (SIPIJ)
http://www.airccse.org/journal/sipij/vol10.html
REFERENCES
[1] Avci, Derya & Varol, Asaf (2009) “An expert diagnosis system for classification of human
parasite eggs based on multi-class SVM”, Expert Systems with Applications, Vol. 36, No.1, pp43 -
48.
[2] Chuctaya, Juan & Mena-Chalco, Jesús & Humpire, Gabriel & Rodriguez, Alexander & Beltrán,
Cesar & Patiño, Raquel (2010) "Detección de huevos helmintos mediante plantillas dinámicas",
Conferencia Latinoamericana de Informática - CLEI.
[3] Dogantekin , Esin & Yilmaz , Mustafa & Dogantekin , Akif & Avci, Engin & Sengur, Abdulkadir
(2008). “A robust technique based on invariant moments - ANFIS for recognition of human parasite
eggs in microscopic images”, Expert Syst. Appl., Vol. 35, No. 3, pp728-738.
[4] Flores-Quispe, Roxana & Patiño Escarcina , Raquel Esperanza & Velazco-Paredes, Yuber &
Beltran Castañon , Cesar A. (2014) “ Classification of human parasite eggs based on enhanced
multitexton histogram”, Proceeding of Communications and Computing (COLCOM) IEEE
Colombian Conference on, pp1-6.
[5] Flores-Quispe, Roxana & Velazco-Paredes, Yuber & Patiño Escarcina , Raquel Esperanza &
Beltran Castañon , Cesar A. (2014) “ Automatic identification of human parasite eggs based on
multitexton histogram retrieving the relationships between textons”, In 33rd International Conference
of the Chilean Computer Science Society (SCCC), pp102-106.
[6] Kamarul H. Ghazali, & Hadi, Raafat S. & Mohamed. Zeehaida, (2013) “Automated system for
diagnosis intestinal parasites by computerized image analysis”, Modern Applied Science, Vol.7, No.5,
pp98-114.
[7] Gonzalez & Woods. (2008) “Digital Image Processing”. Prentice Hall, 3rd edition.
[8] Julesz, B. (1981) “Textons, the elements of texture perception, and their interactions”. Nature,
Vol.290, pp91-97.
[9] Julesz, B. (1986) “Texton gradients: the texton theory revisited”. Biological Cybernetics, Vol.54,
pp.245-251.
[10] Liu, G.-H. & Zhang, L. & Hou, Y.-K. & Li, Z.-Y. & Yang, J.-Y. (2010) “Image retrieval based
on multi-texton histogram”, Pattern Recognition, Vol.43 pp2380-2389.
[11] Peixinho, A.Z. & Martins, S.B. & Vargas, J.E. & Falcão, A.X. & Gomes, J.F. & Suzuki, C.T.N.
(2016) "Diagnosis of human intestinal parasites by deep learning". pp 07-112.
[12] Sengür, Abdulkadir & Türkoglu, Ibrahim (2004) "Parasite egg cell classification using invariant
moments", 4th International Symposium on Intelligent Manufacturing Systems, pp98-106.
[13] Wang, Yunling (2017) "Introduction to Parasitic Disease", Springer Netherlands.
[14] Yang, Yoon Seok & Park, Duck Kun & Kim, Hee Chan & Choi, Min-Ho & Chai , Jong-Yil.
(2001) “Automatic identification of human helminth eggs on microscopic fecal specimens using
digital image processing and an artificial neural network”, IEEE Trans. Biomed. Engineering, Vol.48,
No.6, pp718-730.
Deep Learning Based Target Tracking and Classification Directly in
Compressive Measurement for Low Quality Videos
Chiman Kwan1, Bryan Chou1, Jonathan Yang2 and Trac Tran3, 1Applied Research LLC, USA,
2Google, Inc., USA and 3Johns Hopkins University, USA
ABSTRACT
Past research has found that compressive measurements save data storage and bandwidth. However, it
has also been observed that compressive measurements are difficult to use directly for target
tracking and classification without pixel reconstruction, because a Gaussian random measurement
matrix destroys the target location information in the original video frames. This paper summarizes
our research on target tracking and classification directly in the compressive measurement domain.
We focus on one type of compressive measurement based on pixel subsampling: the compressive
measurements are obtained by randomly subsampling the original pixels in the video frames. Even in
this special setting, conventional trackers do not work well. We propose a deep learning approach
that integrates YOLO (You Only Look Once) and ResNet (residual network) for target tracking and
classification in low quality videos. YOLO is used for multiple target detection and ResNet for
target classification. Extensive experiments using optical and mid-wave infrared (MWIR) videos from
the SENSIAC database demonstrate the efficacy of the proposed approach.
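
The measurement model the abstract describes can be sketched in a few lines of Python: a random mask
keeps only a fraction of the pixels of each frame, and the detection and classification networks are
then applied to these frames without reconstruction. The 25% rate and the zero-filling of missing
pixels below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def subsample_frame(frame, rate=0.25):
    """Pixel-subsampling compressive measurement: keep a random
    fraction `rate` of pixels and set the rest to zero."""
    mask = rng.random(frame.shape[:2]) < rate
    out = frame.copy()
    out[~mask] = 0          # missing pixels are left unreconstructed
    return out

# A detector (e.g. YOLO) would then be run on the subsampled frames to localize
# targets, and a classifier (e.g. ResNet) on the detected regions, as in the paper.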
KEYWORDS
Compressive measurements, target tracking, target classification, deep learning, YOLO, ResNet,
optical videos, infrared videos, SENSIAC database
Full Text: https://aircconline.com/sipij/V10N6/10619sipij02.pdf
Signal & Image Processing: An International Journal (SIPIJ)
http://www.airccse.org/journal/sipij/vol10.html
REFERENCES
[1] Li, X., Kwan, C., Mei, G. and Li, B., (2006) “A Generic Approach to Object Matching and
Tracking,” Proc. Third International Conference Image Analysis and Recognition, Lecture Notes in
Computer Science, pp 839-849.
[2] Zhou, J. and Kwan, C., (2018) “Tracking of Multiple Pixel Targets Using Multiple Cameras,” 15th
International Symposium on Neural Networks.
[3] Zhou, J. and Kwan, C., (2018) “Anomaly Detection in Low Quality Traffic Monitoring Videos
Using Optical Flow,” Proc. SPIE 10649, Pattern Recognition and Tracking XXIX, 106490F.
[4] Kwan, C., Zhou, J., Wang, Z. and Li, B., (2018) “Efficient Anomaly Detection Algorithms for
Summarizing Low Quality Videos,” Proc. SPIE 10649, Pattern Recognition and Tracking XXIX,
1064906.
[5] Kwan, C., Yin, J. and Zhou, J., (2018) “The Development of a Video Browsing and Video
Summary Review Tool,” Proc. SPIE 10649, Pattern Recognition and Tracking XXIX, 1064907.
[6] Zhao, Z., Chen, H., Chen, G., Kwan, C. and Li, X. R., (2006) “IMM-LMMSE Filtering Algorithm
for Ballistic Target Tracking with Unknown Ballistic Coefficient,” Proc. SPIE, Volume 6236, Signal
and Data Processing of Small Targets.
[7] Zhao, Z., Chen, H., Chen, G., Kwan, C. and Li, X. R., (2006) “Comparison of several ballistic
target tracking filters,” Proc. American Control Conference, pp 2197-2202.
[8] Candes, E. J. and Wakin, M. B., (2008) “An Introduction to Compressive Sampling,” IEEE Signal
Processing Magazine, vol. 25, no. 2, pp. 21-30.
[9] Kwan, C., Chou, B. and Kwan, L. M., (2018) “A Comparative Study of Conventional and Deep
Learning Target Tracking Algorithms for Low Quality Videos,” 15th International Symposium on
Neural Networks.
[10] Kwan, C., Chou, B., Yang, J. and Tran, T., (2019) “Compressive object tracking and
classification using deep learning for infrared videos,” Pattern Recognition and Tracking XXX
(Conference SI120).
[11] Kwan, C., Chou, B., Yang, J., Rangamani, A., Tran, T., Zhang, J. and Etienne-Cummings, R.,
(2019) “Target Tracking and Classification Directly Using Compressive Sensing Camera for SWIR
videos,” Journal of Signal, Image, and Video Processing.
[12] Kwan, C., Chou, B., Echavarren, A., Budavari, B., Li, J. and Tran, T., (2018) “Compressive
vehicle tracking using deep learning,” IEEE Ubiquitous Computing, Electronics & Mobile
Communication Conference.
[13] Tropp, J. A., (2004) “Greed is good: Algorithmic results for sparse approximation,” IEEE
Transactions on Information Theory, vol. 50, no. 10, pp 2231–2242.
[14] Yang, J. and Zhang, Y., (2011) “Alternating direction algorithms for l1-problems in compressive
sensing,” SIAM journal on scientific computing, 33, pp 250–278.
[15] Dao, M., Kwan, C., Koperski, K. and Marchisio, G., (2017) “A Joint Sparsity Approach to
Tunnel Activity Monitoring Using High Resolution Satellite Images,” IEEE Ubiquitous Computing,
Electronics & Mobile Communication Conference, pp 322-328.
[16] Zhou, J., Ayhan, B., Kwan, C. and Tran, T., (2018) “ATR Performance Improvement Using
Images with Corrupted or Missing Pixels,” Proc. SPIE 10649, Pattern Recognition and Tracking
XXIX, 106490E.
[17] Yang, M. H., Zhang, K. and Zhang, L., (2012) “Real-Time Compressive Tracking,” European
Conference on Computer Vision.
[18] Applied Research LLC, Phase 1 Final Report, 2017.
[19] Kwan, C., Gribben, D. and Tran, T. (2019) “Multiple Human Objects Tracking and Classification
Directly in Compressive Measurement Domain for Long Range Infrared Videos,” IEEE Ubiquitous
Computing, Electronics & Mobile Communication Conference, New York City.
[20] Kwan, C., Chou, B., Yang, J., and Tran, T. (2019) “Deep Learning based Target Tracking and
Classification for Infrared Videos Using Compressive Measurements,” Journal Signal and
Information Processing.
[21] Kwan, C., Gribben, D. and Tran, T. (2019) “Tracking and Classification of Multiple Human
Objects Directly in Compressive Measurement Domain for Low Quality Optical Videos,” IEEE
Ubiquitous Computing, Electronics & Mobile Communication Conference, New York City.
[22] Redmon, J. and Farhadi, A., (2018) “YOLOv3: An Incremental Improvement,” arXiv, April.
[23] Ren, S., He, K., Girshick, R. and Sun, J., (2015) “Faster R-CNN: Towards real-time object
detection with region proposal networks,” Advances in neural information processing systems.
[24] He, K., Zhang, X., Ren, S. and Sun, J., (2016) “Deep Residual Learning for Image
Recognition,” Conference on Computer Vision and Pattern Recognition.
[25] Kwan, C., Chou, B., Yang, J., and Tran, T., (2019) “Target Tracking and Classification Directly
in Compressive Measurement Domain for Low Quality Videos,” Pattern Recognition and Tracking
XXX (Conference SI120).
[26] Stauffer, C. and Grimson, W. E. L., (1999) “Adaptive Background Mixture Models for Real-
Time Tracking,” Conference on Computer Vision and Pattern Recognition, Vol. 2, pp. 246-252.
[27] Bertinetto, L., Valmadre, J., Golodetz, S., Miksik, O. and Torr, P., (2016) “Staple:
Complementary Learners for Real-Time Tracking,” Conference on Computer Vision and Pattern
Recognition.
[28] Kulkarni, K. and Turaga, P. K. (2016) “Reconstruction-Free Action Inference from Compressive
Imagers,” IEEE Trans. Pattern Anal. Mach. Intell. 38(4), pp 772-784.
[29] Lohit, S., Kulkarni, K. and Turaga, P. K. (2016) “Direct inference on compressive measurements
using convolutional neural networks,” Int. Conference on Image Processing, pp 1913-1917.
[30] Adler, A., Elad, M. and Zibulevsky, M. (2016) “Compressed Learning: A Deep Neural Network
Approach,” arXiv:1610.09615v1 [cs.CV].
[31] Xu, Y. and Kelly, K. F. (2019) “Compressed domain image classification using a multi-rate
neural network,” arXiv:1901.09983 [cs.CV].
[32] Kulkarni, K. and Turaga, P. K. (2016) “Fast Integral Image Estimation at 1% measurement rate,”
arXiv:1601.07258v1 [cs.CV].
[33] Wang, Z. W., Vineet, V., Pittaluga, F., Sinha, S. N., Cossairt, O. and Kang, S. B. (2019)
“PrivacyPreserving Action Recognition Using Coded Aperture Videos,” IEEE Conference on
Computer Vision and Pattern Recognition (CVPR) Workshops.
[34] Vargas, H., Fonseca, Y. and Arguello, H. (2018) “Object Detection on Compressive
Measurements using Correlation Filters and Sparse Representation,” 26th European Signal Processing
Conference (EUSIPCO), pp 1960-1964.
[35] Değerli, A., Aslan, S., Yamac, M., Sankur, B. and Gabbouj, M. (2018) “Compressively Sensed
Image Recognition,” 7th European Workshop on Visual Information Processing (EUVIP), Tampere,
pp. 1-6.
[36] Latorre-Carmona, P., Traver, V. J., Sánchez, J. S. and Tajahuerce, E. (2019) “Online
reconstruction-free single-pixel image classification,” Image and Vision Computing, Vol. 86.
[37] Kwan, C., Chou, B., Yang, J., Rangamani, A., Tran, T., Zhang, J. and Etienne-Cummings, R.,
(2019) “Target Tracking and Classification Using Compressive Measurements of MWIR and LWIR
Coded Aperture Cameras,” Journal Signal and Information Processing, vol. 10, no. 3.
[38] Kwan, C., Chou, B., Yang, J., Rangamani, A., Tran, T., Zhang, J. and Etienne-Cummings, R.,
(2019) “Deep Learning based Target Tracking and Classification for Low Quality Videos Using
Coded Aperture Camera,” Sensors, vol. 19, no. 17, 3702
Efficient Method to find Nearest Neighbours in Flocking Behaviours
Omar Adwan, The University of Jordan, Jordan
ABSTRACT
Flocking is a behaviour in which objects move or work together as a group. This behaviour is very
common in nature; think of a flock of flying geese or a school of fish in the sea. Flocking
behaviours have been simulated in areas such as computer animation, graphics and games. However,
simulating the flocking behaviour of a large number of objects in real time is a computationally
intensive task. This intensity is due to the n-squared complexity of the nearest neighbour (NN)
algorithm used to separate objects, where n is the number of objects. This paper proposes an
efficient NN method based on the partial distance approach to enhance the performance of the flocking
algorithm and its application to flocking behaviour. The proposed method was implemented, and the
experimental results showed that it outperformed conventional NN methods when applied to flocking
fish.
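
The core of the partial distance approach can be sketched as follows: while scanning candidates, the
squared distance is accumulated one coordinate at a time, and a candidate is abandoned as soon as the
running sum exceeds the best distance found so far, so most candidates are rejected without computing
their full distance. This is an illustrative Python sketch, not the paper's implementation.

import numpy as np

def nearest_partial(query, points):
    """Brute-force NN with partial-distance early rejection."""
    best_i, best_d = -1, np.inf
    for i, p in enumerate(points):
        d = 0.0
        for a, b in zip(query, p):
            d += (a - b) ** 2
            if d >= best_d:       # partial sum already too large: reject candidate
                break
        else:                     # loop completed: new nearest neighbour found
            best_i, best_d = i, d
    return best_i, best_d

# In a flocking update, each boid would call nearest_partial (or a k-NN variant)
# against the positions of the other boids to apply the separation rule.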
KEYWORDS
Flocking behaviours, nearest neighbours, partial distance approach, computer graphics and games
Full Text: https://aircconline.com/sipij/V10N6/10619sipij01.pdf
Signal & Image Processing: An International Journal (SIPIJ)
http://www.airccse.org/journal/sipij/vol10.html
The Most Excellent Way | 1 Corinthians 13The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13
 
Sanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfSanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdf
 
Arihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdfArihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdf
 
Introduction to Nonprofit Accounting: The Basics
Introduction to Nonprofit Accounting: The BasicsIntroduction to Nonprofit Accounting: The Basics
Introduction to Nonprofit Accounting: The Basics
 
Interactive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationInteractive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communication
 
Measures of Dispersion and Variability: Range, QD, AD and SD
Measures of Dispersion and Variability: Range, QD, AD and SDMeasures of Dispersion and Variability: Range, QD, AD and SD
Measures of Dispersion and Variability: Range, QD, AD and SD
 
Nutritional Needs Presentation - HLTH 104
Nutritional Needs Presentation - HLTH 104Nutritional Needs Presentation - HLTH 104
Nutritional Needs Presentation - HLTH 104
 
fourth grading exam for kindergarten in writing
fourth grading exam for kindergarten in writingfourth grading exam for kindergarten in writing
fourth grading exam for kindergarten in writing
 
Russian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in Delhi
Russian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in DelhiRussian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in Delhi
Russian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in Delhi
 
The basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptxThe basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptx
 

Recent articles published in Signal & Image Processing: An International Journal (SIPIJ)

CHARACTERIZING HUMAN BEHAVIOURS USING STATISTICAL MOTION DESCRIPTOR

Eissa Jaber Alreshidi1 and Mohammad Bilal2, 1University of Hail, Saudi Arabia, 2Comsats University, Pakistan

ABSTRACT

Identifying human behaviors is a challenging research problem because of the complexity and variation of appearances and postures, camera settings, and view angles. In this paper, we address the problem of human behavior identification by introducing a novel motion descriptor based on statistical features. The method first divides the video into N temporal segments. For each segment, we compute dense optical flow, which provides instantaneous velocity information for all pixels. From the flow we compute a Histogram of Optical Flow (HOOF), weighted by the flow norm and quantized into 32 bins, and then derive statistical features from the HOOF, forming a 192-dimensional descriptor vector. Finally, we train a non-linear multi-class SVM that classifies different human behaviors with an accuracy of 72.1%. We evaluate the method on a publicly available human action dataset. Experimental results show that the proposed method outperforms state-of-the-art methods.

KEYWORDS

Support vector machine, motion descriptor, features, human behaviours

Full Text: https://aircconline.com/sipij/V10N1/10119sipij02.pdf

Signal & Image Processing: An International Journal (SIPIJ)
http://www.airccse.org/journal/sipij/vol10.html
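
As a rough illustration of the pipeline described above, the sketch below computes a magnitude-weighted 32-bin HOOF per frame pair and aggregates six per-bin statistics over a segment (6 x 32 = 192 features). The choice of those six statistics, the Farneback flow parameters, and all names are assumptions for illustration, not the authors' implementation.

    import cv2
    import numpy as np

    def hoof_descriptor(frames, n_bins=32):
        """Magnitude-weighted Histogram of Optical Flow over one temporal segment."""
        hists = []
        for prev, curr in zip(frames[:-1], frames[1:]):
            flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
            # Quantize orientations into n_bins; each vote is weighted by the flow norm.
            h, _ = np.histogram(ang, bins=n_bins, range=(0, 2 * np.pi), weights=mag)
            hists.append(h / (h.sum() + 1e-8))
        hists = np.array(hists)
        # Six statistics per bin over the segment -> 6 x 32 = 192 dimensions (assumed split).
        stats = [hists.mean(0), hists.std(0), hists.min(0),
                 hists.max(0), np.median(hists, 0), hists.sum(0)]
        return np.concatenate(stats)

    # Synthetic grayscale frames stand in for one temporal video segment.
    frames = [np.random.randint(0, 255, (120, 160), np.uint8) for _ in range(10)]
    descriptor = hoof_descriptor(frames)    # shape: (192,)

A non-linear multi-class SVM (for example sklearn.svm.SVC with an RBF kernel) trained on one such vector per segment would complete the pipeline.
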
Compression Algorithm Selection for Multispectral Mastcam Images

Chiman Kwan, Jude Larkin, Bence Budavari, and Bryan Chou, Applied Research, LLC, USA

ABSTRACT

The two mast cameras (Mastcam) onboard the Mars rover Curiosity are multispectral imagers with nine bands in each camera. Currently, the images are compressed losslessly using JPEG, which achieves only two to three times compression. We present a two-step approach to compressing multispectral Mastcam images. First, we apply principal component analysis (PCA) to compress the nine bands into three or six bands; this step optimally compresses the 9-band images by exploiting the spectral correlation between bands. Second, several well-known image compression codecs, such as JPEG, JPEG-2000 (J2K), X264, and X265, are applied to compress the 3-band or 6-band images produced by PCA. The performance of the different algorithms was assessed using four well-known performance metrics, and extensive experiments using actual Mastcam images demonstrate the proposed framework. We observed that perceptually lossless compression can be achieved at a 10:1 compression ratio. In particular, the combination of PCA and X265 gains at least 5 dB in peak signal-to-noise ratio (PSNR) over JPEG at a 10:1 compression ratio.

KEYWORDS

Perceptually lossless compression; Mastcam images; multispectral images; JPEG; JPEG-2000; X264; X265

Full Text: https://aircconline.com/sipij/V10N1/10119sipij01.pdf

Signal & Image Processing: An International Journal (SIPIJ)
http://www.airccse.org/journal/sipij/vol10.html
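
The spectral-compression step lends itself to a short sketch. The code below is a minimal illustration, not the authors' implementation, of projecting a 9-band cube onto its first principal components and inverting the projection after decoding; it assumes the cube is an H x W x 9 NumPy array, and the codec stage (JPEG, J2K, X264, X265) that sits between the two functions is omitted.

    import numpy as np
    from sklearn.decomposition import PCA

    def pca_compress_bands(cube, n_keep=3):
        """Project an (H, W, 9) multispectral cube onto its first n_keep principal bands."""
        h, w, b = cube.shape
        pixels = cube.reshape(-1, b).astype(np.float64)
        pca = PCA(n_components=n_keep).fit(pixels)     # learns the spectral correlation
        reduced = pca.transform(pixels).reshape(h, w, n_keep)
        return reduced, pca                            # keep pca to invert after decoding

    def pca_reconstruct(reduced, pca):
        h, w, k = reduced.shape
        return pca.inverse_transform(reduced.reshape(-1, k)).reshape(h, w, -1)

    cube = np.random.rand(64, 64, 9)                   # stand-in for a real Mastcam cube
    reduced, pca = pca_compress_bands(cube, n_keep=3)
    approx = pca_reconstruct(reduced, pca)             # 9-band approximation

Fitting PCA per image keeps the projection invertible on the decoder side as long as the small component matrix travels with the bitstream.
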
Perceptually Lossless Compression with Error Concealment for Periscope and Sonar Videos

Chiman Kwan1, Jude Larkin1, Bence Budavari1, Eric Shang1, and Trac D. Tran2, 1Applied Research LLC, USA and 2The Johns Hopkins University, USA

ABSTRACT

We present a video compression framework with two key features. First, we aim at achieving perceptually lossless compression for low-frame-rate (6 fps) videos. Four well-known video codecs from the literature were evaluated, and their performance was assessed using four well-known performance metrics. Second, we investigated the impact of error concealment algorithms for handling pixels corrupted by transmission errors in communication channels. Extensive experiments using actual videos demonstrate the proposed framework.

KEYWORDS

Perceptually lossless compression; error recovery; maritime and sonar videos

Full Text: https://aircconline.com/sipij/V10N2/10219sipij01.pdf

Signal & Image Processing: An International Journal (SIPIJ)
http://www.airccse.org/journal/sipij/vol10.html
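
The abstract does not name its four metrics, but peak signal-to-noise ratio is a standard choice in such codec comparisons (and is the metric quoted for the Mastcam study above). A minimal implementation, for reference:

    import numpy as np

    def psnr(ref, test, peak=255.0):
        """Peak signal-to-noise ratio in dB between a reference and a decoded frame."""
        mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
        return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
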
Application of A Computer Vision Method for Soiling Recognition in Photovoltaic Modules for Autonomous Cleaning Robots

Tatiani Pivem1, Felipe de Oliveira de Araujo2, Laura de Oliveira de Araujo2, Gustavo Spontoni de Oliveira2, 1Federal University of Mato Grosso do Sul - UFMS, Brazil and 2Nexsolar Energy Solutions, Brazil

ABSTRACT

It is well known that soiling can reduce the generation efficiency of PV systems; according to the literature, the resulting loss of energy production can reach up to 50% in some cases. The industry offers various types of cleaning robots that can substitute human action, reduce cleaning cost, operate in places where access is difficult, and significantly increase the yield of the systems. In this paper we present an application of a computer vision method for soiling recognition in photovoltaic modules for autonomous cleaning robots. Our method extends classic computer vision algorithms such as Region Growing and the Hough transform, and we adopt a pre-processing technique based on Top-Hat and edge detection filters. We have performed a set of experiments to test and validate this method. The article concludes that the developed method can bring more intelligence to photovoltaic cleaning robots.

KEYWORDS

Solar Panel, Soiling Identification, Cartesian Robots, Autonomous Robots, Computer Vision

Full Text: https://aircconline.com/sipij/V10N3/10319sipij05.pdf

Signal & Image Processing: An International Journal (SIPIJ)
http://www.airccse.org/journal/sipij/vol10.html
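
A minimal OpenCV sketch of the pre-processing chain named above (Top-Hat, edge detection, Hough). The kernel size, Canny and Hough parameters, the input path, and the seed-mask heuristic are illustrative assumptions; the region-growing step itself is only indicated.

    import cv2
    import numpy as np

    img = cv2.imread('panel.jpg', cv2.IMREAD_GRAYSCALE)   # hypothetical module photo

    # White top-hat: keeps bright structures smaller than the structuring element,
    # which emphasizes soiling spots against the darker module surface.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    tophat = cv2.morphologyEx(img, cv2.MORPH_TOPHAT, kernel)

    # Edge detection on the enhanced image, then a Hough transform to recover the
    # straight cell borders so soiling can be localized per cell.
    edges = cv2.Canny(tophat, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=100, maxLineGap=10)

    # Simple intensity seed mask from which a region-growing step could expand.
    seeds = tophat > tophat.mean() + 2 * tophat.std()
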
A Novel Data Dictionary Learning for Leaf Recognition

Shaimaa Ibrahem1, Yasser M. Abd El-Latif2 and Naglaa M. Reda2, 1Higher Institute for Computer Sciences and Information System, Egypt and 2Ain Shams University, Egypt

ABSTRACT

Automatic leaf recognition via image processing has become greatly important for a number of professionals, such as botanical taxonomists, environmental protectors, and foresters. Learning an over-complete leaf dictionary is an essential step for leaf image recognition, but large image dimensions and large numbers of training images stand in the way of building a fast and complete leaf data dictionary. In this work, an efficient approach is applied to construct an over-complete leaf data dictionary from a set of large images based on sparse representation. In the proposed method, a new cropped-contour method is used to crop the training images. The experiments evaluate the correlation between the sparse representation and the data dictionary, with a focus on computing time.

KEYWORDS

Leaf image recognition, Dictionary learning, Sparse representation, Online Dictionary Learning

Full Text: https://aircconline.com/sipij/V10N3/10319sipij04.pdf

Signal & Image Processing: An International Journal (SIPIJ)
http://www.airccse.org/journal/sipij/vol10.html
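
Online dictionary learning of the kind named in the keywords is available off the shelf; below is a minimal sketch using scikit-learn's MiniBatchDictionaryLearning on patches cropped from a leaf image. The patch size, dictionary size, and the random stand-in image are assumptions, not the paper's settings.

    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning
    from sklearn.feature_extraction.image import extract_patches_2d

    leaf_image = np.random.rand(128, 128)            # stand-in for a cropped leaf image
    patches = extract_patches_2d(leaf_image, (8, 8), max_patches=2000, random_state=0)
    X = patches.reshape(len(patches), -1)
    X -= X.mean(axis=1, keepdims=True)               # remove per-patch DC component

    dico = MiniBatchDictionaryLearning(n_components=256, alpha=1.0,
                                       batch_size=64, random_state=0)
    codes = dico.fit(X).transform(X)                 # sparse codes of the training patches
    D = dico.components_                             # the learned over-complete dictionary

With 256 atoms for 64-dimensional patches the dictionary is four times over-complete, which is what lets each patch be represented by only a few active atoms.
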
Rain Streaks Elimination Using Image Processing Algorithms

Dinesh Kadam1, Amol R. Madane2, Krishnan Kutty2 and S. V. Bonde1, 1SGGSIET, India and 2Tata Consultancy Services Ltd., India

ABSTRACT

The paper addresses the problem of rain streak removal from videos. Although rain streak removal is important and has attracted considerable research, robust real-time algorithms are still unavailable in the market. The difficulty arises from low visibility, poor illumination, and the presence of moving cameras and objects. The challenge that plagues rain streak recovery is detecting the streaks and replacing them with values that restore the original scene. In this paper, we discuss the use of photometric and chromatic properties for rain detection, while an updated Gaussian Mixture Model (updated GMM) detects moving objects. The algorithm detects rain streaks in videos and, using spatial and temporal properties, replaces them with estimated values close to the originals.

KEYWORDS

Dynamic Scene, Edge Filters, Gaussian Mixture Model (GMM), Rain Streaks Removal, Scene Recovery, Video Deraining

Full Text: https://aircconline.com/sipij/V10N3/10319sipij03.pdf

Signal & Image Processing: An International Journal (SIPIJ)
http://www.airccse.org/journal/sipij/vol10.html
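
As a rough sketch of how GMM motion detection and temporal information can combine, the code below flags pixels that are briefly brighter than their temporal median (a photometric rain cue) outside GMM-detected moving regions, and replaces them with the median estimate. The temporal-median replacement is a simpler stand-in for the paper's spatio-temporal estimation, and the file name, brightness threshold, and window length are assumptions.

    import cv2
    import numpy as np

    cap = cv2.VideoCapture('rainy.mp4')                 # hypothetical input video
    bg = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16)

    buffer = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        moving = bg.apply(frame)                        # GMM foreground: moving objects
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        buffer = (buffer + [gray])[-3:]                 # keep a 3-frame temporal window
        if len(buffer) < 3:
            continue
        # Photometric cue: a rain streak makes a pixel briefly brighter than its
        # temporal neighbourhood; estimate the clean value with a temporal median.
        med = np.median(np.stack(buffer), axis=0).astype(np.uint8)
        bright = gray.astype(np.int16) - med.astype(np.int16) > 10
        streaks = bright & (moving == 0)                # exclude genuine moving objects
        restored = gray.copy()
        restored[streaks] = med[streaks]                # replace streaks with the estimate
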
Method for the Detection of Mixed QPSK Signals Based on the Calculation of Fourth-Order Cumulants

Vasyl Semenov, Pavel Omelchenko and Oleh Kruhlyk, Delta SPE LLC, Ukraine

ABSTRACT

In this paper we propose a method for the detection of Carrier-in-Carrier signals using QPSK modulations. The method is based on the calculation of fourth-order cumulants. A threshold value for the decision rule is established following a methodology based on the Receiver Operating Characteristic (ROC) curve. The proposed method was found to correctly detect the sum of QPSK signals over a wide range of signal-to-noise ratios and for different bandwidths of the mixed signals. The obtained results indicate the high efficiency of the proposed method, and its advantage over the "radiuses" method is also shown.

KEYWORDS

Carrier-in-Carrier, Cumulants, QPSK, Receiver Operating Curve

Full Text: https://aircconline.com/sipij/V10N3/10319sipij02.pdf

Signal & Image Processing: An International Journal (SIPIJ)
http://www.airccse.org/journal/sipij/vol10.html
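
The abstract does not reproduce its estimator, but a standard fourth-order cumulant for a zero-mean complex signal x is C42 = E[|x|^4] - |E[x^2]|^2 - 2(E[|x|^2])^2. The sketch below, with a power-squared normalization chosen for illustration, shows the gap a ROC-derived threshold can exploit: the normalized C42 of a single unit-power QPSK signal is -1, while for an equal-power sum of two independent QPSK signals it rises to -0.5, since cumulants of independent signals add.

    import numpy as np

    def c42_normalized(x):
        """Fourth-order cumulant C42 of a complex signal, normalized by squared power."""
        x = x - x.mean()
        c20 = np.mean(x ** 2)
        c21 = np.mean(np.abs(x) ** 2)
        c42 = np.mean(np.abs(x) ** 4) - np.abs(c20) ** 2 - 2 * c21 ** 2
        return c42 / c21 ** 2

    def qpsk(n):
        """Unit-power QPSK symbol stream."""
        return (np.random.choice([1, -1], n) + 1j * np.random.choice([1, -1], n)) / np.sqrt(2)

    single = qpsk(100_000)
    mixed = (qpsk(100_000) + qpsk(100_000)) / np.sqrt(2)   # equal-power Carrier-in-Carrier mix
    print(c42_normalized(single))    # close to -1.0
    print(c42_normalized(mixed))     # close to -0.5
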
Machine-Learning Estimation of Body Posture and Physical Activity by Wearable Acceleration and Heartbeat Sensors

Yutaka Yoshida2, Emi Yuda3,1, Kento Yamamoto4, Yutaka Miura5 and Junichiro Hayano1, 1Nagoya City University Graduate School of Medical Science, Japan, 2Nagoya City University Graduate School of Design and Architecture, Japan, 3Tohoku University Graduate School of Engineering, Japan, 4University of Tsukuba Graduate School of Comprehensive Human Sciences, Japan and 5Shigakkan University, Japan

ABSTRACT

We aimed to develop a method for estimating body posture and physical activity from the acceleration signals of a Holter electrocardiographic (ECG) recorder with a built-in accelerometer. In healthy young subjects, triaxial acceleration and ECG signals were recorded with the Holter ECG recorder attached to the chest wall. During the recording, subjects randomly took eight postures, including supine, prone, left and right recumbent, standing, sitting in a reclining chair, and sitting in chairs with and without a backrest, and performed slow and fast walking. Machine learning (Random Forest) was performed on the acceleration and ECG variables. The best discrimination model was obtained when the maximum values and standard deviations of the accelerations in the three axes and the mean R-R interval were used as features. The overall discrimination accuracy was 79.2% (62.6-90.9%). Supine, prone, left recumbent, and slow and fast walking were discriminated with >80% accuracy, although sitting and standing positions were not discriminated by this method.

KEYWORDS

Accelerometer, Holter ECG, Posture, Activity, Machine learning, Random Forest, R-R interval

Full Text: https://aircconline.com/sipij/V10N3/10319sipij01.pdf

Signal & Image Processing: An International Journal (SIPIJ)
http://www.airccse.org/journal/sipij/vol10.html
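
A minimal sketch of the reported best model: the maximum and standard deviation of each acceleration axis plus the mean R-R interval (seven features) fed to a Random Forest. The epoching, sampling rates, forest size, and the random stand-in data are assumptions for illustration.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def features(ax, ay, az, rr):
        """Max and SD of each acceleration axis plus mean R-R interval (7 features)."""
        return [ax.max(), ay.max(), az.max(), ax.std(), ay.std(), az.std(), rr.mean()]

    rng = np.random.default_rng(0)
    # Random stand-ins for per-epoch triaxial acceleration and R-R interval series.
    epochs = [(rng.normal(size=250), rng.normal(size=250),
               rng.normal(size=250), rng.normal(0.8, 0.05, size=60))
              for _ in range(100)]
    labels = rng.integers(0, 10, size=100)        # 8 postures + 2 walking speeds

    X = np.array([features(*e) for e in epochs])
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
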
  • 28. Ransac Based Motion Compensated Restoration for Colonoscopy Images Nidhal Azawi and John Gauch, University of Arkansas, USA ABSTRACT Colonoscopy is a procedure that is widely used to detect abnormalities in the colon. Colonoscopy images suffer from many problems that make it hard for a doctor to examine a patient's colon. Unfortunately, with current technology, there is no way for doctors to know whether the whole colon surface has been examined. We have developed a method that uses RANSAC-based image registration to align sequences of any length in a colonoscopy video and restores each frame of the video using information from these aligned images. We propose two methods. The first method uses a deep neural network to classify images as informative or non-informative; the classification result is used as a preprocessing step for the alignment method, and we also propose a visualization structure for the classification results. The second method uses the alignment itself to classify alignments as good or bad based on two factors: the accumulated error, and three checking steps that test the pairwise alignment error together with the status of the geometric transform. The second method was able to align long sequences. KEYWORDS Visualization, RANSAC, sequence length, geometry transform, classification, Colonoscopy. Full Text: https://aircconline.com/sipij/V10N4/10419sipij02.pdf Signal & Image Processing: An International Journal (SIPIJ) http://www.airccse.org/journal/sipij/vol10.html
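For readers unfamiliar with RANSAC-based registration, here is a minimal sketch of aligning two consecutive frames, assuming OpenCV; the ORB features, thresholds and the per-pair error check are illustrative stand-ins, not the authors' exact pipeline.

```python
# Sketch: RANSAC-based registration of two consecutive video frames.
import cv2
import numpy as np

def align_pair(frame_a, frame_b, ransac_thresh=3.0):
    orb = cv2.ORB_create(1000)
    kp_a, des_a = orb.detectAndCompute(frame_a, None)
    kp_b, des_b = orb.detectAndCompute(frame_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)
    src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC rejects outlier correspondences while fitting the homography
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, ransac_thresh)
    pair_error = 1.0 - inlier_mask.mean()   # one possible per-pair error check
    return H, pair_error
```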
  • 29. REFERENCES [1] N. Azawi, J. Gauch, “Automatic Method for Classification of Informative and Noninformative Images in Colonoscopy Video”, Int. Conf. on Medical Image Processing and Analysis (ICMIPA), Vancouver, Canada, August 2018. [2] N. Azawi and J. Gauch, “Motion Compensated Restoration of Colonoscopy Video,” pp. 243–256, 2019. [3] L. Dung, C. Huang, and Y. Wu, “Implementation of RANSAC Algorithm for Feature-Based Image Registration,” Journal of Computer and Communications, pp. 46–50, 2013. [4] F.P.M. Oliveira, J.M.R.S. Tavares. Medical Image Registration: a Review. Computer Methods in Biomechanics and Biomedical Engineering 17(2):73-93, 2014. [5] S. B. Kang, M. Uyttendaele, S. Winder, and R. Szeliski, “High dynamic range video,” ACM Trans. Graph., vol. 23, no. 3, pp. 319–325, 2003. [6] F. M. Candocia, “On the Featureless Registration of Differently Exposed Images,” in Proc. Int. Conf. Imaging Science, Systems & Technology, Las Vegas, NV, USA, Jun. 2003, vol. I, pp. 163–169. [7] Hossain and B. K. Gunturk, “High Dynamic Range Imaging of Non-Static Scenes,” in Proc. SPIE Digital Photography VII, 2011, vol. 7876. [8] H. Q. Luong, B. Goossens, A. Pizurica, and W. Philips, “Joint photometric and geometric image registration in the total least square sense,” Pattern Recognition Lett., vol. 32, no. 15, pp. 2061–2067, 2011. [9] O. El Meslouhi, M. Kardouchi, H. Allali, T. Gadi, and Y. A. Benkaddour, “Automatic detection and inpainting of specular reflections for colposcopic images,” Open Comput. Sci., vol. 1, no. 3, pp. 341–354, 2011. [10] D. G. Lowe, "Object recognition from local scale-invariant features," Proc. of the Int. Conf. on Computer Vision, pp. 1150–1157, 1999. [11] B. Zitova and J. Flusser, “Image registration methods: A survey,” Image Vis. Comput., vol. 21, pp. 977–1000, 2003. [12] S. Oldridge, G. Miller, and S. Fels, “Mapping the problem space of image registration,” in Proc. Can. Conf. Computer and Robot Vision, St. John’s, NF, Canada, May 2011, pp. 309–315. [13] M. Tico and K. Pulli, “Robust image registration for multi-frame mobile applications,” in Proc. Asilomar Conf. Signals, Systems & Computers, Pacific Grove, CA, USA, 2010, pp. 860–864. [14] S. Wu, Z. Li, J. Zheng, and Z. Zhu, “Exposure-robust alignment of differently exposed images,” IEEE Signal Process. Lett., vol. 21, no. 7, pp. 885–889, 2014. [15] S. Wu, Z. Li, J. Zheng, and Z. Zhu, “Exposure-robust alignment of differently exposed images,” IEEE Signal Process. Lett., vol. 21, no. 7, pp. 885–889, 2014. [16] S. Wei and S. Lai, “Robust and efficient image alignment based on relative gradient matching,” IEEE Trans. Image Process., vol. 15, no. 10, pp. 2936–43, 2006. [17] C. Wu, B. Clipp, X. Li, J. M. Frahm, and M. Pollefeys, “3D model matching with
  • 30. viewpoint-invariant patches (VIP),” 26th IEEE Conf. Comput. Vis. Pattern Recognition, CVPR, pp. 1–8, 2008. [18] P. A. Freeborough and N. C. Fox, “Modelling Brain Deformations in Alzheimer Disease by Fluid Registration of Serial 3D MR Images”, vol. 22. 1998. [19] D. Leow, A. D. Klunder, C. R. Jack, A. W. Toga, A. M. Dale, M. A. Bernstein, P. J. Britson, J. L. Gunter, C. P. Ward, J. L. Whitwell, B. J. Borowski, A. S. Fleisher, N. C. Fox, D. Harvey, J. Kornak, N. Schuff, C. Studholme, G. E. Alexander, M. W. Weiner, and P. M. Thompson, “Longitudinal stability of MRI for mapping brain change using tensor-based morphometry,” Neuroimage, vol. 31, no. 2, pp. 627–640, 2006. [20] K. A. Ganser, H. Dickhaus, R. Metzner, and C. R. Wirtz, “A deformable digital brain atlas system according to Talairach and Tournoux,” Med. Image Anal., vol. 8, no. 1, pp. 3–22, 2004. [21] X. Huang, J. Ren, G. Guiraudon, D. Boughner and T. M. Peters, "Rapid Dynamic Image Registration of the Beating Heart for Diagnosis and Surgical Navigation," in IEEE Transactions on Medical Imaging, vol. 28, no. 11, pp. 1802-1814, Nov. 2009. doi:10.1109/TMI.2009.2024684. [22] R. Redzuwan, N. A. M. Radzi, N. M. Din, and I. S. Mustafa, “Affine versus projective transformation for SIFT and RANSAC image matching methods,” 2015 IEEE Int. Conf. Signal Image Process. Appl., pp. 447–451, 2015. [23] O. El Meslouhi, M. Kardouchi, H. Allali, T. Gadi, and Y. A. Benkaddour, “Automatic detection and inpainting of specular reflections for colposcopic images,” Open Comput. Sci., vol. 1, no. 3, pp. 341–354, 2011.
  • 31. The Study on Electromagnetic Scattering Characteristics of Jonswap Spectrum Sea Surface Xiaolin Mi, Xiaobing Wang, Xinyi He and Fei Dai, Science and Technology on Electromagnetic Scattering Laboratory, China ABSTRACT The JONSWAP spectrum sea surface is mainly determined by parameters such as the wind speed, the fetch length and the peak enhancement factor. To study electromagnetic scattering from a JONSWAP spectrum sea surface, we need to determine these parameters. In this paper, we use the double summation model to generate a multi-directional irregular rough JONSWAP sea surface and analyze the influence of the distribution concentration parameter and the peak enhancement factor on the rough sea surface model. We then use the physical optics method to analyze how the average backward scattering coefficient of the JONSWAP spectrum sea surface changes with different distribution concentration parameters and peak enhancement factors. The simulation results show that the influence of the peak enhancement factor on the average backward scattering coefficient of the sea surface is less than 1 dB, whereas the influence of the distribution concentration parameter is more than 5 dB. Therefore, when we study the electromagnetic scattering of the JONSWAP spectrum sea surface, the peak enhancement factor can be taken as its mean value, but the distribution concentration parameter has to be determined by the wave growth state. KEYWORDS JONSWAP spectrum, multidirectional wave, wave pool, the peak enhancement factor, electromagnetic scattering Full Text: https://aircconline.com/sipij/V10N4/10419sipij01.pdf Signal & Image Processing: An International Journal (SIPIJ) http://www.airccse.org/journal/sipij/vol10.html
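The following sketch evaluates a one-dimensional JONSWAP frequency spectrum to show how the wind speed, fetch length and peak enhancement factor enter the model. It uses the standard empirical constants, but the parameterization is an illustrative assumption and is not taken from the paper.

```python
# Sketch: 1-D JONSWAP frequency spectrum S(f) for wind speed U and fetch F.
import numpy as np

def jonswap(f, U=10.0, F=8e4, gamma=3.3, g=9.81):
    """f: frequency array (Hz); U: wind speed (m/s); F: fetch (m)."""
    x = g * F / U**2                      # dimensionless fetch
    alpha = 0.076 * x**(-0.22)            # Phillips constant
    fp = 3.5 * (g / U) * x**(-0.33)       # peak frequency
    sigma = np.where(f <= fp, 0.07, 0.09)
    r = np.exp(-((f - fp)**2) / (2 * sigma**2 * fp**2))
    pm = alpha * g**2 * (2*np.pi)**-4 * f**-5 * np.exp(-1.25 * (fp/f)**4)
    return pm * gamma**r                  # peak enhancement scales the PM shape

f = np.linspace(0.05, 1.0, 500)
S = jonswap(f, gamma=3.3)                 # spectrum values for each frequency
```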
  • 32. REFERENCES [1] Hasselmann K, Barnett T P, Bouws E, et al. Measurements of wind-wave growth and swell decay during the Joint North Sea Wave Project (JONSWAP) [J]. Ergänzungsheft zur Deutschen Hydrographischen Zeitschrift Reihe A8(Suppl.),1973,12:95. [2] Estimation of JONSWAP Spectral Parameters by Using Measured Wave Data[J].China Ocean Engineering,1995(03):275-282. [3] Annalisa Calini,Constance M. Schober. Characterizing JONSWAP rogue waves and their statistics via inverse spectral data[J]. Wave Motion,2016. [4] YU Yu-xiu, LIU Shu-xue.Random Wave and Its Applications to Engineering[M],Dalian:Dalian University of Technology Press,2016. [5] ZHAO Ke, LI Mao-hua, ZHENG JIAN-li, TIAN Guan-nan. 3-D simulation of random ocean wave based on spectrum of ocean wave[J]. Ship Science and Technology,2014,36(02):37-39. [6] Mitsuyasu H, et al. Observation of the directional wave spectra of ocean waves using a cloverleaf buoy.[J].Physical Oceanography,1975,5:750-760. [7] Si Liu,Shu-xue Liu,Jin-xuan Li,Zhong-bin Sun. Physical simulation of multidirectional irregular wave groups[J]. China Ocean Engineering,2012,26(3) [8] Hong Sik Lee,Sung Duk Kim. A three-dimensional numerical modeling of multidirectional random wave diffraction by rectangular submarine pits[J]. KSCE Journal of Civil Engineering,2004,8(4). [9] MI Xiao-lin, WANG Xiao-bing, HE Xin-yi, XUE Zheng-guo. Simulation and Measurement Technology of 3-D Sea surface in Laboratory Based on Double Summation Model[J].GUIDANCE&FUZE,2016,37(02):19-23. [10] WEI Ying-yi, WU Zhen-sen, LU Yue. Electromagnetic scattering simulation of Kelvin wake in rough sea surface[J],CHINESE JOURNAL OF RADIO SCIENCE.,2016,(3):438-442. [11] Biglary, H., Dehmollaian, M. RCS of a target above a random rough surface with impedance boundaries using GO and PO methods[P]. Antennas and Propagation Society International Symposium (APSURSI), 2012 IEEE,2012. [12] Joon-Tae Hwang. Radar Cross Section Analysis Using Physical Optics and Its Applications to Marine Targets[A]. Scientific Research Publishing.Proceedings of 2015 Workshop 2[C].Scientific Research Publishing,2015:6. [13] YANG Peng-ju, WU Rui, ZHAO Ye, REN Xin-cheng. Doppler spectrum of low-flying small target above time-varying sea surface[J]. Journal of Terahertz Science and Electronic Information Technology,2018,16(04):614-618. [14] MEISSNER T. WENTZ F J. The complex dielectric constant of pure and sea water from microwave satellite observations[J]. IEEE Transactions on Geoscience and Remote Sensing,
  • 33. 2004,42(9):1836-1849 Improvements of the Analysis of Human Activity Using Acceleration Record of Electrocardiographs Itaru Kaneko1 , Yutaka Yoshida2 and Emi Yuda3 , 1&2 Nagoya City University, Japan and 3 Tohoku University, Japan ABSTRACT The use of the Holter electrocardiograph (Holter ECG) is rapidly spreading. It is a wearable electrocardiograph that records 24-hour electrocardiograms in a built-in flash memory, making it possible to detect atrial fibrillation (AF) during all-day activities. It is also useful for screening for diseases other than atrial fibrillation and for improving health. It is said that more useful information can be obtained by combining the electrocardiogram with an analysis of physical activity. For that purpose, the Holter electrocardiograph is equipped with a heart rate sensor and acceleration sensors. If the acceleration data are analysed, we can estimate activities in daily life, such as getting up, eating, walking, using transportation, and sitting. In combination with such activity status, electrocardiographic data can be expected to be more useful. In this study, we investigate the estimation of physical activity. To improve the analysis, we evaluated activity estimation using machine learning together with several different feature extraction methods. In this report, we show several feature extraction methods and the results of human activity analysis using machine learning. KEYWORDS Wearable, Biomedical Sensors, Body Activity, Machine Learning Full Text: https://aircconline.com/sipij/V10N5/10519sipij04.pdf Signal & Image Processing: An International Journal (SIPIJ) http://www.airccse.org/journal/sipij/vol10.html
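As one hypothetical example of the feature extraction step discussed above, the sketch below computes simple per-window statistics from a triaxial acceleration record; the sampling rate, window length and chosen statistics are assumptions for demonstration only, not the paper's method.

```python
# Sketch: windowed feature extraction from a triaxial acceleration record.
import numpy as np

def windowed_features(acc, fs=31.25, win_sec=60):
    """acc: (n, 3) acceleration samples; returns one feature row per window."""
    win = int(fs * win_sec)
    n_win = acc.shape[0] // win
    rows = []
    for i in range(n_win):
        seg = acc[i*win:(i+1)*win]
        rows.append(np.concatenate([
            seg.mean(axis=0),                     # gravity component ~ posture
            seg.std(axis=0),                      # movement intensity ~ activity
            [np.linalg.norm(seg, axis=1).max()],  # peak magnitude in the window
        ]))
    return np.vstack(rows)
```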
  • 34. REFERENCES [1] Yuda E, Hayano J, Menstrual Cycles of Autonomic Functions and Physical Activities, 2018 9th International Conference on Awareness Science and Technology (iCAST 2018), September 19-21, (2018) [2] Hayano J, Introduction to heart rate variability. In: Iwase S, Hayano J, Orimo S, eds Clinical assessment of the autonomic nervous system. Japan. [3] Yuda E, Furukawa Y, Yoshida Y, Hayano J, ALLSTAR Research Group, Association between Regional Difference in Heart Rate Variability and Inter-prefecture Ranking of Healthy Life Expectancy: ALLSTAR Big Data Project in Japan, Proceedings of the 7th EAI International Conference on Big Data Technologies and Applications (BDTA), Chung-ang University, Seoul, South Korea, November 17-18 (2016) [4] YOSHIHARA Hiroyuki, gEHR Project: Nation-wide EHR Implementation in JAPAN, Kyoto Smart city Expo, https://expo.smartcity.kyoto/2016/doc/ksce2016_doc_yoshihara.pdf (captured on 2016) [5] J. Jaybhay, R. Shastri, "A study of speckle noise reduction filters", Signal & Image Processing (SIPIJ), Vol. 6, 2015 [6] V. Radhika & G. Padmavathi, "Performance of various order statistics filters in impulse and mixed noise removal for RS images", SIPIJ, Vol. 1, No. 2, December 2010
  • 35. Robust Image Watermarking Method using Wavelet Transform Omar Adwan, The University of Jordan, Jordan ABSTRACT In this paper a robust watermarking method operating in the wavelet domain for grayscale digital images is developed. The method first computes the differences between the watermark and the HH1 sub-band values of the cover image and then embeds these differences in one of the frequency sub-bands. The results show that embedding the watermark in the LH1 sub-band gave the best results. The results were evaluated using the RMSE and the PSNR of both the original and the watermarked image. Although the watermark was recovered perfectly in the ideal case, adding Gaussian noise or compressing the image with JPEG at a quality below 100 destroys the embedded watermark. Different experiments were carried out to test the performance of the proposed method, and good results were obtained. KEYWORDS Watermarking, data hiding, wavelet transform, frequency domain Full Text: https://aircconline.com/sipij/V10N5/10519sipij03.pdf Signal & Image Processing: An International Journal (SIPIJ) http://www.airccse.org/journal/sipij/vol10.html
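A minimal sketch of the embedding idea, assuming PyWavelets: the watermark is differenced against the HH1 sub-band of the cover image and the differences are embedded in the LH1 sub-band. The wavelet choice, scaling factor and function names are illustrative assumptions, not the paper's implementation.

```python
# Sketch: wavelet-domain watermark embedding via sub-band differences.
import numpy as np
import pywt

def embed(cover, watermark, alpha=0.1):
    """cover: 2-D grayscale image; watermark: array matching the sub-band size."""
    LL, (LH, HL, HH) = pywt.dwt2(cover.astype(float), 'haar')
    diff = watermark - HH              # differences w.r.t. the HH1 sub-band
    LH_marked = LH + alpha * diff      # embed the differences in LH1
    return pywt.idwt2((LL, (LH_marked, HL, HH)), 'haar')

def extract(watermarked, cover, alpha=0.1):
    _, (LH_w, _, _) = pywt.dwt2(watermarked.astype(float), 'haar')
    LL, (LH, HL, HH) = pywt.dwt2(cover.astype(float), 'haar')
    return (LH_w - LH) / alpha + HH    # recover the watermark from the differences
```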
  • 36. REFERENCES [1] J. Dugelay and S. Roche, "A survey of current watermarking techniques", in S. Katzenbeisser and F. Petitcolas (eds), Information hiding techniques for steganography and digital watermarking, Artech House, USA, pp. 121-148, 2000. [2] I. Cox, M. Miller, J. Bloom, J. Fridrich and T. Kalker “Digital watermarking and steganography”, Morgan Kaufman, 2008. [3] R. Gonzalez, R. Woods, Digital Image Processing, 3rd ed., Prentice Hall, 2008. [4] M. Kutter and F. Hartung, "Introduction to Watermarking Techniques", in S. Katzenbeisser and F. Petitcolas (eds), Information hiding techniques for steganography and digital watermarking, Artech House, USA, pp. 97-120, 2000. [5] S. Lai and F. Buonaiuti, "Copyright on the internet and watermarking", in S. Katzenbeisser and F. Petitcolas (eds), Information hiding techniques for steganography and digital watermarking, Artech House, USA, pp. 191-213, 2000. [6] I. Cox, M.L. Miller, J.M.G. Linnartz, T. Kalker, “A Review of Watermarking Principles and Practices” in Digital Signal Processing for Multimedia Systems, K.K. Parhi, T. Nishitani, eds., New York, New York, Marcel Dekker, Inc., 1999, pp. 461-482. [7] U. Qidwai and C. Chen, Digital image processing: An algorithmic approach with Matlab, CRC Press, 2010. [8] I. Cox, M. Miller, J. Kilian, F. Leighton and T. Shamoon, "Secure spread spectrum watermarking for multimedia", IEEE Transactions on Image Processing, Vol. 6, No. 12, pp. 1673-1687, 1997. [9] N. Johnson and S. Katzenbeisser, “A survey of steganographic techniques,” in S. Katzenbeisser and F. Petitcolas (eds), Information hiding techniques for steganography and digital watermarking, Artech House, USA, pp. 43-78, 2000. [10] A.H.M. Jaffar Iqbal Barbhuiya, K. Hemachandran (2013), “Wavelet Transformations & Its Major Applications In Digital Image Processing”, International Journal of Engineering Research & Technology (IJERT) Vol. 2 Issue 3, March - 2013 ISSN: 2278-0181 [11] Khan, Asifullah; Mirza, Anwar M. (October 2007). "Genetic perceptual shaping: Utilizing cover image and conceivable attack information during watermark embedding". Information Fusion. 8 (4): 354-365. doi:10.1016/j.inffus.2005.09.007. [12] C. Shoemaker, Hidden Bits: "A Survey of Techniques for Digital Watermarking", http://www.vu.union.edu/~shoemakc/watermarking/, 2002. Last access: June, 2012. [13] M. Weeks, "Digital signal processing using Matlab and Wavelets, 2nd ed.", Jones and Bartlett publisher, 2011. [14] D. Kundur and D. Hatzinakos, "A robust digital watermarking method using wavelet-based fusion", in Proceeding of the International conference on image processing, Santa Barbara, pp. 544-
  • 37. 547, 1997. [15] X. Xia, C. Boncelet and G. Arce, "Wavelet transform based watermark for digital images", Optics Express, Vol. 3, No. 12, pp. 497-511, 1998. [16] O. Adwan, et al., "Simple Image Watermarking Method using Wavelet Transform", Journal of Basic and Applied Science, Vol. 8, No. 17, pp. 98-101, 2014. [17] B. Gunjal and S. Mali, "Secured color image watermarking technique in DWT-DCT domain", International journal of computer science, engineering and information technology, Vol. 1, No. 3, pp. 36-44, 2011. [18] P. Reddy, M. Prasad and D. Rao, "Robust digital watermarking of images using wavelets", International journal of computer and electrical engineering, Vol. 1, No. 2, pp. 111-116, 2011. [19] G. Langelaar, I. Setyawan, R.L. Lagendijk, “Watermarking Digital Image and Video Data”, in IEEE Signal Processing Magazine, Vol. 17, pp. 20-43, 2000. [20] Tanya Koohpayeh Araghi, Azizah B T Abdul Manaf (2017), “Evaluation of Digital Image Watermarking Techniques”, International Conference of Reliable Information and Communication Technology, IRICT 2017: Recent Trends in Information and Communication Technology pp 361-368. [21] A. S. Kapse, Sharayu Belokar, Yogita Gorde, Radha Rane, Shrutika Yewtkar (2018) “Digital Image Security Using Digital Watermarking”. International Research Journal of Engineering and Technology (IRJET), Volume: 05 Issue: 03 | Mar-2018.
  • 38. Test-cost-sensitive Convolutional Neural Networks with Expert Branches Mahdi Naghibi1 , Reza Anvari1 , Ali Forghani1 and Behrouz Minaei2 , 1 Malek-Ashtar University of Technology, Iran and 2 Iran University of Science and Technology, Iran ABSTRACT It has been proven that deeper convolutional neural networks (CNNs) can result in better accuracy on many problems, but this accuracy comes at a high computational cost. Moreover, not all input instances have the same difficulty. As a solution to the accuracy vs. computational cost dilemma, we introduce a new test-cost-sensitive method for convolutional neural networks. This method trains a CNN with a set of auxiliary outputs and expert branches in some middle layers of the network. Based on the difficulty of the input instance, the expert branches decide whether to use a shallower part of the network or to go deeper to the end. The expert branches learn to determine whether the current network prediction is wrong and whether passing the given instance to the deeper layers of the network would generate the right output; if not, the expert branches stop the computation process. Experimental results on the standard CIFAR-10 dataset show that the proposed method can train models with lower test-cost and competitive accuracy in comparison with the basic models. KEYWORDS Test-Cost-Sensitive Learning; Deep Learning; CNN with Expert Branches; Instance-Based Cost Full Text: https://aircconline.com/sipij/V10N5/10519sipij02.pdf Signal & Image Processing: An International Journal (SIPIJ) http://www.airccse.org/journal/sipij/vol10.html
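The following PyTorch sketch shows the general early-exit pattern the abstract describes: an auxiliary output at a middle layer plus an expert branch that decides, per instance, whether the shallow prediction suffices. The architecture, the sigmoid gate and all sizes are illustrative assumptions rather than the paper's model; note also that a real test-cost-sensitive deployment would evaluate the deep path only for the instances the gate routes onward.

```python
# Sketch: CNN with an auxiliary output and a per-instance exit/continue gate.
import torch
import torch.nn as nn

class EarlyExitCNN(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.shallow = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                     nn.MaxPool2d(2))
        self.aux_head = nn.Linear(16 * 16 * 16, n_classes)   # auxiliary output
        self.expert = nn.Linear(16 * 16 * 16, 1)             # exit/continue gate
        self.deep = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                                  nn.MaxPool2d(2), nn.Flatten(),
                                  nn.Linear(32 * 8 * 8, n_classes))

    def forward(self, x, threshold=0.5):
        h = self.shallow(x)                 # CIFAR-10: 3x32x32 -> 16x16x16
        flat = h.flatten(1)
        shallow_logits = self.aux_head(flat)
        go_deeper = torch.sigmoid(self.expert(flat)) > threshold
        deep_logits = self.deep(h)          # computed for all here, for clarity
        # per-instance choice: cheap shallow prediction or full-depth prediction
        return torch.where(go_deeper, deep_logits, shallow_logits)
```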
  • 39. REFERENCES [1] S. P. S. Gurjar, S. Gupta, and R. Srivastava, “Automatic Image Annotation Model Using LSTM Approach,” Signal Image Process. An Int. J., vol. 8, no. 4, pp. 25–37, Aug. 2017. [2] S. Maity, M. Abdel-Mottaleb, and S. S. As, “Multimodal Biometrics Recognition from Facial Video via Deep Learning,” in Computer Science & Information Technology (CS & IT), 2017, pp. 67– 75. [3] K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition,” arXiv Prepr. arXiv1512.03385, 2015. [4] D. Kadam, A. R. Madane, K. Kutty, and B. S.V, “Rain Streaks Elimination Using Image Processing Algorithms,” Signal Image Process. An Int. J., vol. 10, no. 03, pp. 21–32, Jun. 2019. [5] A. Massaro, V. Vitti, and A. Galiano, “Automatic Image Processing Engine Oriented on Quality Control of Electronic Boards,” Signal Image Process. An Int. J., vol. 9, no. 2, pp. 01–14, Apr. 2018. [6] X. Li, Z. Liu, P. Luo, C. Change Loy, and X. Tang, “Not all pixels are equal: Difficulty-aware semantic segmentation via deep layer cascade,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 3193–3202. [7] M. Naghibi, R. Anvari, A. Forghani, and B. Minaei, “Cost-Sensitive Topical Data Acquisition from the Web,” Int. J. Data Min. Knowl. Manag. Process, vol. 09, no. 03, pp. 39–56, May 2019. [8] A. Polyak and L. Wolf, “Channel-Level Acceleration of Deep Face Representations,” Access, IEEE, vol. 3, pp. 2163–2175, 2015. [9] A. Lavin and S. Gray, “Fast Algorithms for Convolutional Neural Networks,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 4013–4021. [10] J. Ba and R. Caruana, “Do deep nets really need to be deep?,” in Advances in neural information processing systems, 2014, pp. 2654–2662. [11] A. Romero, N. Ballas, S. E. Kahou, A. Chassang, C. Gatta, and Y. Bengio, “Fitnets: Hints for thin deep nets,” arXiv Prepr. arXiv1412.6550, 2014. [12] X. Zhang, J. Zou, K. He, and J. Sun, “Accelerating very deep convolutional networks for classification and detection,” 2015. [13] E. L. Denton, W. Zaremba, J. Bruna, Y. LeCun, and R. Fergus, “Exploiting linear structure within convolutional networks for efficient evaluation,” in Advances in Neural Information Processing Systems, 2014, pp. 1269–1277. [14] M. Jaderberg, A. Vedaldi, and A. Zisserman, “Speeding up convolutional neural networks with low rank expansions,” arXiv Prepr. arXiv1405.3866, 2014. [15] N. Ström, “Sparse connection and pruning in large dynamic artificial neural networks.,” in EUROSPEECH, 1997. [16] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov, “Improving neural networks by preventing co-adaptation of feature detectors,” arXiv Prepr. arXiv1207.0580, 2012.
  • 40. [17] N. Vasilache, J. Johnson, M. Mathieu, S. Chintala, S. Piantino, and Y. LeCun, “Fast convolutional nets with fbfft: A GPU performance evaluation,” arXiv Prepr. arXiv1412.7580, 2014. [18] M. Mathieu, M. Henaff, and Y. LeCun, “Fast training of convolutional networks through FFTs,” arXiv Prepr. arXiv1312.5851, 2013. [19] V. N. Murthy, V. Singh, T. Chen, R. Manmatha, and D. Comaniciu, “Deep decision network for multi-class image classification,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 2240–2248. [20] V. Vanhoucke, A. Senior, and M. Z. Mao, “Improving the speed of neural networks on CPUs,” in Proc. Deep Learning and Unsupervised Feature Learning NIPS Workshop, 2011, vol. 1. [21] A. Toshev and C. Szegedy, “Deeppose: Human pose estimation via deep neural networks,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2014, pp. 1653–1660. [22] A. Krizhevsky, G. Hinton, and others, “Learning multiple layers of features from tiny images,” 2009. [23] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking the inception architecture for computer vision,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 2818–2826. [24] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mane, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viegas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng, “TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems,” Mar. 2016.
  • 41. Free-Reference Image Quality Assessment Framework Using Metrics Fusion and Dimensionality Reduction Besma Sadou1 , Atidel Lahoulou2 , Toufik Bouden1 , Anderson R. Avila3 , Tiago H. Falk3 and Zahid Akhtar4 , 1 Non Destructive Testing Laboratory, University of Jijel, Algeria, 2 LAOTI laboratory, University of Jijel, Algeria, 3 University of Québec, Canada and 4 University of Memphis, USA ABSTRACT This paper focuses on no-reference image quality assessment (NR-IQA) metrics. In the literature, a wide range of algorithms have been proposed to automatically estimate the perceived quality of visual data. However, most of them are not able to effectively quantify the various degradations and artifacts that an image may undergo. Thus, merging diverse metrics operating in different information domains is expected to yield better performance, which is the main theme of the proposed work. In particular, the metric proposed in this paper is based on three well-known NR-IQA objective metrics that depend on natural scene statistical attributes from three different domains to extract a vector of image features. Then, a Singular Value Decomposition (SVD) based dominant eigenvectors method is used to select the most relevant image quality attributes. The latter are used as input to a Relevance Vector Machine (RVM) to derive the overall quality index. Validation experiments are divided into two groups: in the first group, the learning process (training and test phases) is applied to a single image quality database, whereas in the second group the training and test phases are carried out on two distinct datasets. The obtained results demonstrate that the proposed metric performs very well in terms of correlation, monotonicity and accuracy in both scenarios. KEYWORDS Image quality assessment, metrics fusion, Singular Value Decomposition (SVD), dominant eigenvectors, dimensionality reduction, Relevance Vector Machine (RVM) Full Text: https://aircconline.com/sipij/V10N5/10519sipij01.pdf Signal & Image Processing: An International Journal (SIPIJ) http://www.airccse.org/journal/sipij/vol10.html
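A small numpy sketch of one plausible reading of the SVD-based selection step: features are ranked by their energy in the dominant right singular vectors of the centred feature matrix, and the top-k are kept. The ranking rule and the value of k are assumptions for illustration, not the paper's exact procedure.

```python
# Sketch: feature selection via SVD dominant eigenvectors.
import numpy as np

def select_features(X, n_components=2, k=5):
    """X: (n_images, n_features) matrix of quality attributes."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    # importance of each feature = energy in the dominant right singular vectors
    loadings = (Vt[:n_components] ** 2).sum(axis=0)
    keep = np.argsort(loadings)[::-1][:k]
    return np.sort(keep)                 # indices of the retained attributes

X = np.random.default_rng(1).normal(size=(100, 12))
print(select_features(X))
```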
  • 42. REFERENCES [1] D. Zhang, Y. Ding, N. Zheng, “Nature scene statistics approach based on ICA for no-reference image quality assessment”, Proceedings of International Workshop on Information and Electronics Engineering (IWIEE), 29 (2012), 3589-3593. [2] A. K. Moorthy, A. C. Bovik, A two-step framework for constructing blind image quality indices[J], IEEE Signal Process. Lett., 17 (2010), 513-516. [3] L. Zhang, L. Zhang, A.C. Bovik, A Feature-Enriched Completely Blind Image Quality Evaluator, IEEE Transactions on Image Processing, 24(8) (2015), 2579-2591. [4] M.A. Saad, A.C. Bovik, C. Charrier, A DCT statistics-based blind image quality index, Signal Process. Lett. 17 (2010) 583–586. [5] M. A. Saad, A. C. Bovik, C. Charrier, Blind image quality assessment: A natural scene statistics approach in the DCT domain, IEEE Trans. Image Process., 21 (2012), 3339-3352. [6] A. Mittal, A.K. Moorthy, A.C. Bovik, No-reference image quality assessment in the spatial domain, IEEE Trans. Image Process. 21 (2012), 4695-4708. [7] A. Mittal, R. Soundararajan, A. C. Bovik, Making a completely blind image quality analyzer, IEEE Signal Process. Lett., 20 (2013), 209-212. [8] N. Kruger, P. Janssen, S. Kalkan, M. Lappe, A. Leonardis, J. Piater, A. Rodriguez-Sanchez, L. Wiskott, “Deep hierarchies in the primate visual cortex: What can we learn for computer vision?”, IEEE Trans. Pattern Anal. Mach. Intell., 35 (2013), 1847–1871. [9] D. J. Felleman, D. C. Van Essen, “Distributed hierarchical processing in the primate cerebral cortex.” [10] B. Sadou, A. Lahoulou, T. Bouden, A New No-reference Color Image Quality Assessment Metric in Wavelet and Gradient Domains, 6th International Conference on Control Engineering and Information Technologies, Istanbul, Turkey, 25-27 October (2018), 954-959. [11] Q. Wu, H. Li, F. Meng, K. N. Ngan, S. Zhu, No reference image quality assessment metric via multidomain structural information and piecewise regression. J. Vis. Commun. Image R., 32(2015), 205–216. [12] X. Shang, X. Zhao, Y. Ding, Image quality assessment based on joint quality-aware representation construction in multiple domains, Journal of Engineering 4 (2018), 1-12. [13] B. Sadou, A. Lahoulou, T. Bouden, A.R. Avila, T.H. Falk, Z. Akhtar, "Blind Image Quality Assessment Using Singular Value Decomposition Based Dominant Eigenvectors for Feature Selection", 5th Int. Conf. on Signal and Image Processing (SIPRO’19), Toronto, Canada, pp. 233-242, 2019. [14] H. R. Sheikh, Z. Wang, L. Cormack, A. C. Bovik, LIVE Image Quality Assessment Database Release 2 (2005), http://live.ece.utexas.edu/research/quality [15] E. Larson, D. M. Chandler, Categorical image quality assessment (CSIQ) database. http://vision.okstate.edu/?loc=csiq
  • 43. [16] M. W. Mahoney, P. Drineas, “CUR matrix decompositions for improved data analysis,” in Proc. the National Academy of Sciences, February 2009. [17] M.E. Tipping. The relevance vector machines. In Advances in Neural Information Processing Systems 12, Solla SA, Leen TK, Muller K-R (eds). MIT Press: Cambridge, MA (2000), 652-658. [18] D. Basak, S. Pal, D.C. Patranabis, Support vector regression, Neural Information Processing – Letters and Reviews, 11 (2007). [19] B. SchÖlkopf, A.J. Smola, Learning with Kernels. MIT press, Cambridge, (2002). [20] Final VQEG report on the validation of objective quality metrics for video quality assessment: http://www.its.bldrdoc.gov/vqeg/projects/frtv_phaseI/ [21] H. R. Sheikh, M. F. Sabir, A. C. Bovik, A statistical evaluation of recent full reference image quality assessment algorithms, IEEE Trans. Image Process., 15 (2006), 3440–3451.
  • 44. Textons of Irregular Shape to Identify Patterns in the Human Parasite Eggs Roxana Flores-Quispe and Yuber Velazco-Paredes, Universidad Nacional de San Agustín de Arequipa, Perú ABSTRACT This paper proposes a method based on the Multitexton Histogram (MTH) descriptor to identify patterns in images of human parasite eggs of the following species: Ascaris, Uncinarias, Trichuris, Hymenolepis Nana, Dyphillobothrium-Pacificum, Taenia-Solium, Fasciola Hepática and Enterobius-Vermicularis. These patterns are represented by textons of irregular shapes in their microscopic images. The proposed method could be used for the diagnosis of parasitic diseases and can be especially helpful in remote places. The method has two stages. In the first, a feature extraction mechanism integrates the advantages of co-occurrence matrices and histograms to identify irregular morphological structures in the biological images through textons of irregular shape. In the second stage, a Support Vector Machine (SVM) is used to classify the different human parasite eggs (a schematic sketch follows the references below). The results were obtained using a dataset of 2053 human parasite egg images, achieving a classification success rate of 96.82%. In addition, this research shows that the proposed method also works with natural images. KEYWORDS Patterns, Human Parasite Eggs, Multitexton Histogram descriptor, Textons. Full Text: https://aircconline.com/sipij/V10N6/10619sipij03.pdf Signal & Image Processing: An International Journal (SIPIJ) http://www.airccse.org/journal/sipij/vol10.html REFERENCES [1] Avci, Derya & Varol, Asaf (2009) “An expert diagnosis system for classification of human
  • 45. parasite eggs based on multi-class SVM”, Expert Systems with Applications, Vol. 36, No.1, pp43-48. [2] Chuctaya, Juan & Mena-Chalco, Jesús & Humpire, Gabriel & Rodriguez, Alexander & Beltrán, Cesar & Patiño, Raquel. (2010) “Detección de huevos helmintos mediante plantillas dinámicas”, Conferencia Latinoamericana de Informática - CLEI. [3] Dogantekin, Esin & Yilmaz, Mustafa & Dogantekin, Akif & Avci, Engin & Sengur, Abdulkadir (2008). “A robust technique based on invariant moments - ANFIS for recognition of human parasite eggs in microscopic images”, Expert Syst. Appl., Vol. 35, No. 3, pp728-738. [4] Flores-Quispe, Roxana & Patiño Escarcina, Raquel Esperanza & Velazco-Paredes, Yuber & Beltran Castañon, Cesar A. (2014) “Classification of human parasite eggs based on enhanced multitexton histogram”, Proceeding of Communications and Computing (COLCOM) IEEE Colombian Conference on, pp1-6. [5] Flores-Quispe, Roxana & Velazco-Paredes, Yuber & Patiño Escarcina, Raquel Esperanza & Beltran Castañon, Cesar A. (2014) “Automatic identification of human parasite eggs based on multitexton histogram retrieving the relationships between textons”, In 33rd International Conference of the Chilean Computer Science Society (SCCC), pp102-106. [6] Kamarul H. Ghazali & Hadi, Raafat S. & Mohamed, Zeehaida, (2013) “Automated system for diagnosis intestinal parasites by computerized image analysis”, Modern Applied Science, Vol.7, No.5, pp98-114. [7] Gonzalez & Woods. (2008) “Digital Image Processing”. Prentice Hall, 3rd edition. [8] Julesz, B. (1981) “Textons, the elements of texture perception, and their interactions”. Nature, Vol.290, pp91-97. [9] Julesz, B. (1986) “Texton gradients: the texton theory revisited”. Biological Cybernetics, Vol.54, pp.245-251. [10] Liu, G.-H. & Zhang, L. & Hou, Y.-K. & Li, Z.-Y. & Yang, J.-Y. (2010) “Image retrieval based on multi-texton histogram”, Pattern Recognition, Vol.43 pp2380-2389. [11] Peixinho, A.Z. & Martins, S.B. & Vargas, J.E. & Facão, A.X. & Gomes, J.F. & Suzuki, C.T.N. (2016) “Diagnosis of human intestinal parasites by deep learning”. pp 07-112. [12] Sengür, Abdulkadir & Türkoglu, Ibrahim. (2004) “Parasite egg cell classification using invariant moments”. 4th International Symposium on Intelligent Manufacturing Systems, pp98-106. [13] Wang & Yunling (2017). “Introduction to Parasitic Disease”. Springer Netherlands. [14] Yang, Yoon Seok & Park, Duck Kun & Kim, Hee Chan & Choi, Min-Ho & Chai, Jong-Yil. (2001) “Automatic identification of human helminth eggs on microscopic fecal specimens using digital image processing and an artificial neural network”, IEEE Trans. Biomed. Engineering, Vol.48, No.6, pp718-730.
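Following up on the abstract above, this sketch illustrates the second-stage classification with a multi-class SVM on histogram features. The co-occurrence-style histogram here is only a stand-in for the paper's MTH descriptor, and the data are synthetic.

```python
# Sketch: histogram features + multi-class SVM for parasite-egg classification.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def toy_texton_histogram(img, n_bins=64):
    """Placeholder descriptor: joint histogram of each pixel value and its
    right-hand neighbour's value, flattened (a co-occurrence-like feature)."""
    a, b = img[:, :-1].ravel(), img[:, 1:].ravel()
    h, _, _ = np.histogram2d(a, b, bins=int(np.sqrt(n_bins)), range=[[0, 255]] * 2)
    return (h / h.sum()).ravel()

rng = np.random.default_rng(0)
imgs = rng.integers(0, 256, size=(300, 32, 32))       # stand-in egg images
X = np.stack([toy_texton_histogram(im) for im in imgs])
y = rng.integers(0, 8, size=300)                      # 8 parasite species
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel='rbf').fit(X_tr, y_tr)
print(clf.score(X_te, y_te))                          # held-out accuracy
```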
  • 46. Deep Learning Based Target Tracking and Classification Directly in Compressive Measurement for Low Quality Videos Chiman Kwan1 , Bryan Chou1 , Jonathan Yang2 and Trac Tran3 , 1 Applied Research LLC, USA, 2 Google, Inc., USA and 3 Johns Hopkins University, USA ABSTRACT Past research has found that compressive measurements save data storage and bandwidth usage. However, it is also observed that compressive measurements are difficult to use directly for target tracking and classification without pixel reconstruction. This is because the Gaussian random matrix destroys the target location information in the original video frames. This paper summarizes our research effort on target tracking and classification directly in the compressive measurement domain. We focus on one type of compressive measurement using pixel subsampling. That is, the compressive measurements are obtained by randomly subsampling the original pixels in video frames. Even in this special setting, conventional trackers still do not work well. We propose a deep learning approach that integrates YOLO (You Only Look Once) and ResNet (residual network) for target tracking and classification in low quality videos. YOLO is for multiple target detection and ResNet is for target classification. Extensive experiments using optical and mid-wave infrared (MWIR) videos in the SENSIAC database demonstrated the efficacy of the proposed approach. KEYWORDS Compressive measurements, target tracking, target classification, deep learning, YOLO, ResNet, optical videos, infrared videos, SENSIAC database Full Text: https://aircconline.com/sipij/V10N6/10619sipij02.pdf Signal & Image Processing: An International Journal (SIPIJ) http://www.airccse.org/journal/sipij/vol10.html
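A minimal sketch of the pixel-subsampling measurement the abstract describes: a fixed random mask retains a fraction of the pixels in each frame, so target location information survives, unlike with a dense Gaussian measurement matrix. The 25% rate and all names are illustrative choices, not the authors' settings.

```python
# Sketch: compressive measurement by random pixel subsampling of video frames.
import numpy as np

def subsample_frame(frame, keep_ratio=0.25, seed=0):
    rng = np.random.default_rng(seed)     # fixed seed -> same mask across frames
    mask = rng.random(frame.shape) < keep_ratio
    return frame * mask, mask             # measured frame + sampling mask

frame = np.random.default_rng(1).integers(0, 256, size=(240, 320))
measured, mask = subsample_frame(frame)
# A detector/classifier (e.g. YOLO + ResNet, as in the paper) would then be
# trained and run on `measured` directly, with no pixel reconstruction step.
```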
  • 47. REFERENCES [1] Li, X., Kwan, C., Mei, G. and Li, B., (2006) “A Generic Approach to Object Matching and Tracking,” Proc. Third International Conference Image Analysis and Recognition, Lecture Notes in Computer Science, pp 839-849, [2] Zhou, J. and Kwan, C., (2018) “Tracking of Multiple Pixel Targets Using Multiple Cameras,” 15th International Symposium on Neural Networks. [3] Zhou, J. and Kwan, C., (2018) “Anomaly Detection in Low Quality Traffic Monitoring Videos Using Optical Flow,” Proc. SPIE 10649, Pattern Recognition and Tracking XXIX, 106490F. [4] Kwan, C., Zhou, J., Wang, Z. and Li, B., (2018) “Efficient Anomaly Detection Algorithms for Summarizing Low Quality Videos,” Proc. SPIE 10649, Pattern Recognition and Tracking XXIX, 1064906. [5] Kwan, C., Yin, J. and Zhou, J., (2018) “The Development of a Video Browsing and Video Summary Review Tool,” Proc. SPIE 10649, Pattern Recognition and Tracking XXIX, 1064907. [6] Zhao, Z., Chen, H., Chen, G., Kwan, C. and Li, X. R., (2006) “IMM-LMMSE Filtering Algorithm for Ballistic Target Tracking with Unknown Ballistic Coefficient,” Proc. SPIE, Volume 6236, Signal and Data Processing of Small Targets. [7] Zhao, Z., Chen, H., Chen, G., Kwan, C. and Li, X. R., (2006) “Comparison of several ballistic target tracking filters,” Proc. American Control Conference, pp 2197-2202. [8] Candes, E. J. and Wakin, M. B., (2008) “An Introduction to Compressive Sampling,” IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 21-30. [9] Kwan, C., Chou, B. and Kwan, L. M., (2018) “A Comparative Study of Conventional and Deep Learning Target Tracking Algorithms for Low Quality Videos,” 15th International Symposium on Neural Networks. [10] Kwan, C., Chou, B., Yang, J. and Tran, T., (2019) “Compressive object tracking and classification using deep learning for infrared videos,” Pattern Recognition and Tracking XXX (Conference SI120). [11] Kwan, C., Chou, B., Yang, J., Rangamani, A., Tran, T., Zhang, J. and Etienne-Cummings, R., (2019) “Target Tracking and Classification Directly Using Compressive Sensing Camera for SWIR videos,” Journal of Signal, Image, and Video Processing. [12] Kwan, C., Chou, B., Echavarren, A., Budavari, B., Li, J. and Tran, T., (2018) “Compressive vehicle tracking using deep learning,” IEEE Ubiquitous Computing, Electronics & Mobile Communication Conference. [13] Tropp, J. A., (2004) “Greed is good: Algorithmic results for sparse approximation,” IEEE Transactions on Information Theory, vol. 50, no. 10, pp 2231–2242. [14] Yang, J. and Zhang, Y., (2011) “Alternating direction algorithms for l1-problems in compressive sensing,” SIAM journal on scientific computing, 33, pp 250–278.
  • 48. [15] Dao, M., Kwan, C., Koperski, K. and Marchisio, G., (2017) “A Joint Sparsity Approach to Tunnel Activity Monitoring Using High Resolution Satellite Images,” IEEE Ubiquitous Computing, Electronics & Mobile Communication Conference, pp 322-328, [16] Zhou, J., Ayhan, B., Kwan, C. and Tran, T., (2018) “ATR Performance Improvement Using Images with Corrupted or Missing Pixels,” Proc. SPIE 10649, Pattern Recognition and Tracking XXIX, 106490E. [17] Yang, M. H., Zhang, K. and Zhang, L., (2012) “Real-Time Compressive Tracking,” European Conference on Computer Vision. [18] Applied Research LLC, Phase 1 Final Report, 2017. [19] Kwan, C., Gribben, D. and Tran, T. (2019) “Multiple Human Objects Tracking and Classification Directly in Compressive Measurement Domain for Long Range Infrared Videos,” IEEE Ubiquitous Computing, Electronics & Mobile Communication Conference, New York City. [20] Kwan, C., Chou, B., Yang, J., and Tran, T. (2019) “Deep Learning based Target Tracking and Classification for Infrared Videos Using Compressive Measurements,” Journal Signal and Information Processing. [21] Kwan, C., Gribben, D. and Tran, T. (2019) “Tracking and Classification of Multiple Human Objects Directly in Compressive Measurement Domain for Low Quality Optical Videos,” IEEE Ubiquitous Computing, Electronics & Mobile Communication Conference, New York City. [22] Redmon, J. and Farhadi, A., (2018) “YOLOv3: An Incremental Improvement,” arXiv, April. [23] Ren, S., He, K., Girshick, R. and Sun, J., (2015) “Faster R-CNN: Towards real-time object detection with region proposal networks,” Advances in neural information processing systems. [24] He, K., Zhang, X., Ren, S. and Sun, J., (2016) “Deep Residual Learning for Image Recognition,” Conference on Computer Vision and Pattern Recognition. [25] Kwan, C., Chou, B., Yang, J., and Tran, T., (2019) “Target Tracking and Classification Directly in Compressive Measurement Domain for Low Quality Videos,” Pattern Recognition and Tracking XXX (Conference SI120). [26] Stauffer, C. and Grimson, W. E. L., (1999) “Adaptive Background Mixture Models for Real-Time Tracking,” Conference on Computer Vision and Pattern Recognition, Vol. 2, pp. 246-252. [27] Bertinetto, L., Valmadre, J., Golodetz, S., Miksik, O. and Torr, P., (2016) “Staple: Complementary Learners for Real-Time Tracking,” Conference on Computer Vision and Pattern Recognition. [28] Kulkarni, K. and Turaga, P. K. (2016) “Reconstruction-Free Action Inference from Compressive Imagers,” IEEE Trans. Pattern Anal. Mach. Intell. 38(4), pp 772-784. [29] Lohit, S., Kulkarni, K. and Turaga, P. K. (2016) “Direct inference on compressive measurements using convolutional neural networks,” Int. Conference on Image Processing, pp 1913-1917. [30] Adler, A., Elad, M. and Zibulevsky, M. (2016) “Compressed Learning: A Deep Neural Network Approach,” arXiv:1610.09615v1 [cs.CV].
  • 49. [31] Xu, Y. and Kelly, K. F. (2019) “Compressed domain image classification using a multi-rate neural network,” arXiv:1901.09983 [cs.CV]. [32] Kulkarni, K. and Turaga, P. K. (2016) “Fast Integral Image Estimation at 1% measurement rate,” arXiv:1601.07258v1 [cs.CV]. [33] Wang, Z. W., Vineet, V., Pittaluga, F., Sinha, S. N., Cossairt, O. and Kang, S. B. (2019) “Privacy-Preserving Action Recognition Using Coded Aperture Videos,” IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops. [34] Vargas, H., Fonseca, Y. and Arguello, H. (2018) “Object Detection on Compressive Measurements using Correlation Filters and Sparse Representation,” 26th European Signal Processing Conference (EUSIPCO), pp 1960-1964. [35] Değerli, A., Aslan, S., Yamac, M., Sankur, B. and Gabbouj, M. (2018) “Compressively Sensed Image Recognition,” 7th European Workshop on Visual Information Processing (EUVIP), Tampere, pp. 1-6. [36] Latorre-Carmona, P., Traver, V. J., Sánchez, J. S. and Tajahuerce, E. (2019) “Online reconstruction-free single-pixel image classification,” Image and Vision Computing, Vol. 86. [37] Kwan, C., Chou, B., Yang, J., Rangamani, A., Tran, T., Zhang, J. and Etienne-Cummings, R., (2019) “Target Tracking and Classification Using Compressive Measurements of MWIR and LWIR Coded Aperture Cameras,” Journal Signal and Information Processing, vol. 10, no. 3. [38] Kwan, C., Chou, B., Yang, J., Rangamani, A., Tran, T., Zhang, J. and Etienne-Cummings, R., (2019) “Deep Learning based Target Tracking and Classification for Low Quality Videos Using Coded Aperture Camera,” Sensors, vol. 19, no. 17, 3702
  • 50. Efficient Method to find Nearest Neighbours in Flocking Behaviours Omar Adwan, The University of Jordan, Jordan ABSTRACT Flocking is a behaviour in which objects move or work together as a group. This behaviour is very common in nature; think of a flock of flying geese or a school of fish in the sea. Flocking behaviours have been simulated in areas such as computer animation, graphics and games. However, simulating the flocking behaviour of a large number of objects in real time is a computationally intensive task. This intensity is due to the n-squared complexity of the nearest neighbour (NN) algorithm used to separate objects, where n is the number of objects. This paper proposes an efficient NN method based on the partial distance approach to enhance the performance of the flocking algorithm. The proposed method was implemented, and the experimental results showed that it outperformed conventional NN methods when applied to flocking fish. KEYWORDS Flocking behaviours, nearest neighbours, partial distance approach, computer graphics and games Full Text: https://aircconline.com/sipij/V10N6/10619sipij01.pdf Signal & Image Processing: An International Journal (SIPIJ) http://www.airccse.org/journal/sipij/vol10.html
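A sketch of the partial distance idea: while accumulating squared coordinate differences for a candidate neighbour, the loop bails out as soon as the running sum exceeds the best distance found so far, skipping the remaining coordinates. This pure-Python version is for clarity only; the paper's implementation details are not reproduced here.

```python
# Sketch: nearest-neighbour search with the partial distance early exit.
def nearest_neighbour_partial(query, points):
    best_idx, best_d2 = -1, float('inf')
    for i, p in enumerate(points):
        d2 = 0.0
        for q, c in zip(query, p):
            d2 += (q - c) ** 2
            if d2 >= best_d2:        # partial distance already too large
                break                # skip the remaining coordinates
        else:                        # full distance computed and it is smaller
            best_idx, best_d2 = i, d2
    return best_idx, best_d2

points = [(1.0, 2.0), (3.0, 4.0), (0.5, 1.5)]
print(nearest_neighbour_partial((0.0, 1.0), points))   # -> (2, 0.5)
```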