Top SIP Research Articles of 2019
International Journal of VLSI design &
Communication Systems (VLSICS)
ISSN : 0976 - 1357 (Online); 0976 - 1527(print)
http://airccse.org/journal/vlsi/vlsics.html
COLOR CONVERTING OF ENDOSCOPIC IMAGES USING
DECOMPOSITION THEORY AND PRINCIPAL COMPONENT
ANALYSIS
Keivan Ansari1,2, Alexandre Krebs1, Yannick Benezeth1 and Franck Marzani1
1 ImViA - Imaging and Artificial Vision, Université de Bourgogne, Dijon, France
2 Dept. of Color Imaging and Color Image Processing, Institute for Color Science and Technology, Tehran, Iran
ABSTRACT
Endoscopic color imaging technology has greatly assisted clinicians in making better decisions since its
initial introduction. In this study, a novel combined method is employed, comprising the quadratic
objective functions for the dichromatic model by Krebs et al., Wyszecki's spectral decomposition theory,
and the well-known principal component analysis technique. The new algorithm converts the color space
of a conventional endoscopic color image, as the target image, to that of a Narrow Band Image (NBI), as
the source image. The target and source images are captured under known illuminant/sensor/filter
combinations, and the matrix Q of the decomposition theory is computed for each combination. The
intrinsic images extracted with the Krebs technique are multiplied by the matrix Q to obtain their
corresponding fundamental stimuli. Subsequently, principal component analysis is applied to the obtained
fundamental stimuli to compute the eigenvectors of the target and the source. Finally, the first three
eigenvectors of each matrix are combined into the converting mapping matrix. The results clearly show
that the color gamut of the converted target image gets closer to that of the NBI image.
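The pipeline described above (intrinsic images, multiplication by the matrix Q, PCA, and a 3x3 converting matrix built from the first three eigenvectors of target and source) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names are hypothetical, Q is taken as a given matrix (in practice it comes from the decomposition theory for the known illuminant/sensor/filter combination), and the intrinsic-image extraction of Krebs et al. is assumed to have already produced per-pixel stimuli.

```python
import numpy as np

def pca_basis(pixels, k=3):
    """Return the first k principal eigenvectors (as columns) of a
    (n_pixels, n_channels) stimulus matrix."""
    centered = pixels - pixels.mean(axis=0)
    cov = centered.T @ centered / (len(pixels) - 1)
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1]       # sort by descending eigenvalue
    return vecs[:, order[:k]]            # shape (n_channels, k)

def convert_colors(target_pixels, source_pixels, Q=None):
    """Map target pixels toward the source gamut via paired PCA bases."""
    if Q is not None:                    # optional decomposition-theory matrix
        target_pixels = target_pixels @ Q.T
        source_pixels = source_pixels @ Q.T
    Vt = pca_basis(target_pixels)        # target eigenvectors
    Vs = pca_basis(source_pixels)        # source eigenvectors
    M = Vs @ Vt.T                        # 3x3 converting mapping matrix
    # Apply M to every pixel: project onto the target basis, rebuild in the source basis
    return target_pixels @ M.T
```

Eigenvector signs are ambiguous in any PCA, so a practical implementation would also fix the orientation of each basis vector before building M.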
KEYWORDS
Color Converting, Endoscopic Imaging, Dichromatic Model, Principal Component Analysis,
Decomposition Theory.
Full Text : https://aircconline.com/csit/papers/vol9/csit91812.pdf
9th International Conference on Computer Science, Engineering and Applications (ICCSEA 2019) - http://airccse.org/csit/V9N18.html
REFERENCES
[1] S. Tanaka, S. Oka, M. Hirata, S. Yoshida, I. Kaneko and K. Chayama, (2006) “Pit pattern diagnosis for
colorectal neoplasia using narrow band imaging magnification,” Digestive Endoscopy 18(Suppl. 1), pp.
S52-S56.
[2] P. Lukes et al. (2013) “Narrow Band Imaging (NBI)”, Endoscopy, IntechOpen , Edited by S.
Amornyotin, Chapter 5.
[3] R. Saito and H. Kotera, (2005)“Gamut mapping adapted to image contents,” Proc. Congress of the
International Colour Association (AIC 05), Granada, Spain, pp. 661–664.
[4] X. Xiao, L. Ma, (2006) “Color transfer in correlated color space,” In VRCIA '06: Proc. of the 2006
ACM international conference on virtual reality continuum and its applications, pp. 305- 309.
[5] E. Reinhard, M. Ashikhmin, B. Gooch, and P. Shirley, (2001) “Color transfer between images,” IEEE
Comput.Graphics Appl., pp. 34-41.
[6] H. Kotera, Y. Matsusaki, T. Horiuchi and R. Saito, (2005) “Automatic color interchange between images,” Proc.
Congress of the International Color Association (AIC 05), Granada, Spain, pp. 1019-1022.
[7] R. Saito, T. Horiuchi, and H. Kotera, (2006) “Scene color interchange using histogram rescaling,”
Proc. IS&T's International Conference on Digital Printing Technologies (NIP22), Denver, Colorado, pp.
378-381.
[8] R. Saito, H. Okuda, T. Horiuchi and S. Tominaga, (2007) “Scene-to-scene color transfer model based
on histogram rescaling,” Proc. Midterm Meeting of the International Color Association (AIC 07),
Hangzhou, China, pp. 122-125.
[9] S. Gorji Kandi, K. Ansari, (2011) “Transforming color space between images using Rosenfeld-Kak
histogram matching technique,” 4th International Color and Coatings Congress (ICCC 2011), Tehran,
Iran.
[10] E. Reinhard, M. Ashikhmin, B. Gooch, and P. Shirley, (2001)“Color transfer between images,” IEEE
Comput. Graphics, pp.34-41.
[11] Y. Chang, S.Saito, and M. Nakajima, (2007)”Example-based color transformation of image and
video using basic color categories,” IEEE Trans. Image Process, vol. 16,no. 2, pp. 329–336.
[12] S. Paris, S. W. Hasinoff, and J. Kautz, (2011)“Local Laplacian filters: Edge-aware image processing
with a Laplacian pyramid,” ACM Trans. Graph., vol. 30, no. 4, pp. 1–12.
[13] A. Abadpour, S. Kasaei, (2007)“An efficient PCA-based color transfer method,” J. Visual
Communication and Image Representation.
[14] A. Dhanve, G. Chhajed, (2014) “Review on color transfer between images,” International Journal of
Engineering Research and General Science, Vol. 2, Issue 6, Oct.-Nov.
[15] A. Krebs, Y. Benezeth, F. Marzani, (2017)“Quadratic objective functions for dichromatic model
parameters estimation,” in: IEEE International Conference on Digital Image Computing: Techniques and
Applications (DICTA).
[16] S. A. Shafer, (1985)“Using color to separate reflection components,” Color Research & Application
10 (4), pp.210-218.
[17] J. B. Cohen and W. E. Kappauf, (1982)“Metameric color stimuli, fundamental metamers, and
Wyszecki’s metameric blacks,” Am. J. Psychol. 95, pp. 537–564.
[18] F. Viénot, H. Brettel, (2014)“Visual properties of metameric blacks beyond cone vision,” Journal of
the Optical Society of America, Vol. 31,Issue 4, pp. A38-A46.
[19] Y. Mohamed, Y. Abdallah and T.Alqahtani,(2019) “Research in Medical Imaging Using Image
Processing Techniques,”Medical Imaging - Principles and Applications, IntechOpen, Edited by Y. Zhou,
Chapter 5.
[20] B.Selvapriya and B. Raghu,(2018) “A Color Map for Pseudo Color Processing of Medical Images,”
International Journal of Engineering & Technology, 7 (3.34) 954-958.
[21] K. Ansari, S. Moradian, and S.H. Amirshahi,(2005)” Ideal Compression of Reflectance Curves by
the use of Fundamental Color Stimuli”,10th Congress of the International Colour Association, AIC
Colour 2005 , Granada, Spain.
AUTHORS
Keivan Ansari received his Ph.D. in color engineering from Amirkabir University
(Polytechnic of Tehran) in 2005. He is an assistant professor in the Color Imaging & Color
Image Processing research group at the Institute for Color Science and Technology,
Tehran, Iran. He is currently pursuing postdoctoral research at the ImViA laboratory at
the University of Burgundy, Dijon, France. His work has focused on the
development of color physics and its application in image processing.
Alexandre Krebs received the B.S., M.S., and Ph.D. degrees in computer science and image
instrumentation from the University of Burgundy (France) in 2019. He is currently a
temporary lecturer and research assistant at the ESIREM engineering school in Dijon,
France. His research includes digestive endoscopy, Narrow Band Imaging,
multispectral imaging, stomach lesions, machine learning, transfer learning, inverse
problems, and optimization.
Yannick Benezeth is an associate professor at the Université Bourgogne Franche-Comté
(France). He obtained his Ph.D. in computer science from the University of Orléans
in 2009. He also received an engineering degree from the ENSI de Bourges and an
M.S. degree from the University of Versailles-Saint-Quentin-en-Yvelines in 2006. His
research interests include biomedical engineering, image processing, and video
analytics. Application areas include video health monitoring and endoscopy.
Franck Marzani received his M.Sc. in computer science from the University of
Rennes, France, in 1989. He obtained his Ph.D. in computer vision and image
processing from the University of Burgundy, Dijon, France, in 1998. He received
his “Habilitation à Diriger les Recherches” in 2007 and has been a full professor since
2009. He is currently the head of the ImViA research laboratory (Imaging &
Computer Vision) at the University of Burgundy. His research interests include the
acquisition and analysis of images.
UNDERSTANDING HOW COLOUR CONTRAST IN HOTEL & TRAVEL
WEBSITE AFFECTS EMOTIONAL PERCEPTION, TRUST, AND
PURCHASE INTENTION OF VISITORS
Pimmanee Rattanawicha and Sutthipong Yungratog
Chulalongkorn Business School, Chulalongkorn University, Bangkok, Thailand
ABSTRACT
To understand how colour contrast in e-Commerce websites, such as hotel & travel websites, affects (1)
emotional perception (i.e. pleasure, arousal, and dominance), (2) trust, and (3) purchase intention of
visitors, a two-phase empirical study is conducted. In the first phase, 120 volunteer participants are asked
to choose the most appropriate colour from a colour wheel for a hotel & travel website. The colour “Blue
Cyan”, the most frequently chosen colour in this phase, is then used as the foreground colour to develop
three hotel & travel websites with three different colour contrast patterns for the second phase of the
study. A questionnaire is also developed from previous studies to collect emotional perception, trust, and
purchase intention data from another group of 145 volunteer participants. Data analysis shows that, for
visitors as a whole, colour contrast has significant effects on purchase intention. For male visitors, colour
contrast significantly affects trust and purchase intention. Moreover, for Generation X and Generation Z
visitors, colour contrast affects emotional perception, trust, and purchase intention. However, no
significant effect of colour contrast is found for female or Generation Y visitors.
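The study's three "colour contrast patterns" are not specified numerically in the abstract, but a standard way to quantify foreground/background contrast on a website is the WCAG contrast ratio. The sketch below is illustrative background rather than part of the study's method; it follows the published WCAG 2.x formula (relative luminance from linearized sRGB, then a ratio between 1:1 and 21:1).

```python
def relative_luminance(rgb):
    """WCAG 2.x relative luminance from 8-bit sRGB values."""
    def lin(c):
        c = c / 255.0
        # sRGB linearization per the WCAG definition
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (lin(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio between two colours (lighter over darker)."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)
```

For example, black text on a white background yields the maximum ratio of 21:1, while identical foreground and background colours yield 1:1.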
KEYWORDS
Colour Contrast, e-Commerce, Website Design
Full Text : https://aircconline.com/csit/papers/vol9/csit91706.pdf
9th International Conference on Advances in Computing and Information Technology (ACITY 2019) – http://airccse.org/csit/V9N17.html
REFERENCES
[1] Anuratpanich, L. 2016. Generation Important thing to pay attention. Faculty of Pharmacy, Mahidol
University.
[2] Archavanitkul, K. 2011. Sexuality Transition in Thai Society. The Journal of Population and Social
Studies. 44.
[3] Bakker, I., van der Voordt, T., Vink, P., & de Boon, J. 2014. Pleasure, Arousal, Dominance:
Mehrabian and Russell revisited. Current Psychology, 33(3), 405-421.
[4] Beaird, J. 2007. The Principles of Beautiful Web Design (pp. 29).
[5] Bonnardel, N., Piolat, A., & Le Bigot, L. 2011. The impact of colour on Website appeal and users’
cognitive processes. Displays, 32(2), 69-80.
[6] Chaikate, S., Nittayapat, W., Morakotjinda, P., Peuchngen, P., & Kanthiwa, T. 2015. Science of color.
Journal of home economics SWU. 13(1), 6-8.
[7] Cyr, D., Head, M., & Larios, H. 2010. Colour appeal in website design within and across cultures: A
multi-method evaluation. International Journal of Human-Computer Studies, 68(1-2), 1-21.
[8] Das, G. 2014. Linkages of retailer personality, perceived quality and purchase intention with retailer
loyalty: A study of Indian non-food retailing. Journal of Retailing and Consumer Services, 21(3), 407-
414.
[9] Deng, L., & Poole, M. S. 2010. Affect in web interfaces: a study of the impacts of web page visual
complexity and order. MIS Quarterly, 34(4), 711-730.
[10] Electronic Transactions Development Agency (Public Organization). 2018. Thailand Internet User
Profile 2018, 1-150.
[11] Golalizadeh, F., & Sharifi, M. 2016. Exploring the effect of customers' perceptions of electronic
retailer ethics on revisit and purchase intention of retailer website. 10th International Conference on
eCommerce with focus on e-Tourist, 1-6.
[12] Gray, R. 2016. Quality of Life Among Employed Population by Generations. Institute for Population
and Social Research, Mahidol University. 461(2016), 1-128.
[13] Hall, R. H., & Hanna, P. 2004. The impact of web page text-background colour combinations on
readability, retention, aesthetics and behavioural intention. Behaviour & Information Technology, 23(3),
183-195.
[14] Hong, I. B., & Cha, H. S. 2013. The mediating role of consumer trust in an online merchant in
predicting purchase intention. International Journal of Information Management, 33(6), 927-939.
[15] Hurlbert, A., & Wolf, K. 2004. Color contrast: a contributory mechanism to color constancy.
Progress in Brain Research, 144, 147-160.
[16] Hurlbert, A. C., & Ling, Y. 2012. Colour Design Theories and applications. Woodhead Publishing
Limited, 129-157.
[17] Ingkavitan, J., & Rattanawicha, P. 2018. An Empirical Study of Choosing the Right Color
Combinations for e-Commerce Websites. The 2018 International Conference on e-Commerce,
Administration, e-Society, e-Education, and e-Technology (e-CASE & e-Tech 2018), Osaka, Japan, 16-
27.
[18] Lin, S.-W., Lo, L. Y.-S., & Huang, T. K. 2016. Visual Complexity and Figure-Background Color
Contrast of E-Commerce Websites: Effects on Consumers' Emotional Responses. 49th Hawaii
International Conference on System Sciences (HICSS), 3594-3603.
[19] Moisa, S., & Sălășan, C. 2017. Some Aspects Regarding Color Schemes in order to Create Visual
Attractive Websites. 4th International Multidisciplinary Scientific Conference on Social Sciences & Arts
(SGEM 2017), 61, 363-369.
[20] Nordeborn, G. 2013. The Effect of Color in Website Design Searching for Medical Information
Online. Master’s Thesis, Lund University.
[21] Pelet, J. E., & Papadopoulou, P. 2009. The effect of colors of e-commerce websites on consumer
mood, memorization and buying intention. Proceedings of the 4th Mediterranean Conference on
Information Systems, 1-16.
[22] Pelet, J. É., & Papadopoulou, P. 2011. The Effect of E-Commerce Websites’ Colors on Customer
Trust. International Journal of E-Business Research, 7(3), 1-18.
[23] Pengnate, S., & Sarathy, R. 2017. An experimental investigation of the influence of website
emotional design features on trust in unfamiliar online vendors. Computers in Human Behavior, 67, 49-
60.
[24] Porat, T., & Tractinsky, N. 2012. It’s a Pleasure Buying Here: The Effects of Web-Store Design on
Consumers’ Emotions and Attitudes. Human-Computer Interaction, 27, 235-276.
[25] Rareș, O. D. 2014. Exploring the mediating role of perceived quality between online flow and
customer’s online purchase intention on a restaurant e-commerce website. The Yearbook of the "Gh.
Zane" Institute of Economic Researches, 23(1), 35-44.
[26] Rattanawicha, P., & Esichaikul, V. 2005. What makes websites trustworthy? A two-phase empirical
study. International Journal of Electronic Business, 3(2), 110-134.
[27] Richardson, R. T., Drexler, T. L., & Delparte, D. M. 2014. Color and Contrast in E-Learning Design
A Review of the Literature and Recommendations for Instructional Designers and Web Developers.
MERLOT Journal of Online Learning and Teaching, 10(4), 657-670.
[28] Shapiro, A. G. 2008. Separating color from color contrast. Journal of Vision, 8(1), 1-18.
[29] Yungratog, S. & Rattanawicha, P. 2019. Effect of Color Contrast in e-Commerce Websites on
Emotional Perception, Trust, and Purchase Intention of Visitors: An Empirical Study Design. The 4th
International Conference on Innovative Education and Technology (ICIET 2019), 209-213.
[30] Zhou, X., & Lin, Y. 2015. The Study on the Influence Mechanism of Website Features on Consumer
Purchase Intention. 8th International Symposium on Computational Intelligence and Design, 104-107.
AUTHORS
Pimmanee Rattanawicha is an assistant professor at Chulalongkorn Business School, Chulalongkorn
University, Bangkok, Thailand. Her research interests include e-Commerce, HCI, and UX/UI design.
Sutthipong Yungratog got his Master’s degree in IT in Business from Chulalongkorn Business School,
Chulalongkorn University, Bangkok, Thailand. He is now planning for his PhD study in HCI.
NONNEGATIVE MATRIX FACTORIZATION UNDER ADVERSARIAL
NOISE
Peter Ballen
Department of Computer and Information Science, University of Pennsylvania, Philadelphia, USA
ABSTRACT
Nonnegative Matrix Factorization (NMF) is a popular tool to estimate the missing entries of a dataset
under the assumption that the true data has a low-dimensional factorization. One example of such a
matrix is found in movie recommendation settings, where NMF corresponds to predicting how a user
would rate a movie. Traditional NMF algorithms assume the input data is generated from the underlying
representation plus mean-zero independent Gaussian noise. However, this simplistic assumption does not
hold in real-world settings that contain more complex or adversarial noise. We provide a new NMF
algorithm that is more robust to these nonstandard noise patterns. Our algorithm outperforms
existing algorithms on movie rating datasets, where adversarial noise corresponds to a group of
adversarial users attempting to review-bomb a movie.
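As a rough illustration of the idea (not the paper's actual algorithm), the sketch below combines Lee-and-Seung-style multiplicative updates with per-iteration residual trimming: entries whose reconstruction error is unusually large (e.g. review-bombed ratings) are masked out of the next update, so they influence the factors less. All names and the trimming fraction are assumptions made for the sketch.

```python
import numpy as np

def robust_nmf(X, rank=5, iters=200, trim=0.05, eps=1e-9):
    """NMF with residual trimming: down-weight the worst-fitting entries
    each iteration, then apply weighted multiplicative updates."""
    rng = np.random.default_rng(0)
    m, n = X.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(iters):
        R = np.abs(X - W @ H)
        cutoff = np.quantile(R, 1 - trim)       # trim the largest residuals
        M = (R <= cutoff).astype(float)         # weight mask: 1 = trusted entry
        MX = M * X
        # Weighted multiplicative updates (Srebro & Jaakkola-style weighting)
        W *= (MX @ H.T) / ((M * (W @ H)) @ H.T + eps)
        H *= (W.T @ MX) / (W.T @ (M * (W @ H)) + eps)
    return W, H
```

Because the updates are multiplicative and start from positive factors, W and H stay nonnegative throughout, which is the defining constraint of NMF.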
KEYWORDS
Nonnegative Matrix Factorization, Matrix Completion, Recommendation, Adversarial Noise, Outlier
Detection, Linear Model
Full Text : https://aircconline.com/csit/papers/vol9/csit91601.pdf
5th International Conference on Data Mining and Applications (DMAP 2019) – http://airccse.org/csit/V9N16.html
REFERENCES
[1] Indyk, Piotr & Motwani, Rajeev (1998) “Approximate Nearest Neighbors: Towards Removing the
Curse of Dimensionality” Proceedings of the thirtieth annual ACM symposium on theory of computing,
pp604-613
[2] Oseledets, Ivan & Tyrtyshnikov, Eugene (2009), “Breaking the curse of dimensionality, or how to use
SVD in many dimensions”, SIAM Journal on Scientific Computing, pp3744-3759
[3] Dempster, Arthur & Laird, Nan & Rubin, Donald (1977) “Maximum likelihood from incomplete data
via the EM algorithm”, Journal of the Royal Statistical Society, pp1-22
[4] Srebro, Nathan and Jaakkola, Tommi (2003), “Weighted low-rank approximations”,Proceedings of
the 20th International Conference on Machine Learning, pp720-727
[5] Candes, Emmanuel & Recht, Benjamin (2009), “Exact matrix completion via convex optimization”,
Foundations of Computational mathematics, pp717
[6] Koren, Yehuda & Bell, Robert & Volinsky, Chris (2009) “Matrix factorization techniques for
recommender systems” Computer, pp30-37
[7] Zheng, Nan & Li, Qiudan & Liao, Shengcai & Zhang, Leiming (2010) “Which photo groups should I
choose? A comparative study of recommendation algorithms in Flickr”, Journal of Information Science,
pp733-750
[8] Burke, Robin & O’Mahony, Michael & Hurley, Neil (2015) “Robust Collaborative Filtering”
Recommender systems handbook, pp961-995
[9] O’Mahony, Michael & Hurley, Neil & Kushmerick, Nicolas & Silvestre, Guenole (2004),
“Collaborative recommendation: A robustness analysis”, ACM Transactions on Internet Technology,
pp344-377
[10] Sandvig, Jeff & Mobasher, Bamshad & Burke, Robin (2008), “A survey of collaborative
recommendation and the robustness of model-based algorithms”, IEEE Computer Society Technical
Committee on Data Engineering
[11] Mobasher, Bamshad & Burke, Robin & Sandvig, Jeff (2006), “Model-based collaborative filtering as
a defense against profile injection attacks”, AI Magazine pp1388
[12] Lee, Daniel & Seung, Sebastian, (2001) “Algorithms for nonnegative matrix factorization”,
Advances in Neural Information Processing Systems, pp556-562
[13] Sra, Suvrit & Dhillon, Inderjit (2006) “Generalized nonnegative matrix approximations with
Bregman divergences”, Advances in neural information processing systems, pp283-290
[14] Fevotte, Cedric & Idier, Jerome (2011), “Algorithms for nonnegative matrix factorization with the
beta-divergence”, Neural Computation, pp2421-2456
[15] Taslaman L & Nilsson B. (2012) “A framework for regularized non-negative matrix factorization,
with application to the analysis of gene expression data” PLoS One
[16] Mao, Yun & Saul, Lawrence (2009) “Modeling distances in large scale networks by matrix
factorization” ACM SIGCOMM conference in internet measurement, pp278-287
[17] Liu, Chao & Yang, Hung-chih & Fan, Jinliang & He, Li-Wei & Wang, Yi-Min (2010), “Distributed
nonnegative matrix factorization for web-scale dyadic data analysis on mapreduce”, Proceedings of the
19th international conference on World wide web, pp681-690
[18] Zhang, Sheng & Wang, Weihong & Ford, James & Makedon, Fillia (2006) “Learning from
incomplete ratings on nonnegative matrix factorization” SIAM conference on data mining, pp549-553
[19] Yang, Min & Xu, Linli & White, Martha & Schuurmans, Dale & Yu, Yao-liang (2010) “Relaxed
clipping: A global training method for robust regression and classification”, Advances in Neural
Information Processing Systems, pp2532-2540
[20] Honore, Bo E (1992), “Trimmed LAD and least squares estimation of truncated and censored
regression models with fixed effects”, Econometrica: Journal of the Econometric Society, pp533-565
[21] Garcia-Escudero, Luis Angel & Gordaliza, Alfonso (1999), “Robustness properties of k means and
trimmed k means”, Journal of the American Statistical Association, pp956-969
[22] Harper, F. Maxwell & Konstan, Joseph (2015) “The MovieLens datasets: history and context” ACM
Transactions on Interactive Intelligent Systems
AUTHORS
Peter Ballen is a PhD student at the University of Pennsylvania, where he studies matrix factorization
algorithms, their theoretical properties, and their applications in data mining.
PROPOSING A HYBRID APPROACH FOR EMOTION
CLASSIFICATION USING AUDIO AND VIDEO DATA
Reza Rafeh1, Rezvan Azimi Khojasteh2 and Naji Alobaidi3
1 Centre for Information Technology, Waikato Institute of Technology, Hamilton, New Zealand
2 Department of Computer Engineering, Malayer Branch, Islamic Azad University, Hamedan, Iran
3 Department of Computer Engineering, Unitec Institute of Technology, Auckland, New Zealand
ABSTRACT
Emotion recognition has been a research topic in the field of Human-Computer Interaction (HCI) in
recent years. Computers have become an inseparable part of human life, and users need human-like
interaction to communicate with them more effectively. Many researchers have become interested in
emotion recognition and classification using different sources, and a hybrid approach combining audio
and text has recently been introduced. All such approaches aim to raise the accuracy and appropriateness
of emotion classification. In this study, a hybrid approach combining audio and video is applied to
emotion recognition. The innovation of this approach lies in selecting the characteristics of audio and
video and their features as a unique specification for classification. The SVM method is used for
classifying the data in the SAVEE database. The experimental results show that the maximum
classification accuracy for audio data alone is 91.63%, while the hybrid approach achieves 99.26%.
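The abstract does not specify the fusion details, but a common way to realize such a hybrid is early (feature-level) fusion: concatenate per-clip audio and video feature vectors and train one multiclass SVM. The sketch below, using scikit-learn, is an illustrative assumption rather than the authors' exact setup; the feature extraction itself (e.g. acoustic and facial descriptors per clip) is presumed done elsewhere.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def hybrid_classifier(audio_feats, video_feats, labels):
    """Early fusion: concatenate per-clip audio and video feature vectors,
    then train a multiclass SVM (RBF kernel, one-vs-one by default)."""
    X = np.hstack([audio_feats, video_feats])   # (n_clips, d_audio + d_video)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10))
    clf.fit(X, labels)
    return clf
```

Late (decision-level) fusion, where separate audio and video classifiers vote, is the usual alternative; the reported accuracy jump from 91.63% to 99.26% is consistent with either scheme.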
KEYWORDS
Emotion Classification, Emotion Analysis, Emotion Detection, SVM, Speech Emotion Recognition
Full Text : https://aircconline.com/csit/papers/vol9/csit91403.pdf
5th International Conference on Computer Science and Information Technology (CSTY 2019) – http://airccse.org/csit/V9N14.html
REFERENCES
[1] Ververidis, Dimitrios & Kotropoulos, Constantine, “Emotional speech recognition: Resources, features,
and methods,” Speech Communication, vol. 48, no. 9, pp. 1162-1181, 2006.
[2] Bhaskar, Jasmine, Sruthi, K. and Nedungadi, Prema, “Hybrid Approach for Emotion Classification of
Audio Conversation Based on Text and Speech Mining,” Procedia Computer Science, vol. 46, pp. 635-
643, 2015.
[3] E. H. Jang, B. J. Park, S. H. Kim and J. H. Sohn, “Emotion classification based on physiological
signals induced by negative emotions: Discrimination of negative emotions by machine learning,” in
Networking, Sensing and Control (ICNSC), 2012 9th IEEE International Conference on, Beijing, 2012.
[4] C. Parlak. and B. Diri, “Emotion recognition from the human voice,” in Signal Processing and
Communications Applications Conference (SIU), 2013 21st, 2013.
[5] M. El Ayadi, M. S. Kamel and F. Karray, “Survey on speech emotion recognition: Features,
classification schemes, and databases,” Pattern Recognition, vol. 44, no. 3, pp. 572-587, 2011.
[6] Y. Pan, P. Shen and L. Shen, “Speech Emotion Recognition Using Support Vector Machine,”
International Journal of Smart Home, vol. 6, no. 2, pp. 101-108, 2012.
[7] L. Chen, X. Mao, Y. Xue and L. L. Cheng, “Speech emotion recognition: Features and classification
models,” Digital Signal Processing, vol. 22, no. 6, pp. 1154-1160, 2012.
[8] N. Rajitha, D. David, L. B, P. J., Sridharan, S. Fookes and C. B., “Recognising audio-visual speech in
vehicles using the AVICAR database,” in Proceedings of the 13th Australasian International Conference
on Speech Science and Technology Melbourne, Vic, 2010.
[9] M. S. Sinith, E. Aswathi, T. M. Deepa, C. P. Shameema and S. Rajan, “Emotion recognition from
audio signals using Support Vector Machine,” in IEEE Recent Advances in Intelligent Computational
Systems (RAICS) Trivandrum, 2015.
[10] G. Chandni, M. Vyas, K. Dutta, K. Riha and J. Prinosil, “An automatic emotion recognizer using
MFCCs and Hidden Markov Models,” in Ultra Modern Telecommunications and Control Systems and
Workshops (ICUMT), 2015 7th International Congress on Brno, 2015.
[11] “eNTERFACE'05 EMOTION Database,” [Online]. Available: http://www.enterface.net/enterface05/.
[12] C. Busso, M. Bulut, C. C. Lee, A. Kazemzadeh, E. Mower, S. Kim, J. Chang, S. Lee and S.
Narayanan, “IEMOCAP: interactive emotional dyadic motion capture database,” vol. 42, pp. 335-359,
2008.
[13] A. Metallinou, C. Busso, S. Lee and S. Narayanan, “Visual emotion recognition using compact facial
representations and viseme information,” in 2010 IEEE International Conference on Acoustics, Speech
and Signal Processing ,Dallas, TX, 2010.
[14] “SAVEE Database,” [Online]. Available: http://kahlan.eps.surrey.ac.uk/savee/Database.html.
[15] M. Sidorov, E. Sopov, I. Ivanov and W. Minker, “Feature and decision level audio-visual data fusion
in emotion recognition problem,” in Informatics in Control, Automation and Robotics (ICINCO), 2015
12th International Conference on Colmar, 2015.
[16] N. Yang, R. Muraleedharan, J. Kohl, I. Demirkol, W. Heinzelman and M. Sturge-Apple, “Speech-
based emotion classification using multiclass SVM with hybrid kernel and thresholding fusion,” in
Spoken Language Technology Workshop (SLT), 2012 IEEE Miami, FL, 2012.
[17] “Bridge Project,” 2013. [Online]. Available:
http://www.ece.rochester.edu/projects/wcng/project_bridge.html.
[18] E. Sopov and I. Ivanov, “Self-Configuring Ensemble of Neural Network Classifiers for Emotion
Recognition in the Intelligent Human-Machine Interaction,” in Computational Intelligence, 2015 IEEE
Symposium Series on, Cape Town, 2015.
[19] S. Agrawal and S. Dongaonkar, “Emotion recognition from speech using Gaussian Mixture Model
and vector quantization,” in Reliability, Infocom Technologies and Optimization (ICRITO) (Trends and
Future Directions), 2015 4th International Conference on Noida, 2015.
[20] M. R. Mehmood and H. J. Lee, “Emotion classification of EEG brain signal using SVM and KNN,”
in Multimedia & Expo Workshops (ICMEW), 2015 IEEE International Conference on Turin, Italy, 2015.
[21] N. R. Kanth and S. Saraswathi, “Efficient speech emotion recognition using binary support vector
machines & multiclass SVM,” in IEEE International Conference on Computational Intelligence and
Computing Research (ICCIC) Madurai, 2015.
[22] Y. Chavhan, M. L. Dhore and P. Yesaware, “Article: Speech Emotion Recognition Using Support
Vector Machine,” vol. 1, pp. 6-9, 2010.
[23] M. S. Sinith, E. Aswathi, T. M. Deepa, C. P. Shameema and S. Rajan, “Emotion recognition from
audio signals using Support Vector Machine,” in IEEE Recent Advances in Intelligent Computational
Systems (RAICS) Trivandrum, 2015.
[24] A. Metallinou, A. Katsamanis, W. M, F. Eyben, B. Schuller and S. Narayanan, “Context-sensitive
learning for enhanced audiovisual emotion classification (Extended abstract),” in Affective Computing
and Intelligent Interaction (ACII), 2015 International Conference on Xi'an, 2015.
AUTHORS
Reza Rafeh is a senior lecturer at Waikato Institute of Technology. He received his
PhD in computer science from Monash University. His research areas cover data
mining, big data and analytics, recommender systems, software engineering and
modelling, constraint programming, and health informatics.
Rezvan Azimi Khojasteh received her MSc in Software Engineering from Islamic
Azad University, Malayer Branch. Her research area includes emotion mining and
data analytics.
Naji Alobaidi received his MSc in Computer Science from Unitec Institute of
Technology. His research areas cover data analytics, emotion mining, and vehicular
ad-hoc networks.
A FACIAL RECOGNITION-BASED VIDEO ENCRYPTION APPROACH
TO PREVENT DEEPFAKE VIDEOS
Alex Liang1, Yu Su2 and Fangyan Zhang3
1 St. Margaret's Episcopal School, San Juan Capistrano, CA 92675
2 Department of Computer Science, California State Polytechnic University, Pomona, CA 91768
3 ASML, San Jose, CA 95131
ABSTRACT
Deepfake is a technique which forges video for a certain purpose. There is an urgent demand for an
approach that can detect whether a video has been deepfaked, and that can reduce a video's exposure to
slanderous deepfakes and content theft. This paper proposes a useful tool which can encrypt and verify a
video through proper corresponding algorithms and detect forgery accurately. Experiments in the paper
show that the tool has achieved our goal and can be put into practice.
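The abstract does not describe the encryption and verification algorithms in detail. One minimal way to make tampering detectable, sketched below purely as an illustration of the verification side (not the paper's method), is to chain-hash the video's frames and sign the digest with a keyed MAC; any altered, inserted, or reordered frame then fails verification.

```python
import hashlib
import hmac

def sign_frames(frames, key):
    """Chain-hash the raw frame bytes and sign the final digest with HMAC-SHA256,
    so any altered or reordered frame changes the resulting tag."""
    digest = hashlib.sha256()
    for frame in frames:                      # frame: raw bytes of one video frame
        digest.update(hashlib.sha256(frame).digest())
    return hmac.new(key, digest.digest(), hashlib.sha256).hexdigest()

def verify_frames(frames, key, tag):
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign_frames(frames, key), tag)
```

A real system would embed the tag (and a signature binding it to the recognized face) in the video container's metadata so players can verify authenticity on load.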
KEYWORDS
Video Encryption, Video Verification, Encryption Algorithm, Decryption algorithm
Full Text : https://aircconline.com/csit/papers/vol9/csit91317.pdf
6th International Conference on Computer Science, Engineering and Information Technology (CSEIT-2019) - http://airccse.org/csit/V9N13.html
REFERENCES
[1] D. Güera and E. J. Delp, "Deepfake Video Detection Using Recurrent Neural Networks," 2018 15th
IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), Auckland,
New Zealand, 2018, pp. 1-6.
[2] Li, Yuezun & Lyu, Siwei. Exposing DeepFake Videos By Detecting Face Warping Artifacts.
Computer Science Department, University at Albany, State University of New York, USA.
[3] Ruchansky, N., Seo, S., & Liu, Y. (2017, November). Csi: A hybrid deep model for fake news
detection. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management
(pp. 797-806). ACM.
[4] Polletta, F., & Callahan, J. (2019). Deep stories, nostalgia narratives, and fake news: Storytelling in
the Trump era. In Politics of meaning/meaning of politics (pp. 55-73). Palgrave Macmillan, Cham.
[5] Singhania, S., Fernandez, N., & Rao, S. (2017, November). 3han: A deep neural network for fake
news detection. In International Conference on Neural Information Processing (pp. 572-581). Springer,
Cham.
[6] Güera, D., & Delp, E. J. (2018, November). Deepfake video detection using recurrent neural
networks. In 2018 15th IEEE International Conference on Advanced Video and Signal Based
Surveillance (AVSS) (pp. 1-6). IEEE.
[7] Citron, D. K., & Chesney, R. (2018). Deep Fakes: A Looming Crisis for National Security,
Democracy and Privacy?. Lawfare.
[8] Bradski, G., & Kaehler, A. (2008). Learning OpenCV: Computer vision with the OpenCV library.
O'Reilly Media, Inc.
[9] Pulli, K., Baksheev, A., Kornyakov, K., & Eruhimov, V. (2012). Real-time computer vision with
OpenCV. Communications of the ACM, 55(6), 61-69.
[10] Li, Y., & Lyu, S. (2018). Exposing deepfake videos by detecting face warping artifacts. arXiv
preprint arXiv:1811.00656, 2.
[11] Cozzolino, D., Thies, J., Rössler, A., Riess, C., Nießner, M., & Verdoliva, L. (2018).
Forensictransfer: Weakly-supervised domain adaptation for forgery detection. arXiv preprint
arXiv:1812.02510.
[12] Dolhansky, B., Howes, R., Pflaum, B., Baram, N., & Ferrer, C. C. (2019). The Deepfake Detection
Challenge (DFDC) Preview Dataset. arXiv preprint arXiv:1910.08854.
AN IMAGE CLASSIFICATION-BASED APPROACH TO AUTOMATE
VIDEO PLAYING DETECTION AT SYSTEM LEVEL
Eric Liu1, Samuel Walcoff2, Qi Lu3 and Yu Sun4
1 Arcadia High School, Arcadia, CA 92697
2 Department of Computer Science, University of California, Santa Cruz, Santa Cruz, CA 95064
3 Department of Social Science, University of California, Irvine, Irvine, CA 92697
4 Department of Computer Science, California State Polytechnic University, Pomona, CA 91768
ABSTRACT
Tech distraction has become a critical issue affecting people's work and study productivity, particularly
with the growing amount of digital content from social media sites such as YouTube. Although browser-based
plug-ins are available to help block and monitor such sites, they do not work for all scenarios. In this
paper, we present a system-level video playing detection engine that captures screenshots and analyzes
each screenshot image using deep learning, in order to predict whether the image contains a video. A
mobile app has also been developed to enable parents to control the video playing detection remotely.
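At a high level, the engine described above can be sketched as a capture-then-classify loop. Everything in this sketch is illustrative, not the authors' implementation: the screenshot stub, the trivial feature extraction, the `dummy_model` callable, and the 0.5 decision threshold are all assumptions.

```python
import numpy as np

def capture_screenshot():
    """Stand-in for a system-level screen grab (a real engine would use OS APIs).
    Returns an HxWx3 float image with values in [0, 1]."""
    return np.random.default_rng(42).random((64, 64, 3))

def classify(image, model):
    """Run the (deep-learning) model on one screenshot and return the
    probability that a video is playing. Feature extraction here is a
    trivial stand-in: per-channel mean intensity."""
    features = image.mean(axis=(0, 1))
    return model(features)

def detect_video(model, threshold=0.5):
    """One iteration of the detection loop: capture, classify, decide."""
    prob = classify(capture_screenshot(), model)
    return prob >= threshold

# A dummy "model": any callable mapping a feature vector to a probability.
dummy_model = lambda f: float(f.mean())
playing = detect_video(dummy_model)
```

A real system would replace `capture_screenshot` with a platform screen-grab call and `dummy_model` with a trained image classifier, then push the decision to the companion mobile app.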
KEYWORDS
Machine learning, Tech distraction, Image classification
Full Text : https://aircconline.com/csit/papers/vol9/csit91215.pdf
8th International Conference on Natural Language Processing (NLP 2019) -
http://airccse.org/csit/V9N12.html
REFERENCES
[1] Leonard, Huw, and Gary Farmaner. "Method and system for administering a customer loyalty reward
program using a browser extension." U.S. Patent Application 09/908,615, filed April 18, 2002.
[2] Viennot, Nicolas, Edward Garcia, and Jason Nieh. "A measurement study of google play." In ACM
SIGMETRICS Performance Evaluation Review, vol. 42, no. 1, pp. 221-233. ACM, 2014.
[3] Liu, Charles Zhechao, Yoris A. Au, and Hoon Seok Choi. "Effects of freemium strategy in the mobile
app market: An empirical study of google play." Journal of Management Information Systems 31, no. 3
(2014): 326-354.
[4] Reddington, Thomas B. "Keyword search automatic limiting method." U.S. Patent 4,554,631, issued
November 19, 1985.
[5] Lerner, Benjamin S., Liam Elberty, Neal Poole, and Shriram Krishnamurthi. "Verifying web browser
extensions’ compliance with private-browsing mode.” In European Symposium on Research in Computer
Security, pp. 57-74. Springer, Berlin, Heidelberg, 2013.
[6] Young, Simon N. "The use of diet and dietary components in the study of factors controlling affect in
humans: a review." Journal of Psychiatry and Neuroscience 18, no.5 (1993): 235.
[7] Buxton, J., M. White, and D. Osoba. "Patients' experiences using a computerized program with a
touch-sensitive video monitor for the assessment of health-related quality of life." Quality of Life
Research 7, no. 6 (1998): 513-519.
[8] Craddock, Deborah, Cath O'Halloran, Kathryn Mcpherson, Sarah Hean, and Marilyn Hammick. "A
top-down approach impedes the use of theory? Interprofessional educational leaders' approaches to
curriculum development and the use of learning theory." Journal of Interprofessional Care 27, no. 1
(2013): 65-72.
[9] Chamaret, Aurélie, Martin O'Connor, and Gilles Récoché. "Top-down/bottom-up approach for
developing sustainable development indicators for mining: application to the Arlit uranium mines
(Niger)." (2007).
[10] Neches, Robert, Richard E. Fikes, Tim Finin, Thomas Gruber, Ramesh Patil, Ted Senator, and
William R. Swartout. "Enabling technology for knowledge sharing." AI magazine 12, no. 3 (1991): 36-
36.
[11] Polit, Stephen. "R1 and beyond: Ai technology transfer at digital equipment corporation." AI
Magazine 5, no. 4 (1984): 76-76.
[12] Lee, Dar-Shyang, Lee-Feng Chien, Aries Hsieh, Pin Ting, and Kin Wong. "On-screen
guidelinebased selective text recognition." U.S. Patent 8,515,185, issued August 20, 2013.
[13] Alcock, Shane, and Richard Nelson. "Application flow control in YouTube video streams." ACM
SIGCOMM Computer Communication Review 41, no. 2 (2011): 24-30.
[14] Sheiner, Lilach, Jessica L. Demerly, Nicole Poulsen, Wandy L. Beatty, Olivier Lucas, Michael S.
Behnke, Michael W. White, and Boris Striepen. "A systematic screen to discover and analyze apicoplast
proteins identifies a conserved and essential protein import factor." PLoS pathogens 7, no. 12 (2011):
e1002392.
AUTOMATIC EXTRACTION OF FEATURE LINES ON 3D SURFACE
Zhihong Mao, Ruichao Wang and Yulin Zhou
Division of Intelligent Manufacturing, Wuyi University, Jiangmen529020, China
ABSTRACT
Many applications in mesh processing require the detection of feature lines. Feature lines convey the
inherent features of the shape. Existing techniques for finding feature lines on discrete surfaces rely on
user-specified thresholds and are inaccurate and time-consuming. We use an automatic approximation
technique to estimate the optimal threshold for detecting feature lines. Several examples are presented to
show that our method is effective and leads to improved feature line visualization.
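The abstract does not spell out the automatic approximation technique. As one hedged illustration of automatic threshold estimation, Otsu's method applied to per-vertex curvature magnitudes picks the threshold that maximizes between-class variance; the curvature data below is synthetic and the bimodal split is an assumption.

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Estimate a threshold separating low- from high-curvature vertices
    by maximizing the between-class variance (Otsu's method)."""
    hist, edges = np.histogram(values, bins=bins)
    hist = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0

    best_t, best_var = centers[0], -1.0
    for i in range(1, bins):
        w0, w1 = hist[:i].sum(), hist[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (hist[:i] * centers[:i]).sum() / w0   # mean of low class
        mu1 = (hist[i:] * centers[i:]).sum() / w1   # mean of high class
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, centers[i]
    return best_t

# Synthetic curvatures: many near-flat vertices, a few sharp-feature vertices.
rng = np.random.default_rng(0)
curvatures = np.concatenate([rng.normal(0.05, 0.02, 900),
                             rng.normal(0.80, 0.10, 100)])
t = otsu_threshold(curvatures)
feature_vertices = curvatures > t   # boolean mask of candidate feature vertices
```

Vertices whose curvature exceeds the estimated threshold would then be chained into feature lines.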
KEY WORDS
Feature Lines; Extraction; Meshes
Full Text : https://aircconline.com/csit/papers/vol9/csit90901.pdf
9th International Conference on Computer Science, Engineering and Applications (CCSEA
2019) - http://airccse.org/csit/V9N09.html
REFERENCES
[1] Forrester Cole, Kevin Sanik, Doug Decarlo, Adam Finkelstein, Thomas Funkhouser, Szymon
Rusinkiewicz & Manish Singh, (2009) “How Well Do Line Drawings Depict Shape?”, ACM Transaction
on Graphics, Vol. 28, No.3, pp43-51.
[2] Ohtake Y., Belyaev A., & Seidel H.P, (2004) “Ridge-valley Lines on Meshes via Implicit Surface
Fitting”, ACM Transactions on Graphics, Vol. 23, No. 3, pp609-612.
[3] Shin Yoshizawa, Alexander Belyaev & Hans-Perter Seidel, (2005) “Fast and Robust Detection of
Crest Lines on Meshes”, Symposium on Solid and Physical Modeling’05, pp227-232.
[4] Soo-Kyun Kim & Chang-Hun Kim, (2006) “Finding Ridges and Valleys in A Discrete Surface Using
A Modified MLS Approximation”, Computer-Aided Design, Vol. 38, No.2, pp173-180.
[5] Georgios Stylianou & Gerald Farin, (2004) “Crest Lines for Surface Segmentation and Flattening”,
IEEE Transaction on Visualization and Computer Graphics, Vol. 10, No. 5, pp536-543.
[6] Tilke Judd, Fredo Durand & Edward H. Adelson, (2007) “ Apparent ridges for line drawing” , ACM
Transactions on Graphics, Vol. 26, No. 3, pp19-26.
[7] Chang Ha Lee, Amitabh Varshney & David W.Jacobs, (2005) “Mesh Saliency”. Proceedings of ACM
Siggraph’05, pp659-666.
[8] Ran Gal & Daniel Cohen-Or, (2006) “Salient Geometric Features for Partial Shape Matching and
Similarity”, ACM Transactions on Graphics, Vol. 25, No. 1, pp130-150.
[9] Taubin G, (1995) “Estimating the Tensor of Curvature of a Surface from a Polyhedral
Approximation”, In Proceedings of Fifth International Conference on Computer Vision’95, pp902- 907.
[10] Sachin Nigam & Vandana Agrawal, (2013) “ A Review: Curvature approximation on triangular
meshes”, Int. J. of Engineering science and Innovative Technology, Vol. 2, No. 3, pp330-339.
[11] Xunnian Yang & Jiamin Zheng, (2013) “Curvature tensor computation by piecewise surface
interpolation”, Computer Aided Design, Vol. 45, No. 12, pp1639-1650.
[12] Gady Agam & Xiaojing Tang, (2005) “A Sampling Framework for Accurate Curvature Estimation in
Discrete Surfaces”, IEEE Transaction on Visualization and Computer Graphics, Vol. 11, No. 5, pp573-
582.
[13] Meyer M., Desbrun M., Schroder P. & Barr A. H, (2003) “Discrete Differential-geometry Operators
for Triangulated 2-manifolds”, In Visualization and Mathematics III’ 03, pp35-57.
[14] Stupariu & Mihai-Sorin, (2016) “An application of triangle mesh models in detecting patterns of
vegetation”, WSCG’ 2016, pp87-90.
[15] Chen L., Xie X., Fan X., Ma W., Zhang H., & Zhou H, (2003) “A visual attention model for adapting
images on small displays”, ACM Multimedia Systems Journal, Vol. 9, No. 4, pp353-364.
[16] Lee, Y., Markosian, L., Lee, S., & Hughes, J. F, (2007) ”Line drawings via abstracted shading”,
ACM Transactions on Graphics, Vol. 26, No. 3, pp1-9.
[17] Jack Szu-Shen & Hsi-Yung Feng, (2017) “Idealization of scanning-derived triangle mesh models of
prismatic engineering parts”, International Journal on Interactive Design and Manufacturing, Vol. 11, No.
2, pp205-221.
[18] Decarlo D., Finkelstein A., Rusinkiewicz S. & Santella A,(2003) “Suggestive Contours for
Conveying Shape”, ACM Transactions on Graphics, Vol.22, No. 3, pp848-855.
[19] M. Kolomenkin, I. Shimshoni, & A. Tal,(2008) “Demarcating curves for shape illustration”, ACM
Transactions on Graphics, Vol.27, No.5, pp157-166.
[20] Michael Kolomenkin, Ilan Shimshoni & Ayellet Tal, (2009) “On Edge Detection on Surfaces”, IEEE
CVPR’ 09, pp2767-2774.
[21] M. P. Do Carmo (2004) Differential geometry of curves and surfaces, Book, China Machine Press.
[22] A. Belyaev, P.-A. Fayolle, & A. Pasko, (2013) “Signed Lp-distance fields”, CAD, Vol.45, No. 2,
pp523-528.
[23] Y Zhang, G Geng, X Wei, S Zhang & S Li, (2016) “A statistical approach for extraction of feature
lines from point clouds” ,Computers & Graphics, Vol. 56, No. 3, pp31-45.
A SURVEY OF STATE-OF-THE-ART GAN-BASED APPROACHES TO
IMAGE SYNTHESIS
Shirin Nasr Esfahani1 and Shahram Latifi2
1Department of Computer Science, UNLV, Las Vegas, USA
2Department of Electrical & Computer Eng., UNLV, Las Vegas, USA
ABSTRACT
In the past few years, Generative Adversarial Networks (GANs) have received immense attention from
researchers in a variety of application domains. This rapidly growing field of deep learning provides a
way to learn deep representations without extensive use of annotated training data. Its achievements can
be applied to speech synthesis, image and video generation, semantic image editing, and style transfer,
among others. Image synthesis is an important component of expert systems and has attracted much
attention since the introduction of GANs. However, GANs are known to be difficult to train, especially
when they are used to generate high-resolution images. This paper gives a thorough overview of
state-of-the-art GAN-based approaches in four applicable areas of image generation: text-to-image
synthesis, image-to-image translation, face aging, and 3D image synthesis. Experimental results show
state-of-the-art performance of GANs compared to traditional approaches in the fields of image
processing and machine vision.
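The training difficulty noted above comes from the adversarial objective itself: the discriminator and generator optimize opposing losses. As a generic illustration not tied to any surveyed model, the standard discriminator loss and the commonly used non-saturating generator loss can be computed from discriminator logits as follows:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def d_loss(d_logits_real, d_logits_fake):
    """Discriminator loss: classify real samples as 1 and generated ones as 0
    (binary cross-entropy on the discriminator's logits)."""
    return -(np.log(sigmoid(d_logits_real)).mean()
             + np.log(1.0 - sigmoid(d_logits_fake)).mean())

def g_loss(d_logits_fake):
    """Non-saturating generator loss: push the discriminator's output on
    generated samples toward 1, which keeps gradients alive early in training."""
    return -np.log(sigmoid(d_logits_fake)).mean()

# A confident discriminator (large logits on real, small on fake) has low loss,
# while the generator's loss is large: it is currently losing the game.
real_logits = np.array([4.0, 5.0])
fake_logits = np.array([-4.0, -5.0])
loss_d = d_loss(real_logits, fake_logits)
loss_g = g_loss(fake_logits)
```

The instability arises because both losses are minimized simultaneously on shifting targets; the surveyed architectures differ mainly in how they condition and stabilize this game.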
KEYWORDS
Conditional generative adversarial networks (cGANs), image synthesis, image-to-image translation, text-
to-image synthesis, 3D GANs.
Full Text : https://aircconline.com/csit/papers/vol9/csit90906.pdf
9th International Conference on Computer Science, Engineering and Applications (CCSEA
2019) - http://airccse.org/csit/V9N09.html
REFERENCES
[1] Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A.,
& Bengio, Y. (2014) “Generative adversarial nets”, Advances in Neural Information Processing Systems 27
(NIPS 2014), Montreal, Canada.
[2] Frey, B. J. (1998) “Graphical models for machine learning and digital communication”, MIT press.
[3] Doersch, C. (2016) “Tutorial on variational autoencoders”, arXiv preprint arXiv:1606.05908.
[4] M. Mirza & S. Osindero (2014) “Conditional generative adversarial nets”, arXiv:1411.1784v1.
[5] S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele & H. Lee (2016) “Generative adversarial text to
image synthesis”, International Conference on Machine Learning, New York, USA, pp. 1060-1069.
[6] A. Radford, L. Metz & S. Chintala (2016) “Unsupervised representation learning with deep
convolutional generative adversarial networks”, 4th International Conference of Learning Representations
(ICLR 2016), San Juan, Puerto Rico.
[7] S. Reed, Z. Akata, S. Mohan, S. Tenka, B. Schiele & H. Lee (2016) “Learning what and where to
draw”, Advances in Neural Information Processing Systems, pp. 217–225.
[8] S. Zhu, S. Fidler, R. Urtasun, D. Lin & C. L. Chen (2017) “Be your own prada: Fashion synthesis with
structural coherence”, International Conference on Computer Vision (ICCV 2017), Venice, Italy,pp.
1680-1688.
[9] S. Sharma, D. Suhubdy, V. Michalski, S. E. Kahou & Y. Bengio (2018) “ChatPainter: Improving text
to image generation using dialogue”, 6th International Conference on Learning Representations (ICLR
2018 Workshop), Vancouver, Canada.
[10] Z. Zhang, Y. Xie & L. Yang (2018) “Photographic text-to-image synthesis with a
hierarchically-nested adversarial network”, Conference on Computer Vision and Pattern Recognition
(CVPR 2018), Salt Lake City, USA, pp. 6199-6208.
[11] M. Cha, Y. Gwon & H. T. Kung (2017) “Adversarial nets with perceptual losses for text-to-image
synthesis”, International Workshop on Machine Learning for Signal Processing (MLSP 2017), Tokyo,
Japan, pp. 1-6.
[12] H. Dong, S. Yu, C. Wu & Y. Guo (2017) “Semantic image synthesis via adversarial learning”,
International Conference on Computer Vision (ICCV 2017), Venice, Italy, pp. 5706-5714.
[13] H. Zhang, T. Xu, H. Li, S. Zhang, X. Wang, X. Huang, and D. Metaxas (2017) “StackGAN: Text to
photo-realistic image synthesis with stacked generative adversarial networks”, International Conference
on Computer Vision (ICCV 2017), Venice, Italy, pp. 5907-5915.
[14] S. Hong, D. Yang, J. Choi & H. Lee (2018) “Inferring semantic layout for hierarchical text-to-image
synthesis”, Conference on Computer Vision and Pattern Recognition (CVPR 2018), Salt Lake City,
USA, pp. 7986-7994.
[15] Y. Li, M. R. Min, Di. Shen, D. Carlson, and L. Carin (2018) “Video generation from text”, 14th
Artificial Intelligence and Interactive Digital Entertainment Conference (AIIDE 2018), Edmonton,
Canada.
[16] J. Chen, Y. Shen, J. Gao, J. Liu & X. Liu (2017) “Language-based image editing with recurrent
attentive models”, The IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2018), Salt
Lake City, USA, pp. 8721-8729.
[17] A. Dash, J. C. B. Gamboa, S. Ahmed, M. Liwicki & M. Z. Afzal (2017) “TAC-GAN - Text
conditioned auxiliary classifier generative adversarial network”, arXiv preprint arXiv:1703.06412, 2017.
[18] A. Odena, C. Olah & J. Shlens (2017) “Conditional image synthesis with auxiliary classifier GANs,”
Proceedings of the 34th International Conference on Machine Learning (ICML 2017), Sydney, Australia.
[19] H. Zhang, I. Goodfellow, D. Metaxas & A. Odena (2018) “Self-attention generative adversarial
networks”, arXiv preprint arXiv:1805.08318, 2018.
[20] T. Xu, P. Zhang, Q. Huang, H. Zhang, Z. Gan, X. Huang & X. He (2018) “AttnGAN: Fine-grained
text to image generation with attentional generative adversarial networks”, The IEEE Conference on
Computer Vision and Pattern Recognition (CVPR 2018), Salt Lake City, USA, pp. 1316-1324.
[21] T. Salimans, I. J. Goodfellow, W. Zaremba, V. Cheung, A. Radford & X. Chen (2016) “Improved
techniques for training GANs”, Advances in Neural Information Processing Systems 29 (NIPS 2016),
Barcelona, Spain.
[22] P. Isola, J.-Y. Zhu, T. Park & A. A. Efros (2017) “Image-to-image translation with conditional
adversarial networks”, The IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017),
Honolulu, Hawaii, USA, pp. 1125-1134.
[23] J.-Y. Zhu, T. Park, P. Isola & A. A. Efros (2017) “Unpaired image-to-image translation using
cycle-consistent adversarial networks”, The IEEE International Conference on Computer Vision (ICCV
2017), Venice, Italy, pp. 2223-2232.
[24] M.-Y. Liu & O. Tuzel (2016) “Coupled generative adversarial networks”, 2016 Conference on
Neural Information Processing Systems (NIPS 2016), Barcelona, Spain, pp. 469–477.
[25] J. Donahue, P. Krähenbühl & T. Darrell (2016) “Adversarial feature learning”, 4th International
Conference on Learning Representations (ICLR 2016), San Juan, Puerto Rico.
[26] V. Dumoulin, I. Belghazi, B. Poole, A. Lamb, M. Arjovsky, O. Mastropietro & A. Courville (2017)
“Adversarially learned inference”, 5th International Conference on Learning Representations (ICLR
2017), Toulon, France.
[27] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, & B.
Schiele (2016) “The cityscapes dataset for semantic urban scene understanding”, The IEEE Conference
on Computer Vision and Pattern Recognition (CVPR 2016), Las Vegas, USA, pp. 3213- 3223.
[28] Q. Chen & V. Koltun (2017) “Photographic image synthesis with cascaded refinement networks”,
IEEE International Conference on Computer Vision (ICCV 2017), Venice, Italy, pp. 1520–1529.
[29] T.-C. Wang, M.-Y. Liu, J.-Y. Zhu, A. Tao, J. Kautz & B. Catanzaro (2018) “High-resolution image
synthesis and semantic manipulation with conditional GANs”, The IEEE Conference on Computer Vision
and Pattern Recognition (CVPR 2018), Salt Lake City, USA, pp. 8798-8807.
[30] G. Lample, N. Zeghidour, N. Usunier, A. Bordes, L. Denoyer & M. Ranzato (2017) “Fader networks:
Manipulating images by sliding attributes”, Advances in Neural Information Processing Systems 30
(NIPS 2017), Long Beach, USA.
[31] D. Michelsanti & Z.-H. Tan (2017) “Conditional generative adversarial networks for speech
enhancement and noise-robust speaker verification”, Proceedings of Interspeech, pp. 2008–2012.
[32] G. Antipov, M. Baccouche & J.-L. Dugelay (2017) “Face aging with conditional generative adversarial
networks”, IEEE International Conference on Image Processing (ICIP 2017), pp. 2089–2093.
[33] R. H. Byrd, P. Lu, J. Nocedal& C. Zhu (1995) “A limited memory algorithm for bound constrained
optimization”, SIAM Journal on Scientific Computing, vol. 16, no. 5, pp. 1190–1208, 1995.
[34] Z. Wang, X. Tang, W. Luo & S. Gao (2018) “Face aging with identity preserved conditional
generative adversarial networks”, Proceeding IEEE Conference Computer Vision and Pattern
Recognition, CVPR 2018), Salt Lake City, USA, pp. 7939–7947.
[35] G. Antipov, M. Baccouche & J.-L. Dugelay (2017) “Boosting cross-age face verification via
generative age normalization”, International Joint Conference on Biometrics (IJCB 2017), Denver, USA,
pp. 17.
[36] E. L.-Miller, Gary B. Huang, A. R. Chowdhury, H. Li & G. Hua (2016) “Labeled Faces in the Wild:
A Survey”, Advances in Face Detection and Facial Image Analysis, Springer, 2016, pp. 189-248.
[37] B. Amos, B. Ludwiczuk & M. Satyanarayanan (2016) “OpenFace: A general-purpose face
recognition library with mobile applications”, Technical report, CMU-CS-16-118, CMU School of
Computer Science.
[38] Z. Zhang, Y. Song & H. Qi (2017) “Age progression/regression by conditional adversarial auto
encoder”, IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), Honolulu, USA,
pp. 4352 – 4360.
[39] S. Liu, Y. Sun, D. Zhu, R. Bao, W. Wang, X. Shu & S. Yan (2017) “Face Aging with Contextual
Generative Adversarial Nets”, Proceedings of the 25th ACM international conference on Multimedia,
Mountain View, USA, pp. 82 -90.
[40] J. Song, J. Zhang, L. Gao, X. Liu & H. T. Shen (2018) “Dual Conditional GANs for Face Aging and
Rejuvenation”, Proceedings of the Twenty-Seventh International Joint Conference on Artificial
Intelligence (IJCAI-18), Stockholm, Sweden, pp. 899-905.
[41] H. Yang, D. Huang, Y. Wang & A. K. Jain (2018)” Learning face age progression: A pyramid
architecture of GANs”, Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition (CVPR 2018), Salt Lake City, USA, pp. 31– 39.
[42] P. Li, Y. Hu, Q. Li, R. He & Z. Sun (2018) “Global and local consistent age generative adversarial
networks”, IEEE International Conference on Pattern Recognition, Beijing, China.
[43] P. Li, Y. Hu, R. He & Z. Sun (2018) “Global and Local Consistent Wavelet-domain Age Synthesis”,
arXiv:1809.07764.
[44] J. Wu, C. Zhang, T. Xue, W. T. Freeman & J. B. Tenenbaum (2016) “Learning a probabilistic latent
space of object shapes via 3D generative-adversarial modeling”, In Advances in Neural Information
Processing Systems 29 (NIPS 2016), Barcelona, Spain.
[45] J. Wu, Y. Wang, T. Xue, X. Sun, B. Freeman & J. Tenenbaum (2017) “MarrNet: 3D shape
reconstruction via 2.5D sketches”, Advances in Neural Information Processing Systems, Long Beach,
USA, pp. 540–550.
[46] W. Wang, Q. Huang, S. You, C. Yang & U. Neumann (2017) “Shape inpainting using 3d generative
adversarial network and recurrent convolutional networks”, The IEEE International Conference on
Computer Vision (ICCV 2017),Venice, Italy, pp. 2298-2306.
[47] E. J. Smith & D. Meger (2017) “Improved adversarial systems for 3d object generation and
reconstruction”, first Annual Conference on Robot Learning,Mountain View, USA, pp. 87–96.
[48] P. Achlioptas, O. Diamanti, I. Mitliagkas & L. Guibas (2018) “Learning representations and
generative models for 3D point clouds”, 6th International Conference on Learning Representations,
Vancouver, Canada.
[49] X. Sun, J. Wu, X. Zhang, Z. Zhang, C. Zhang, T. Xue, J. B. Tenenbaum & W. T. Freeman (2018)
“Pix3d: Dataset and methods for single-image 3d shape modeling”, IEEE Conference on Computer
Vision and Pattern Recognition (CVPR 2018), Salt Lake City, USA, pp. 2974-2983.
[50] D. Maturana & S. Scherer (2015) “VoxNet: A 3D Convolutional Neural Network for real-time object
recognition”, 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS),
Hamburg, Germany, pp. 922–928.
[51] B. Shi, S. Bai, Z. Zhou & X. Bai (2015) “DeepPano: Deep Panoramic Representation for 3-D Shape
Recognition”, IEEE Signal Processing Letters, Vol. 22(12), pp. 2339–2343.
[52] A. Brock, T. Lim, J. Ritchie & N. Weston (2016) “Generative and discriminative voxel modeling
with convolutional neural networks”, arXiv:1608.04236.
AUTHORS
Shirin Nasr Esfahani received her M.S. degree in computer science – scientific
computation from Sharif University of technology, Tehran- Iran. She is currently a
Ph.D. candidate in computer science, University of Nevada, Las Vegas (UNLV). Her
fields of interest include, hyper spectral image processing, neural networks, deep
learning and data mining.
Shahram Latifi received the Master of Science and the PhD degrees both in Electrical
and Computer Engineering from Louisiana State University, Baton Rouge, in 1986
and 1989, respectively. He is currently a Professor of Electrical Engineering at the
University of Nevada, Las Vegas.
BLIND IMAGE QUALITY ASSESSMENT USING SINGULAR VALUE
DECOMPOSITION BASED DOMINANT EIGENVECTORS FOR
FEATURE SELECTION
Besma Sadou1, Atidel Lahoulou2*, Toufik Bouden1, Anderson R. Avila3, Tiago H. Falk3, Zahid Akhtar4
1Non Destructive Testing Laboratory, University of Jijel, Algeria
2LAOTI laboratory, University of Jijel, Algeria
3Institut National de la Recherche Scientifique, University of Québec, Montreal, Canada
4University of Memphis, USA
ABSTRACT
In this paper, a new no-reference image quality assessment (NR-IQA) metric for grey images is proposed
and validated using the LIVE II image database. The features used are extracted from three well-known NR-IQA objective
metrics based on natural scene statistical attributes from three different domains. These metrics may
contain redundant, noisy or less informative features which affect the quality score prediction. In order to
overcome this drawback, the first step of our work consists in selecting the most relevant image quality
features by using Singular Value Decomposition (SVD) based dominant eigenvectors. The second step is
performed by employing Relevance Vector Machine (RVM) to learn the mapping between the previously
selected features and human opinion scores. Simulations demonstrate that the proposed metric performs
very well in terms of correlation and monotonicity.
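A minimal sketch of the SVD-based selection step, assuming the extracted features form an images × features matrix. The scoring rule used here (weighting the absolute loadings of the dominant right singular vectors by their singular values) and the synthetic data are illustrative assumptions, not the authors' exact procedure:

```python
import numpy as np

def select_features(X, n_vectors=3, n_keep=3):
    """Rank features by their energy in the dominant right singular vectors
    of the centered feature matrix and keep the top n_keep indices."""
    Xc = X - X.mean(axis=0)
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    # Weight each dominant eigenvector's absolute loadings by its singular value.
    scores = (s[:n_vectors, None] * np.abs(Vt[:n_vectors])).sum(axis=0)
    return np.argsort(scores)[::-1][:n_keep]

rng = np.random.default_rng(1)
n = 200
informative = rng.normal(0.0, 1.00, (n, 3))  # high-variance, informative features
noisy = rng.normal(0.0, 0.01, (n, 4))        # near-constant, redundant features
X = np.hstack([informative, noisy])
kept = select_features(X)                    # indices of retained feature columns
```

The retained feature columns would then be fed to the RVM regressor to learn the mapping to human opinion scores.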
KEYWORDS
Natural Scene Statistics (NSS), Singular Value Decomposition (SVD), dominant eigenvectors, Relevance
Vector Machine (RVM).
Full Text : https://aircconline.com/csit/papers/vol9/csit90919.pdf
9th International Conference on Computer Science, Engineering and Applications (CCSEA
2019) - http://airccse.org/csit/V9N09.html
REFERENCES
[1] D. Zhang, Y. Ding , N. Zheng, “Nature scene statistics approach based on ICA for no- reference
image quality assessment”, Proceedings of International Workshop on Information and Electronics
Engineering (IWIEE), 29 (2012), 3589- 3593.
[2] A. K. Moorthy, A. C. Bovik, A two-step framework for constructing blind image quality indices,
IEEE Signal Process. Lett., 17 (2010), 513-516.
[3] L. Zhang, L. Zhang, A.C. Bovik, A Feature-Enriched Completely Blind Image Quality Evaluator,
IEEE Transactions on Image Processing, 24(8) (2015), 2579- 2591.
[4] M.A. Saad, A.C. Bovik, C. Charrier, A DCT statistics-based blind image quality index, Signal
Process. Lett. 17 (2010) 583–586.
[5] M. A. Saad, A. C. Bovik, C. Charrier, Blind image quality assessment: A natural scene statistics
approach in the DCT domain, IEEE Trans. Image Process., 21 (2012), 3339-3352.
[6] A. Mittal, A.K. Moorthy, A.C. Bovik, No-reference image quality assessment in the spatial domain,
IEEE Trans. Image Process. 21 (2012), 4695 - 4708.
[7] A. Mittal, R. Soundararajan, A. C. Bovik, Making a completely blind image quality analyzer, IEEE
Signal Process. Lett., 20 (2013), 209-212.
[8] N. Kruger, P. Janssen, S. Kalkan, M. Lappe, A. Leonardis, J. Piater, A. Rodriguez-Sanchez, L.
Wiskott, “Deep hierarchies in the primate visual cortex: What can we learn for computer vision?”, IEEE
Trans. Pattern Anal. Mach. Intell., 35 (2013), 1847–1871.
[9] D. J. Felleman, D. C. Van Essen, “Distributed hierarchical processing in the primate cerebral cortex,”
Cerebral cortex, 1 (1991), 1–47.
[10] B. Sadou, A. Lahoulou, T. Bouden, A New No-reference Color Image Quality Assessment Metric in
Wavelet and Gradient Domains, 6th International Conference on Control Engineering and Information
Technologies, Istanbul, Turkey, 25-27 October (2018), 954-959.
[11] Q. Wu, H. Li, F. Meng, K. N. Ngan, S. Zhu, No reference image quality assessment metric via
multidomain structural information and piecewise regression. J. Vis. Commun. Image R., 32(2015), 205–
216.
[12] X. Shang, X. Zhao, Y. Ding, Image quality assessment based on joint quality-aware representation
construction in multiple domains, Journal of Engineering 2018 (2018), 12p.
[13] A. Lahoulou, E. Viennet, A. Beghdadi, ‘‘Selecting low-level features for image quality assessment
by statistical methods,’’ J. Comput. Inf. Technol. CIT 18 (2010), 83–195.
[14] H. Liu, H. Motoda, R. Setiono, and Z. Zhao, “Feature Selection: An Ever Evolving Frontier in Data
Mining”, Journal of Machine Learning Research, Proceedings Track, pp. 4-13, 2010.
[15] H. R. Sheikh, Z. Wang, L. Cormack, A. C. Bovik, LIVE Image Quality Assessment Database
Release 2, http://live.ece.utexas.edu/research/quality
[16] Final VQEG report on the validation of objective quality metrics for video quality
assessment:http://www.its.bldrdoc.gov/vqeg/projects/frtv_phaseI/
[17] M. W. Mahoney, P. Drineas, “CUR matrix decompositions for improved data analysis,” in Proc. The
National Academy of Sciences, February 2009.
[18] M.E. Tipping. The relevance vector machine. In Advances in Neural Information Processing
Systems 12, Solla SA, Leen TK, Muller K-R (eds). MIT Press: Cambridge, MA (2000), 652-658.
[19] D. Basak, S. Pal, D.C. Patranabis, Support vector regression, Neural Information Processing –
Letters and Reviews, 11 (2007).
[20] B. Schölkopf, A.J. Smola, Learning with Kernels. MIT press, Cambridge, (2002).
[21] H. R. Sheikh, M. F. Sabir, A. C. Bovik, A statistical evaluation of recent full reference image quality
assessment algorithms, IEEE Trans. Image Process., 15 (2006), 3440–3451.
AUTHORS
Besma Sadou is currently a PhD student in the department of Electronics at university of Jijel (Algeria).
She also works as full-time teacher of mathematics at the middle school. Her research interests are
focused on reduced and no-reference image quality assessment.
Atidel Lahoulou received her doctorate in Signals and Images from Sorbonne Paris Cité (France) in 2012. She
earned her Habilitation Universitaire in 2017 and is currently associate professor in the department of
computer science at university of Jijel (Algeria). Her research interests include visual data quality
evaluation and enhancement, biometrics, machine learning and cybersecurity.
Toufik Bouden received the engineer diploma (1992), MSc (1995) and PhD (2007) degrees in automatics
and signal processing from the Electronics Institute of Annaba University (Algeria). Since 2015, he has been a full
professor in the department of Automatics. His areas of research are signal and image processing,
nondestructive testing and materials characterization, biometrics, transmission security and watermarking,
chaos, fractional system analysis, synthesis and control.
Anderson R. Avila received his B.Sc. in Computer Science from Federal University of Sao Carlos, Brazil,
in 2004 and his M.Sc in Information Engineering from Federal University of ABC in 2014. In October
2013, Anderson worked as a short-term visiting researcher at INRS, where he now pursues his Ph.D
degree on the topic of speaker and emotion recognition. His research interests include pattern recognition
and multimodal signal processing applied to biometrics.
Tiago H. Falk is an Associate Professor at INRS-EMT, University of Quebec and Director of the
Multimedia Signal Analysis and Enhancement (MuSAE) Lab. His research interests are in multimedia
quality measurement and enhancement, with a particular focus on human-inspired technologies.
Zahid Akhtar is a research assistant professor at the University of Memphis (USA). Prior to joining the
University of Memphis, he was a postdoctoral fellow at INRS-EMT-University of Quebec (Canada),
University of Udine (Italy), Bahcesehir University (Turkey), and University of Cagliari (Italy),
respectively. Dr. Akhtar received a PhD in electronic and computer engineering from the University of
Cagliari (Italy). His research interests are biometrics, affect recognition, multimedia quality assessment,
and cybersecurity.
VULNERABILITY ANALYSIS OF IP CAMERAS USING ARP
POISONING
Thomas Doughty1, Nauman Israr2 and Usman Adeel3
1BSc (Hons) Cyber Security and Networks, Teesside University, Middlesbrough, UK
2Senior Lecturer in Networks and Communication, Teesside University, Middlesbrough, UK
3Senior Lecturer in Computer Science, Teesside University, Middlesbrough, UK
ABSTRACT
Internet Protocol (IP) cameras and Internet of Things (IoT) devices are known for their vulnerabilities,
and Man-in-the-Middle attacks present a significant privacy and security concern. Because these attacks
are easy to perform and highly effective, they allow attackers to steal information and disrupt access to
services. We evaluate the security of six IP cameras by performing and outlining various attacks that
could be used by criminals. A threat scenario describes how a criminal may attack cameras before
and during a burglary. Our findings show that IP cameras remain vulnerable to ARP Poisoning or
Spoofing, and while some cameras use Digest Authentication to obfuscate passwords, some vendors and
applications remain insecure. We suggest methods to prevent ARP Poisoning and reiterate the need for
good password policy.
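One simple mitigation in the spirit of the suggested prevention methods is to monitor IP-to-MAC bindings and flag conflicts, since an ARP poisoner must advertise its own MAC for a victim's IP. The sketch below is a simplified illustration of that idea, not the tooling used in the paper:

```python
class ArpMonitor:
    """Flag ARP replies whose MAC conflicts with the first binding observed
    for an IP address - a common symptom of ARP poisoning/spoofing."""

    def __init__(self):
        self.bindings = {}  # ip -> first MAC observed for that ip

    def observe(self, ip, mac):
        """Record or check one (ip, mac) pair from an ARP reply.
        Returns True when the MAC conflicts with the learned binding."""
        known = self.bindings.setdefault(ip, mac)
        return known != mac

monitor = ArpMonitor()
monitor.observe("192.168.1.1", "aa:bb:cc:dd:ee:01")          # gateway learned
alert = monitor.observe("192.168.1.1", "de:ad:be:ef:00:01")  # conflicting MAC
print(alert)  # True: possible ARP poisoning of the gateway entry
```

In practice the first binding would be seeded from static ARP entries or DHCP records rather than trusted on first sight, since an attacker present at startup could poison the initial table.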
KEYWORDS
Security, Camera, Internet of Things, Passwords, Sniffing, Authentication
Full Text : https://aircconline.com/csit/papers/vol9/csit90712.pdf
8th International Conference on Soft Computing, Artificial Intelligence and Applications
(SAI 2019) - http://airccse.org/csit/V9N07.html
REFERENCES
[1] H. Sinanovic and S. Mrdovic, “Analysis of mirai malicious software,” in 2017 25th International
Conference on Software, Telecommunications and Computer Networks (SoftCOM), Sep. 2017, pp. 1–5.
DOI:10.23919/SOFTCOM.2017.8115504.
[2] C. Kolias, G. Kambourakis, A. Stavrou, and J. Voas, “DDoS in the IoT: Mirai and other botnets,”
Computer, vol. 50, no. 7, pp. 80–84, 2017, ISSN: 0018-9162. DOI: 10.1109/MC.2017.201.
[3] J. Liranzo and T. Hayajneh, "Security and privacy issues affecting cloud-based IP camera," 2017
IEEE 8th Annual Ubiquitous Computing, Electronics and Mobile Communication Conference
(UEMCON), New York, NY, 2017, pp. 458-465. DOI: 10.1109/UEMCON.2017.8249043
[4] M. Smith. (2014). Peeping into 73,000 unsecured security cameras thanks to default passwords,
[Online]. Available: https://www.csoonline.com/article/2844283/microsoft-subnet/peeping-into-73- 000-
unsecured-security-cameras-thanks-to-default-passwords.html.
[5] F. Callegati, W. Cerroni, and M. Ramilli, “Man-in-the-middle attack to the HTTPS protocol,” IEEE
Security & Privacy, vol. 7, no. 1, pp. 78–81, Jan. 2009, ISSN: 1540-7993. DOI: 10.1109/MSP.2009.12.
[6] P. Arote and K. V. Arya, “Detection and prevention against ARP poisoning attack using modified
ICMP and voting,” in 2015 International Conference on Computational Intelligence and Networks, Jan.
2015, pp. 136–141. DOI: 10.1109/CINE.2015.34.
[7] K. Boyarinov and A. Hunter, “Security and trust for surveillance cameras,” in 2017 IEEE Conference
on Communications and Network Security (CNS), Oct. 2017, pp. 384–385. DOI:
10.1109/CNS.2017.8228676.
[8] ONVIF. (2018). Conformant products, [Online]. Available:
https://www.onvif.org/conformantproducts/.
[9] R. Alharbi and D. Aspinall, “An IoT analysis framework: An investigation of IoT smart cameras’
vulnerabilities,” in Living in the Internet of Things: Cybersecurity of the IoT - 2018, Mar. 2018, pp. 1–
10. DOI: 10.1049/cp.2018.0047.
[10] H. Schulzrinne, A. Rao, R. Lanphier, M. Westerlund, and M. Stiemerling, Real-Time Streaming
Protocol Version 2.0, RFC 7826, Dec. 2016. DOI: 10.17487/RFC7826. [Online]. Available:
https://rfc-editor.org/rfc/rfc7826.txt.
[11] Aircrack-ng. (2018). Aircrack-ng, [Online]. Available: https://www.aircrack-ng.org/.
[12] Foscam. (2018). Fi9826w, [Online]. Available: https://www.foscam.com/product/2.html.
[13] Hikvision. (). Ds-2cd2535fwd-i(w)(s), [Online]. Available:
https://www.hikvision.com/en/Products/Network-Camera/EasyIP-3.0/3MP/DS-2CD2535FWDI(W)(S).
[14] LILIN. (2018). Model: Lr2522e4 / lr2522e6, [Online]. Available:
https://www.meritlilin.com/en/product/LR2522E4LR2522E6.
[15] Sricam, (2018). Model: Ipr722es4.3 / ipr722es6, [Online]. Available:
https://www.meritlilin.com/en/product/IPR722ESIPR722ES6.
[16] Sricam. (2018). Sp008, [Online]. Available:
http://www.sricam.com/product/id/9d5d656a907f46e48da1d45b9d0115ed.html.
[17] Sricam, (). Sp017, [Online]. Available:
http://www.sricam.com/product/id/66e005d40593482ca14957fe87562952.html.
[18] J. Franks, P. M. Hallam-Baker, J. L. Hostetler, S. D. Lawrence, P. J. Leach, A. Luotonen, and L. C.
Stewart. (Jun. 1999). HTTP authentication: Basic and digest access authentication, [Online]. Available:
http://www.rfc-editor.org/rfc/rfc2617.txt.
[19] P. Hawkes, M. Paddon, and G. G. Rose, Musings on the Wang et al. MD5 collision, Cryptology ePrint
Archive, Report 2004/264, 2004. [Online]. Available: https://eprint.iacr.org/2004/264.
[20] D. Pauli. (2016). Security! experts! slam! yahoo! management! for!using! old! crypto! [Online].
Available: https://www.theregister.co.uk/2016/12/15/yahoospasswordhash/.
[21] P. Shankdhar. (2018). Popular tools for brute-force attacks (updated for 2018), [Online]. Available:
https://resources.infosecinstitute.com/popular-tools-for-brute-force-attacks/.
[22] The Snort Project. (2018). Snort users manual 2.9.12, [Online]. Available: http://manual-snort-org.s3-website-us-east-1.amazonaws.com/.
[23] N. Tripathi and B. M. Mehtre, “Analysis of various ARP poisoning mitigation techniques: A
comparison,” in 2014 International Conference on Control, Instrumentation, Communication and
Computational Technologies (ICCICCT), Jul. 2014, pp. 125–132. DOI: 10.1109/ICCICCT.2014.6992942.
[24] R. Shekh-Yusef, D. Ahrens, and S. Bremer, “Http digest access authentication,” RFC Editor, RFC
7616, Sep. 2015.
[25] E. W. (2018). Not perfect, but better: Improving security one step at a time, [Online]. Available:
https://www.ncsc.gov.uk/blog- post/not-perfect-better-improving-security-one-step-time.
AUTHORS
Thomas Doughty is a graduate of Teesside University and received a BSc (Hons) in
Cybersecurity and Networks. His research interests include Cyber Security and the
Internet of Things.
Dr. Nauman Israr is currently a Senior Lecturer in Networks and Communication at
Teesside University. His research interests include Wireless Sensor Networks,
Intelligent Computing and Cluster Communication.
Dr. Usman Adeel is currently a Senior Lecturer in Computer Science at Teesside
University. He holds a PhD in Computing from Imperial College London. His research
interests are focused on Distributed Sensing Systems and their applications for the
Internet of Things and Cyber-Physical Systems.
BRAIN COMPUTER INTERFACE FOR BIOMETRIC
AUTHENTICATION BY RECORDING SIGNAL
Abd Abrahim Mosslah1, Reyadh Hazim Mahdi2 and Shokhan M. AlBarzinji3
1University of Anbar, College of Islamic Science, Anbar, Iraq
2Dept. of Computer Science, College of Science, University of Mustansiriyah, Baghdad, Iraq
3College of Computer Science and Information Technology, University of Anbar, Anbar, Iraq
ABSTRACT
Electroencephalography (EEG) records what are referred to as brainwaves, which scientists interpret as
an electromagnetic phenomenon that reflects activity in the human brain. EEG is used to diagnose brain
diseases such as schizophrenia, epilepsy, Parkinson's and Alzheimer's, and it is also used in brain-machine
and brain-computer interfaces; in these applications, wireless recording of the waves is necessary. What
we need today is authentication. Authentication is obtained through several techniques; in this paper we
examine the efficiency of conventional techniques such as passwords and PINs. There are also biometric
techniques used for authentication, such as heart rate, fingerprint, retina and voice, which provide
acceptable authentication. To obtain a technology that gives integrated and efficient authentication, we
use brainwave recording. The aim of the technique proposed in this paper is to improve the efficiency of
the wireless reception of brain signals and to provide authentication.
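As a rough illustration of the kind of processing such a system involves (not the authors' algorithm), brainwave-based authentication is commonly built on band-power features of the classic EEG rhythms; the band limits, sampling rate and acceptance threshold below are assumptions for the sketch:

```python
import numpy as np

def band_power(signal, fs, band):
    """Mean power of an EEG channel inside a frequency band (FFT sketch)."""
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    lo, hi = band
    return psd[(freqs >= lo) & (freqs < hi)].mean()

def eeg_features(signal, fs=256):
    # Delta, theta, alpha and beta bands (Hz) -- standard EEG rhythms
    bands = [(0.5, 4), (4, 8), (8, 13), (13, 30)]
    v = np.array([band_power(signal, fs, b) for b in bands])
    return v / v.sum()  # normalise so features compare across sessions

def authenticate(sample, template, threshold=0.05):
    # Accept when the L1 distance between feature vectors is small;
    # the threshold is a hypothetical value, tuned per deployment.
    return np.abs(eeg_features(sample) - eeg_features(template)).sum() < threshold
```

A real system would enroll several recordings per user and use a proper classifier, but the pipeline shape (record, extract band features, compare to an enrolled template) is the same.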
KEYWORDS
Related work, EEG brain signal, Brain wave, Overall project outline, System requirements.
Full Text : https://aircconline.com/csit/papers/vol9/csit90613.pdf
6th International Conference on Artificial Intelligence and Applications (AIAP-2019) -
http://airccse.org/csit/V9N06.html
REFERENCES
[1] Electroencephalogram.PDF, 15 July 2007.
[2] Wenjie Xu, Cuntai Guan, Chng Eng Siong, S. Ranganatha, M. Thulasidas and Jiankang Wu, “High
accuracy classification of EEG signal,” in Proc. 17th International Conference on Pattern Recognition
(ICPR’04), pp. 391-394.
[3] Marcel, S., & Millán, J. D. R. (2007). “Person authentication using brainwaves (EEG) and maximum
a posteriori model adaptation.” Pattern Analysis and Machine Intelligence, IEEE Transactions on, 29(4),
pp. 743-752.
[4] Poulos, M., Rangoussi, M., Alexandris, N. & Evangelou, A. (2001). “On the use of EEG features towards
person identification via neural networks.” Informatics for Health and Social Care, 26(1), pp. 35-48.
[5] Palaniappan, R. (2008). “Two-stage biometric authentication method using thought activity brain
waves.” International Journal of Neural Systems, 18(01), pp. 59-66.
[6] E. Başar. Brain Function and Oscillations: Integrative brain function. Neurophysiology and
cognitive processes. Springer series in synergetics. Springer, 1999. ISBN 9783540643456.
[7] W. Klimesch, “Theta band power in the human scalp EEG and the encoding of new information,”
Neuroreport, vol. 7, no. 7, pp. 1235-1240, 1996.
[8] Bressler SL. The gamma wave: a cortical information carrier? Trends Neurosci 1990;13:161–162.
[9] Patrizio Campisi and Daria La Rocca, “Brain Waves for Automatic Biometric-Based User
Recognition”, IEEE Transactions on Information Forensics and Security, Vol. 9, No. 5, pp 782-800, May
2014.
[10] J. Klonovs, C. Petersen, H. Olesen, and A. Hammershoj, “ID proof on the go: Development of a
mobile EEG-based biometric authentication system,” IEEE Veh. Technol. Mag., vol. 8, no. 1, pp. 81– 89,
Mar. 2013.
[11] Abd et al. “Biometrics detection and recognition based-on geometrical features extraction”, In
Proceedings of the IEEE 2018 International Conference on Advance of Sustainable Engineering and its
Application (ICASEA). Date of Conference: 14-15 March 2018. Date Added to IEEE Xplore: 04 June
2018, INSPEC Accession, Number: 17807703, DOI: 10.1109/ICASEA.2018.8370956.
[12] K. Brigham and B. V. Kumar, “Subject identification from electroencephalogram (EEG) signals
during imagined speech,” in Proc. IEEE 4th Int. Conf. BTAS, Sep. 2010, pp. 1–8.
[13] M. Poulos, M. Rangoussi, and N. Alexandris, “Neural network based person identification using
EEG features,” in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., vol. 2. Mar. 1999, pp. 1117–
1120.
[14] M. Poulos, M. Rangoussi, V. Chrissikopoulos, and A. Evangelou, “Person identification based on
parametric processing of the EEG,” in Proc. 6th IEEE Int. Conf. Electr., Circuit Syst., Sept. 1999, pp.
283–286.
[15] C. He and Z. J. Wang, “An independent component analysis (ICA) based approach for EEG person
authentication,” in Proc. 3rd ICBBE, 2010, pp. 1–10.
[16] A. Riera, A. Soria-Frisch, M. Caparrini, C. Grau, and G. Ruffini, “Unobtrusive biometric system
based on electroencephalogram analysis,” EURASIP J. Adv. Signal Process, vol. 2008, 2008.
[17] F. Su, H. Zhou, Z. Feng, and J. Ma, “A biometric-based covert warning system using EEG,” in Proc.
5th IAPR Int. Conf. Biometrics ICB, 2012, pp. 342–347.
[18] P. Campisi et al., “Brain waves based user recognition using the ‘eyes closed resting conditions’
protocol,” in Proc. IEEE Int. WIFS, Nov. 2011, pp. 1–6,
[19] D. La Rocca, P. Campisi, and G. Scarano, “On the repeatability of EEG features in a biometric
recognition framework using a resting state protocol,” in Proc. BIOSIGNALS, 2013, pp. 20–2.
[20] R. Paranjape, J. Mahovsky, L. Benedicenti, and Z. Koles, “The electroencephalogram as a
biometric,” in Proc. Can. Conf. Electr. Comput. Eng., 2001, pp. 1363–1366.
[21] K. Das, S. Zhang, B. Giesbrecht, and M. P. Eckstein, “Using rapid visually evoked EEG activity
for person identification,” p. 2493.
AUTHORS
M.Sc. Abd Abrahim Mosslah was born in the Alaesawi village, Fallujah, in 1971. He
obtained his M.Sc. in Computer Science from Mustansiriyah University, Baghdad,
Iraq. He is currently an instructor at the College of Islamic Science, University of
Anbar, Iraq. His research interests are Artificial Neural Networks, Computer
Networks, Image Processing, Software Engineering and Genetic Algorithms.
M.Sc. Reyadh Hazim Mahdi obtained his M.Sc. from Universiti Utara Malaysia. He is
currently an instructor at the College of Science, University of Mustansiriyah,
Baghdad, Iraq. His research interests are Artificial Neural Networks, Computer
Networks, Image Processing and Software Engineering.
M.Sc. Shokhan M. Al-Barzinji is currently an instructor at the College of Computer
Science and Information Technology, University of Anbar, Anbar, Iraq. Her research
interests are Medical Image Processing, Image Processing, Internet of Things, Cloud
Computing and Visualization.
METHOD FOR THE DETECTION OF CARRIER-IN-CARRIER SIGNALS
BASED ON FOURTH-ORDER CUMULANTS
Vasyl Semenov, Pavel Omelchenko and Oleh Kruhlyk
Department of Algorithms, Delta SPE LLC, Kiev, Ukraine
ABSTRACT
The method for the detection of Carrier-in-Carrier signals based on the calculation of fourth-order
cumulants is proposed. In accordance with the methodology based on the “Area under the curve” (AUC)
parameter, a threshold value for the decision rule is established. It was found that the proposed method
provides the correct detection of the sum of QPSK signals for a wide range of signal-to-noise ratios. The
obtained AUC value indicates the high efficiency of the proposed detection method. The advantage of the
proposed detection method over the “radiuses” method is also shown.
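To make the idea concrete, here is a minimal sketch of a fourth-order-cumulant detection statistic (the normalized C42, one common choice from the cumulant-classification literature, not necessarily the exact statistic of the paper); the decision threshold below is a hypothetical value, standing in for the one the authors derive from the AUC analysis:

```python
import numpy as np

def c42_normalized(x):
    """Normalized fourth-order cumulant C42 of a zero-mean complex signal.
    It tends to -1 for QPSK and to 0 for circular Gaussian noise."""
    p = np.mean(np.abs(x) ** 2)                 # C21, the signal power
    c42 = (np.mean(np.abs(x) ** 4)
           - np.abs(np.mean(x ** 2)) ** 2
           - 2 * p ** 2)
    return c42 / p ** 2                          # scale-invariant statistic

rng = np.random.default_rng(0)
n = 20000
qpsk = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], n) / np.sqrt(2)
noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

# Decision rule sketch: declare "signal present" when |C42| exceeds a
# threshold chosen offline from the ROC/AUC analysis (0.3 is illustrative).
threshold = 0.3
print(abs(c42_normalized(qpsk)) > threshold)   # True
print(abs(c42_normalized(noise)) > threshold)  # False
```

The same statistic stays bounded away from zero for a sum of two QPSK signals, which is what makes it usable for Carrier-in-Carrier detection.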
KEYWORDS
Carrier-in-Carrier, Cumulants, QPSK.
Full Text : https://aircconline.com/csit/papers/vol9/csit90503.pdf
7th International Conference on Computational Science and Engineering (CSE) -
http://airccse.org/csit/V9N05.html
REFERENCES
[1] Agne, Craig & Cornell, Billy & Dale, Mark & Kearns, Ronald & Lee, Frank, (2010) “Shared spectrum
bandwidth efficient satellite communications”, Proceedings of the IEEE Military Communications
Conference (MILCOM’10), pp341-346.
[2] Gouldieff, Vincent & Palicot, Jacques, (2015) “MISO Estimation of Asynchronously Mixed BPSK
Sources”, Proc. IEEE Conf. EUSIPCO, pp369-373.
[3] Feng, Hao & Gao, Yong, (2016) “High-Speed Parallel Particle Filter for PCMA Signal Blind
Separation”, Radioelectronics and Communications Systems, Vol.59, No.10, pp305-313.
[4] Semenov, Vasyl, (2018) “Method of Iterative Single-Channel Blind Separation for QPSK Signals”,
Mathematical and computer modelling, Vol. 17, No. 2, pp108-116.
[5] Fernandes, Carlos Estevao R. & Comon, Pierre & Favier, Gerard, (2010) “Blind identification of
MISO-FIR channels”, Signal Processing, Vol. 90, pp490–503.
[6] Swami, Ananthram & Sadler, Brian M., (2000) “Hierarchical digital modulation classification
using cumulants,” IEEE Trans. Commun., Vol. 48, pp416-429.
[7] Wunderlich, Adam & Goossens, Bart & Abbey, Craig K. “Optimal Joint Detection and Estimation
That Maximizes ROC-Type Curves” (2016) IEEE Transactions on Medical Imaging, Vol. 35, No.9,
pp2164– 2173.
AUTHORS
Vasyl Semenov received a Ph.D. in Acoustics from Institute of Hydromechanics of
National Academy of Sciences of Ukraine in 2004. He is currently the chief of the
Department of Algorithms at Delta SPE LLC, Kiev, Ukraine and doctoral student at
the Institute of Cybernetics of National Academy of Sciences of Ukraine. His main
research interests are in the fields of digital signal processing, demodulation, blind
separation, and recognition systems.
Pavel Omelchenko received a Ph.D. in Mathematics from Institute of Mathematics of National Academy
of Sciences of Ukraine in 2010. He is currently the member of the Department of Algorithms at Delta
SPE LLC, Kiev, Ukraine. His main research interests are in the fields of digital signal processing,
demodulation, blind separation, and cryptanalysis systems.
Oleh Kruhlyk received M.Sc. degree in Radioelectronics from the National Technical University of
Ukraine “Kiev Polytechnic Institute” in 2017. He is currently the member of the Department of
Algorithms at Delta SPE LLC, Kiev, Ukraine and Ph.D. student at the National Technical University of
Ukraine “Kiev Polytechnic Institute”. His main research interests are in the fields of digital signal
processing, demodulation, and blind separation methods.
A DFG PROCESSOR IMPLEMENTATION FOR DIGITAL SIGNAL
PROCESSING APPLICATIONS
Ali Shatnawi, Osama Al-Khaleel and Hala Alzoubi
Department of Computer Engineering, Jordan University of Science and Technology, Irbid, Jordan
ABSTRACT
This paper proposes a new scheduling technique for digital signal processing (DSP) applications
represented by data flow graphs (DFGs). Hardware implementation in the form of a specialized
embedded system, is proposed. The scheduling technique achieves the optimal schedule of a given DFG
at design time. The optimality criterion targeted in the proposed algorithm is the maximum throughput
that can be achieved by the available hardware resources. Each task is presented in the form of an
instruction to be executed on the available hardware. The architecture is composed of one or multiple
homogeneous pipelined processing elements, designed to achieve the maximum possible sampling rate
for several DSP applications. In this paper, we present a processor implementation of the proposed
architecture. It comprises one processing element where all tasks are processed sequentially. The
hardware components are built on an FPGA chip using Verilog HDL. The architecture requires a very
small area size, which is represented by the number of slice registers and the number of slice lookup
tables (LUTs). The proposed scheduling technique is shown to outperform the retiming technique
proposed in the literature by 19.3%.
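The flavour of turning a DFG into a sequence of instructions for a single processing element can be sketched with a plain list schedule (this is an illustration of the execution model, not the authors' optimal-throughput algorithm; the tiny filter-section graph is an invented example):

```python
from collections import deque

def schedule(tasks, edges, latency=1):
    """List-schedule a DFG on one processing element (sketch).
    tasks: node names; edges: (producer, consumer) pairs.
    Returns {task: start_cycle} honouring data dependencies."""
    succ = {t: [] for t in tasks}
    indeg = {t: 0 for t in tasks}
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    ready = deque(t for t in tasks if indeg[t] == 0)
    start, clock = {}, 0
    while ready:
        t = ready.popleft()
        start[t] = clock
        clock += latency                 # one instruction issued per cycle
        for v in succ[t]:
            indeg[v] -= 1
            if indeg[v] == 0:            # all operands now produced
                ready.append(v)
    return start

# Hypothetical DFG: two multiplies feed an add, which feeds the output
print(schedule(["m1", "m2", "add", "out"],
               [("m1", "add"), ("m2", "add"), ("add", "out")]))
```

With multiple homogeneous pipelined elements, the same dependency bookkeeping applies but several ready tasks can be issued per cycle, which is where the throughput optimization of the paper comes in.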
KEYWORDS
Data Flow Graphs, Task Scheduling, Processor Design, Hardware Description Language
Full Text : https://aircconline.com/csit/papers/vol9/csit90402.pdf
8th International Conference on Advanced Computer Science and Information Technology
(ICAIT 2019) - http://airccse.org/csit/V9N04.html
REFERENCES
[1] DeFatta D, Lucas J, Hodgkiss W. Digital signal processing, a system design approach. John Wiley &
Sons. 1988.
[2] Trevillyan L. An overview of logic synthesis systems. Conference on Design Automation. IEEE;
1987; 166-172.
[3] Schafer R, Oppenheim A. Digital Signal Processing. 1st ed. Englewood Cliffe, New Jersey: Prentice
Hall; 1975; 31-32.
[4] Shatnawi A. Compile-time scheduling of digital signal processing data flow graphs onto
homogeneous multiprocessor systems. Ph.D. Thesis, Department of Electrical and Computer Engineering,
Concordia University, Montreal, Canada, 1996.
[5] Shatnawi A. Optimal Scheduling of Digital Signal Processing Data-flow Graphs using Shortest-path
Algorithms. The Computer Journal. 2002; 45(1):88-100.
[6] Wang G, Wang Y, Liu H, Guo H. HSIP: A Novel Task Scheduling Algorithm for Heterogeneous
Computing. Scientific Programming. 2016; 2016:1-11.
[7] Ullah Munir E, Mohsin S, Hussain A. SDBATS: A Novel Algorithm for Task Scheduling in
Heterogeneous Computing Systems. Parallel and Distributed Processing Symposium Workshops & PhD
Forum (IPDPSW). IEEE; 2013; 43-53.
[8] Liu G, He Y, Guo L. Static Scheduling of Synchronous Data Flow onto Multiprocessors for
Embedded DSP Systems. Third International Conference on Measuring Technology and Mechatronics
Automation. IEEE. 2011; 338–341.
[9] Zhou N, Qi D, Wang X, Zheng Z, Lin W. A list scheduling algorithm for heterogeneous systems
based on a critical node cost table and pessimistic cost table. Concurrency and Computation: Practice and
Experience. 2016;29(5):1-11.
[10] Kang Y, Lin Y. A Recursive Algorithm for Scheduling of Tasks in a Heterogeneous Distributed
Environment. 2011 4th International Conference on Biomedical Engineering and Informatics (BMEI).
IEEE. 2011; 2099-2103.
[11] Woods R, McAllister J, Lightbody G, Yi Y. FPGA-Based Implementation of Signal Processing
Systems. Chichester, United Kingdom: John Wiley & Sons 2009; 145-169.
[12] Parhi K, Messerschmitt D. Static rate-optimal scheduling of iterative data-flow programs via
optimum unfolding. IEEE Transactions on Computers. 1991;
[13] McFarland M, Parker A, Camposano R. Tutorial on high-level synthesis, 25th Design Automat.
1988. p. 330-336.
[14] Hurson A, Milutinović V, Advances in computers. Waltham, MA : Academic Press, 2015;(96):1-45.
[15] De Groot S, Gerez S, Herrmann O. Range-chart-guided iterative data-flow graph scheduling. IEEE
Transactions on Circuits and Systems I: Fundamental Theory and Applications. 1992; 39(5):351-364.
AUTHORS
Ali Shatnawi is a professor of computer engineering. He received the B.Sc and M.Sc
in electrical and computer engineering from the Jordan University of Science and
Technology (JUST) in 1989 and 1992, respectively; and the Ph.D degree in electrical
and computer engineering from Concordia University, Canada, in 1996. He has been
on the faculty of the Jordan University of Science and Technology since 1996. He
served as the director of computer centre 1996-1999, Vice-dean 2001-2002, Dean of
IT at Hashemite University 2002-2005 and dean of Computer and Information Technology, JUST, 2016-
2018. His present research includes algorithms and optimizations, hardware scheduling, computer
architecture and high level synthesis of DSP applications.
Osama Al-Khaleel is an associate professor of Computer Engineering in the
Department of Computer Engineering of Jordan University of Science and
Technology (Irbid, Jordan), received his B.S in Electrical Engineering from Jordan
University of Science and Technology in 1999, M.Sc. and Ph.D. in Computer
Engineering from Case Western Reserve University, Cleveland, OH, USA in 2003
and 2006 respectively. Currently, his main research interests are in embedded systems
design, reconfigurable computing, computer arithmetic, and logic design.
Hala AL-Zu'bi received her B.SC. in Computer Engineering from Yarmouk University
in 2012, and M.Sc. in Computer Engineering from Jordan University of Science &
Technology in 2018. Her research interests include computer architecture, hardware
description language, task scheduling and data flow computing.
OCCLUSION HANDLED BLOCK-BASED STEREO MATCHING WITH
IMAGE SEGMENTATION
Jisu Kim, Cheolhyeong Park, Ju O Kim and Deokwoo Lee
Department of Computer Engineering, Keimyung University, Daegu 42601, Republic of Korea
ABSTRACT
This paper chiefly deals with techniques of stereo vision, and particularly focuses on the procedure of
stereo matching. In addition, the proposed approach deals with the detection of regions of occlusion.
Prior to carrying out stereo matching, image segmentation is conducted in order to achieve precise
matching results. In practice, stereo matching algorithms sometimes suffer from insufficient accuracy if
occlusion is inherent in the scene of interest. The search for matching regions is conducted based on
cross correlation and on finding the region with the minimum mean square error of the difference
between the areas of interest defined in the matching window. The Middlebury dataset is used for
experiments and for comparison with existing results, and the proposed algorithm shows better
performance than existing matching algorithms. To evaluate the proposed algorithm, we compare the
resulting disparity maps to existing ones.
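A minimal sketch of the block-matching core described above, using only the minimum-MSE criterion (the segmentation and occlusion-handling steps of the paper are omitted, and the window and search-range sizes are illustrative):

```python
import numpy as np

def disparity_map(left, right, block=5, max_disp=16):
    """Block-based stereo matching sketch: for each block in the left
    image, search along the same row of the right image for the shift
    with minimum mean squared error."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w))
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            # MSE against every candidate shift d along the epipolar line
            errors = [np.mean((patch - right[y - half:y + half + 1,
                                             x - d - half:x - d + half + 1]) ** 2)
                      for d in range(max_disp)]
            disp[y, x] = int(np.argmin(errors))   # best-matching shift
    return disp
```

On a synthetic pair where the right view is the left view shifted by a few pixels, the recovered disparity equals that shift in the interior of the image; occluded regions are exactly where this per-block minimum becomes unreliable, motivating the occlusion detection of the paper.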
KEYWORDS
Occlusion, Stereo vision, Segmentation, Matching.
Full Text : https://airccj.org/CSCP/vol9/csit90303.pdf
7th International Conference on Signal Image Processing and Multimedia (SIPM 2019) -
http://airccse.org/csit/V9N03.html
REFERENCES
[1] Hartley, Richard. & Zisserman, Andrew (2003) Multiple View Geometry in Computer Vision,
Computer graphics, image processing and robotics, Cambridge University Press.
[2] Mühlmann, Karsten & Maier, Dennis & Hesser, Jürgen & Männer, Reinhard, (2002) “Calculating
Dense Disparity Maps from Color Stereo Images, an Efficient Implementation”, International Journal of
Computer Vision, Vol. 47, No. 1, pp.79-88.
[3] Xu, Jintao & Yang, Qingxiong & Feng, Zuren, (2016) “Occlusion-Aware Stereo Matching”,
International Journal of Computer Vision, Vol. 120, No. 3, pp.256-271.
[4] Kim Kyung Rae & Kim Chang Su, (2016) “Adaptive smoothness constraints for efficient stereo
matching using texture and edge information”, 2016 IEEE International Conference on Image Processing
(ICIP), pp.3429-3433.
[5] Brown, Myron Z & Burschka, Darius & Hager, Gregory D, (2003) “Advances in computational
stereo”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 25, No. 8, pp.993- 1008.
[6] Huang, Xiaoshui & Yuan, Chun & Zhang Jian, (2015) “Graph Cuts Stereo Matching Based on
PatchMatch and Ground Control Points Constraint”, Advances in Multimedia Information Processing –
PCM, Vol. 9315, pp14-23.
[7] Mozerov, Mikhail G & Weijer, Joost van de, (2015) “Accurate Stereo Matching by Two-Step Energy
Minimization”, IEEE Transactions on Image Processing, Vol. 24, No. 3, pp.1153-163.
[8] Salehian, Behzad & Fotouhi, Ali M & Raie, Abolghasem A, (2018) “Dynamic programming-based
dense stereo matching improvement using an efficient search space reduction technique”, Optik, Vol.
160, pp.1-12.
[9] Zhu, Shiping & Yan, Lina, (2017) “Local stereo matching algorithm with efficient matching cost and
adaptive guided image filter”. The Visual Computer, Vol. 33, No. 9, pp. 1087-1102.
[10] Kang, C & Kim, J & Lee, S & Nam, K, (1997) “Stereo Matching Using Dynamic Programming with
Region Partition”. Journal of the Institute of Electronics and Information Engineers, Vol. 20, No. 1,
pp.479-482.
[11] Lowe, David G, (1999) “Object recognition from local scale-invariant features”, Proceedings of the
Seventh IEEE International Conference on Computer Vision, pp.1-8.
[12] Bay, Herbert & Tuytelaars, Tinne & Gool, Luc V, (2008) “Speeded-Up Robust Features (SURF)”,
Computer Vision and Image Understanding, Vol. 110, No. 3, pp.345-359.
[13] Lee, K-M. and Lin, C-H, (2017) “An Image Segmentation and Merge Hierarchical Region using
Mean-Shift Tracking Algorithm”, Proceedings of Annual Conference of IEIE, pp.704-706.
[14] Scharstein, D & Szeliski, R, (2002) “A taxonomy and evaluation of dense two-frame stereo
correspondence algorithms”, International Journal of Computer Vision, Vol. 47, No. 1, pp.7-42.
[15] Scharstein, D & Szeliski, R, (2003) “High-Accuracy Stereo Depth Maps Using Structured Light”,
IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), pp.195-202.
AUTHORS
Jisu Kim is in department of computer engineering, Keimyung University, Daegu,
Republic of Korea. He is currently working on image processing, computer vision,
signal processing and machine learning. He is currently pursuing his M.S degree in
computer engineering.
Cheolhyeong Park is in department of computer engineering, Keimyung University,
Daegu, Republic of Korea. He is currently working on geometric image analysis,
computer vision, computer graphics and machine learning. He is in the course of
integrated B.S and M.S degree in computer engineering.
Ju O Kim is in department of computer engineering, Keimyung University, Daegu,
Republic of Korea. He is currently working on image analysis and Processing. He is
pursuing B.S degree in computer engineering.
Dr. Deokwoo Lee is an Assistant Professor in the department of computer engineering at Keimyung
University. Dr. Lee has received B.S degree in electrical engineering from Kyungpook
National University, Daegu, Republic of Korea, and M.S and Ph.D degree from North
Carolina State University, Raleigh, NC, USA, respectively. He has been working on
the areas of computer vision, image processing, signal processing and machine
learning. In particular, he has been conducting camera calibration, bio-signal analysis
and image denoising.
ORDER PRESERVING STREAM PROCESSING IN FOG COMPUTING
ARCHITECTURES
K. Vidyasankar
Department of Computer Science, Memorial University of Newfoundland, St. John’s, Newfoundland,
Canada
ABSTRACT
A Fog Computing architecture consists of edge nodes that generate and possibly pre-process (sensor)
data, fog nodes that do some processing quickly and do any actuations that may be needed, and cloud
nodes that may perform further detailed analysis for long-term and archival purposes. Processing of a
batch of input data is distributed into sub-computations which are executed at the different nodes of the
architecture. In many applications, the computations are expected to preserve the order in which the
batches arrive at the sources. In this paper, we discuss mechanisms for performing the computations at a
node in correct order, by storing some batches temporarily and/or dropping some batches. The former
option causes a delay in processing and the latter option affects Quality of Service (QoS). We bring out
the tradeoffs between processing delay and storage capabilities of the nodes, and also between QoS and
the storage capabilities.
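The store-or-drop choice described above can be sketched as a small buffer policy at a node (a hypothetical drop-newest policy, chosen only to illustrate the storage/QoS tradeoff the paper analyses):

```python
class OrderPreservingBuffer:
    """In-order batch delivery at a fog node (sketch): out-of-order
    batches are held, up to `capacity`, until their predecessors arrive;
    when the buffer is full, the newly arrived batch is dropped."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.next_seq = 0          # sequence number expected next
        self.held = {}             # seq -> batch waiting for predecessors
        self.delivered = []        # batches processed, in source order
        self.dropped = []          # batches sacrificed (QoS loss)

    def receive(self, seq, batch):
        if seq == self.next_seq:
            self.delivered.append(batch)
            self.next_seq += 1
            # Release any consecutive batches that were being held
            while self.next_seq in self.held:
                self.delivered.append(self.held.pop(self.next_seq))
                self.next_seq += 1
        elif len(self.held) < self.capacity:
            self.held[seq] = batch         # store: adds processing delay
        else:
            self.dropped.append(batch)     # drop: degrades QoS
```

With a larger `capacity`, fewer batches are dropped but held batches wait longer before processing, which is precisely the delay-versus-QoS tradeoff the paper brings out.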
KEYWORDS
Fog computing, Order preserving computations, Quality of Service
Full Text : https://airccj.org/CSCP/vol9/csit90104.pdf
3rd International Conference on Computer Science and Information Technology (COMIT
2019) - http://airccse.org/csit/V9N01.html
REFERENCES
[1] F. Bonomi, R. Milito, J. Zhu & S. Addepalli (2012) “Fog computing and its role in the internet of
things”, Proceedings of the First Edition of the MCC Workshop on Mobile Cloud Computing, MCC ’12,
pp 13–16, New York, NY, USA, ACM.
[2] F. Bonomi, R. Milito, P. Natarajan & J. Zhu (2014) “Fog computing: A platform for internet of things
and analytics”, In N. Bessis and C. Dobre, editors, Big Data and Internet of Things: A Roadmap for Smart
Environments, pp 169–186, Springer International Publishing, Cham.
[3] C. Chang, S. N. Srirama & R. Buyya (2017) “Indie fog: An efficient fog-computing infrastructure for
the internet of things”, Computer, Vol. 50, No. 9, pp 92–98.
[4] A. V. Dastjerdi & R. Buyya (2016) “Fog computing: Helping the internet of things realize its
potential”, Computer, Vol. 49, No. 8, pp 112–116.
[5] K. Vidyasankar (1991) “Unified theory of database serializability”, Fundamenta Informaticae, Vol. 1,
No. 2, pp 145-153.
[6] K. Vidyasankar (2018a) “Distributing computations in fog architectures”, TOPIC’18 Proceedings,
Association for Computing Machinery.
[7] K. Vidyasankar (2018b) “Atomicity of executions in fog computing architectures”, Proceedings of the
Twenty Seventh International Conference on Software Engineering and Data Engineering (SEDE18).
[8] N. Conway (2008) “Transactions and data stream processing”, Online Publication, pages 1–28.
http://neilconway.org/docs/stream_txn.pdf.
[9] J. Meehan, N. Tatbul, S. Zdonik, C. Aslantas, U. Cetintemel, J. Du, T. Kraska, S. Madden, D. Maier,
A. Pavlo, M. Stonebraker, K. Tufte, & H. Wang (2015) “S-Store: Streaming meets transaction
processing”, Proc. VLDB Endow., Vol. 8, No. 13, pp 2134–2145.
[10] I. Botan, P. M. Fischer, D. Kossmann, & N. Tatbul (2012) “Transactional stream processing”,
Proceedings EDBT, ACM Press.
[11] L. Gürgen, C. Roncancio, S. Labbé & V. Olive (2006) “Transactional issues in sensor data
management”, Proceedings of the 3rd International Workshop on Data Management for Sensor Networks
(DMSN’06), Seoul, South Korea, pp 27–32.
[12] M. Oyamada, H. Kawashima, & H. Kitagawa (2013) “Continuous query processing with concurrency
control: Reading updatable resources consistently”, Proceedings of the 28th Annual ACM Symposium on
Applied Computing, SAC ’13, pp 788–794, New York, NY, USA, ACM.
[13] K. Vidyasankar (2017) “On continuous queries in stream processing”, The 8th International
Conference on Ambient Systems, Networks and Technologies (ANT-2017), Procedia Computer Science,
pp 640–647. Elsevier.
[14] L. Andrade, M. Serrano & C. Prazeres (2018) “The data interplay for the fog of things: A transition to
edge computing with IoT”, Proceedings of the 2018 IEEE International Conference on Communications
(ICC), IEEE Xplore.
[15] S. H. Mortazavi, M. Salehe, C. S. Gomes, C. Phillips & E. de Lara (2017) “Cloudpath: A multi-tier
cloud computing framework”, Proceedings of the Second ACM/IEEE Symposium on Edge Computing,
SEC ’17, pp 20:1–20:13, New York, NY, USA, ACM.
[16] storm.apache.org/releases/1.0.6/Transactional-topologies.html.
[17] Jin Li, Kristin Tufte, Vladislav Shkapenyuk, Vassilis Papadimos, Theodore Johnson & David Maier
(2008) “Out-of-Order Processing: A new architecture for high-performance stream systems”, PVLDB
’08, pp 274-288, VLDB Endowment.
[18] Zhitao Shen, Vikram Kumaran, Michael J. Franklin, Sailesh Krishnamurthy, Amit Bhat, Madhu
Kumar, Robert Lerche & Kim Macpherson (2015) “CSA: Streaming engine for internet of things”, Data
Engineering Bulletin, Vol. 38, No. 4, pp 39-50, IEEE Computer Society.
[19] F. Xhafa, V. Naranjo, L. Barolli & M. Takizawa (2015) “On streaming consistency of big data stream
processing in heterogeneous clusters”, Proceedings of the 18th International Conference on
Network-Based Information Systems, IEEE Xplore.
Top SIP Research Articles of 2019

  • 1. Top SIP Research Articles of 2019 International Journal of VLSI design & Communication Systems (VLSICS) ISSN : 0976 - 1357 (Online); 0976 - 1527(print) http://airccse.org/journal/vlsi/vlsics.html
  • 2. COLOR CONVERTING OF ENDOSCOPIC IMAGES USING DECOMPOSITION THEORY AND PRINCIPAL COMPONENT ANALYSIS Keivan Ansari1,2 , Alexandre Krebs1 , Yannick Benezeth1 and Franck Marzani1 1 ImViA - Imaging and Artificial Vision, Université de Bourgogne, Dijon, France 2 Dept. of Color Imaging and Color Image Processing, Institute for Color Science and Technology, Tehran, Iran ABSTRACT Since its initial introduction, endoscopic color imaging technology has greatly helped clinicians make better decisions. In this study, a novel combined method is employed, comprising the quadratic objective functions for the dichromatic model by Krebs et al., Wyszecki's spectral decomposition theory, and the well-known principal component analysis (PCA) technique. The new algorithm converts the color space of a conventional endoscopic color image, taken as the target image, toward that of a Narrow Band Image (NBI), taken as the source image. The target and source images are captured under known illuminant/sensor/filter combinations, and the matrix Q of the decomposition theory is computed for each combination. The intrinsic images extracted by the Krebs technique are multiplied by the matrix Q to obtain their corresponding fundamental stimuli. Subsequently, PCA is applied to the obtained fundamental stimuli to derive the eigenvectors of the target and the source. Finally, the first three eigenvectors of each matrix are taken as the converting mapping matrix. The results show that the color gamut of the converted target image moves closer to the color gamut of the NBI image. KEYWORDS Color Converting, Endoscopic Imaging, Dichromatic Model, Principal Component Analysis, Decomposition Theory. Full Text : https://aircconline.com/csit/papers/vol9/csit91812.pdf 9th International Conference on Computer Science, Engineering and Applications (ICCSEA 2019) - http://airccse.org/csit/V9N18.html
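The PCA step of the abstract — extract the first three eigenvectors of the target and source stimuli and use them as the mapping between color spaces — can be sketched as below. This is a minimal illustration, not the authors' implementation: the array names, the number of channels, and the exact way the two bases are combined are assumptions for the sketch.

```python
import numpy as np

def pca_basis(stimuli):
    """First three PCA eigenvectors (columns) of a set of
    fundamental stimuli, one stimulus per row."""
    centered = stimuli - stimuli.mean(axis=0)
    cov = np.cov(centered, rowvar=False)       # channel covariance
    eigvals, eigvecs = np.linalg.eigh(cov)     # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]          # sort by descending variance
    return eigvecs[:, order[:3]]               # shape: (channels, 3)

def convert_colors(target_stimuli, source_stimuli):
    """Map target fundamental stimuli toward the source color space:
    project onto the target basis, then reconstruct with the source
    basis (one plausible reading of the 'converting mapping matrix')."""
    v_t = pca_basis(target_stimuli)            # target eigenvectors
    v_s = pca_basis(source_stimuli)            # source eigenvectors
    coords = target_stimuli @ v_t              # coordinates in target basis
    return coords @ v_s.T                      # reconstruction in source basis
```

In practice the stimuli would be the Q-transformed intrinsic images flattened to one pixel per row; here any two same-width arrays stand in for them.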
  • 3. REFERENCES [1] S. Tanaka, S. Oka, M. Hirata, S. Yoshida, I. Kaneko and K. Chayama, (2006) “Pit pattern diagnosis for colorectal neoplasia using narrow band imaging magnification,” Digestive Endoscopy 18 (Suppl. 1), pp. S52-S56. [2] P. Lukes et al., (2013) “Narrow Band Imaging (NBI)”, Endoscopy, IntechOpen, Edited by S. Amornyotin, Chapter 5. [3] R. Saito and H. Kotera, (2005) “Gamut mapping adapted to image contents,” Proc. Congress of the International Colour Association (AIC 05), Granada, Spain, pp. 661-664. [4] X. Xiao, L. Ma, (2006) “Color transfer in correlated color space,” In VRCIA '06: Proc. of the 2006 ACM international conference on virtual reality continuum and its applications, pp. 305-309. [5] E. Reinhard, M. Ashikhmin, B. Gooch, and P. Shirley, (2001) “Color transfer between images,” IEEE Comput. Graphics Appl., pp. 34-41. [6] H. Kotera, Y. Matsusaki, T. Horiuchi, R. Saito, “Automatic color interchange between images,” Proc. Congress of the International Color Association (AIC 05), Granada, Spain, pp. 1019-1022. [7] R. Saito, T. Horiuchi, and H. Kotera, (2006) “Scene color interchange using histogram rescaling,” Proc. IS&T's International Conference on Digital Printing Technologies (NIP22), Denver, Colorado, pp. 378-381. [8] R. Saito, H. Okuda, T. Horiuchi and S. Tominaga, (2007) “Scene-to-scene color transfer model based on histogram rescaling,” Proc. Midterm Meeting of the International Color Association (AIC 07), Hangzhou, China, pp. 122-125. [9] S. Gorji Kandi, K. Ansari, (2011) “Transforming color space between images using Rosenfeld-Kak histogram matching technique,” 4th International Color and Coatings Congress (ICCC 2011), Tehran, Iran. [10] E. Reinhard, M. Ashikhmin, B. Gooch, and P. Shirley, (2001) “Color transfer between images,” IEEE Comput. Graphics, pp. 34-41. [11] Y. Chang, S. Saito, and M. Nakajima, (2007) “Example-based color transformation of image and video using basic color categories,” IEEE Trans. Image Process, vol. 16, no. 2, pp. 329-336. [12] S. Paris, S. W. Hasinoff, and J. Kautz, (2011) “Local Laplacian filters: Edge-aware image processing with a Laplacian pyramid,” ACM Trans. Graph., vol. 30, no. 4, pp. 1-12. [13] A. Abadpour, S. Kasaei, (2007) “An efficient PCA-based color transfer method,” J. Visual Communication and Image Representation. [14] A. Dhanve, G. Chhajed, (2014) “Review on color transfer between images,” International Journal of Engineering Research and General Science, Vol. 2, Issue 6, Oct.-Nov.
  • 4. [15] A. Krebs, Y. Benezeth, F. Marzani, (2017) “Quadratic objective functions for dichromatic model parameters estimation,” in: IEEE International Conference on Digital Image Computing: Techniques and Applications (DICTA). [16] S. A. Shafer, (1985) “Using color to separate reflection components,” Color Research & Application 10 (4), pp. 210-218. [17] J. B. Cohen and W. E. Kappauf, (1982) “Metameric color stimuli, fundamental metamers, and Wyszecki's metameric blacks,” Am. J. Psychol. 95, pp. 537-564. [18] F. Viénot, H. Brettel, (2014) “Visual properties of metameric blacks beyond cone vision,” Journal of the Optical Society of America, Vol. 31, Issue 4, pp. A38-A46. [19] Y. Mohamed, Y. Abdallah and T. Alqahtani, (2019) “Research in Medical Imaging Using Image Processing Techniques,” Medical Imaging - Principles and Applications, IntechOpen, Edited by Y. Zhou, Chapter 5. [20] B. Selvapriya and B. Raghu, (2018) “A Color Map for Pseudo Color Processing of Medical Images,” International Journal of Engineering & Technology, 7 (3.34), pp. 954-958. [21] K. Ansari, S. Moradian, and S. H. Amirshahi, (2005) “Ideal Compression of Reflectance Curves by the Use of Fundamental Color Stimuli”, 10th Congress of the International Colour Association, AIC Colour 2005, Granada, Spain. AUTHORS Keivan Ansari received his Ph.D. in color engineering from Amirkabir University (Polytechnic of Tehran) in 2005. He is an assistant professor in the Color Imaging & Color Image Processing research group at the Institute for Color Science and Technology, Tehran, Iran. He is currently pursuing postdoctoral research at the ImViA laboratory of the University of Burgundy, Dijon, France. His work focuses on the development of color physics and its application in image processing. Alexandre Krebs received the B.S., M.S. and Ph.D. in computer science and image instrumentation from the University of Burgundy (France) in 2019. He is currently a temporary lecturer and research assistant at the engineering school ESIREM in Dijon, France. His research includes digestive endoscopy, Narrow Band Imaging, multispectral imaging, stomach lesions, machine learning, transfer learning, inverse problems, and optimization. Yannick Benezeth is an associate professor at the Université Bourgogne Franche-Comté (France). He obtained his Ph.D. in computer science from the University of Orléans in 2009. He also received an engineering degree from the ENSI de Bourges and an M.S. degree from the University of Versailles-Saint-Quentin-en-Yvelines in 2006. His research interests include biomedical engineering, image processing, and video analytics, with application areas in video health monitoring and endoscopy.
  • 5. Franck Marzani received his M.Sc. in computer science from the University of Rennes, France in 1989. He obtained his Ph.D. in computer vision and image processing from the University of Burgundy, Dijon, France in 1998. He received his “Habilitation à Diriger les Recherches” in 2007 and has been a full professor since 2009. He is currently the head of the ImViA research laboratory (Imaging & Computer Vision) at the University of Burgundy. His research interests include the acquisition and analysis of images.
  • 6. UNDERSTANDING HOW COLOUR CONTRAST IN HOTEL & TRAVEL WEBSITE AFFECTS EMOTIONAL PERCEPTION, TRUST, AND PURCHASE INTENTION OF VISITORS Pimmanee Rattanawicha and Sutthipong Yungratog Chulalongkorn Business School, Chulalongkorn University, Bangkok, Thailand ABSTRACT To understand how colour contrast in e-Commerce websites, such as hotel & travel websites, affects (1) emotional perception (i.e. pleasure, arousal, and dominance), (2) trust, and (3) purchase intention of visitors, a two-phase empirical study is conducted. In the first phase of this study, 120 volunteer participants are asked to choose the most appropriate colour from a colour wheel for a hotel & travel website. The colour “Blue Cyan”, the most chosen colour from this phase of the study, is then used as the foreground colour to develop three hotel & travel websites with three different colour contrast patterns for the second phase of the study. A questionnaire is also developed from previous studies to collect emotional perception, trust, and purchase intention data from another group of 145 volunteer participants. It is found from data analysis that, for visitors as a whole, colour contrast has significant effects on their purchase intention. For male visitors, colour contrast significantly affects their trust and purchase intention. Moreover, for generation X and generation Z visitors, colour contrast has effects on their emotional perception, trust, and purchase intention. However, no significant effect of colour contrast is found in female or generation Y visitors. KEYWORDS Colour Contrast, e-Commerce, Website Design Full Text : https://aircconline.com/csit/papers/vol9/csit91706.pdf 9th International Conference on Advances in Computing and Information Technology (ACITY 2019) – http://airccse.org/csit/V9N17.html
  • 7. REFERENCES [1] Anuratpanich, L. 2016. Generation Important thing to pay attention. Faculty of Pharmacy, Mahidol University. [2] Archavanitkul, K. 2011. Sexuality Transition in Thai Society. The Journal of Population and Social Studies. 44. [3] Bakker, I., van der Voordt, T., Vink, P., & de Boon, J. 2014. Pleasure, Arousal, Dominance: Mehrabian and Russell revisited. Current Psychology, 33(3), 405-421. [4] Beaird, J. 2007. The Principles of Beautiful Web Design (pp. 29). [5] Bonnardel, N., Piolat, A., & Le Bigot, L. 2011. The impact of colour on Website appeal and users’ cognitive processes. Displays, 32(2), 69-80. [6] Chaikate, S., Nittayapat, W., Morakotjinda, P., Peuchngen, P., & Kanthiwa, T. 2015. Science of color. Journal of home economics SWU. 13(1), 6-8. [7] Cyr, D., Head, M., & Larios, H. 2010. Colour appeal in website design within and across cultures: A multi-method evaluation. International Journal of Human-Computer Studies, 68(1-2), 1-21. [8] Das, G. 2014. Linkages of retailer personality, perceived quality and purchase intention with retailer loyalty: A study of Indian non-food retailing. Journal of Retailing and Consumer Services, 21(3), 407- 414. [9] Deng, L., & Poole, M. S. 2010. Affect in web interfaces: a study of the impacts of web page visual complexity and order. MIS Quarterly, 34(4), 711-730. [10] Electronic Transactions Development Agency (Public Organization). 2018. Thailand Internet User Profile 2018, 1-150. [11] Golalizadeh, F., & Sharifi, M. 2016. Exploring the effect of customers' perceptions of electronic retailer ethics on revisit and purchase intention of retailer website. 10th International Conference on eCommerce with focus on e-Tourist, 1-6. [12] Gray, R. 2016. Quality of Life Among Employed Population by Generations. Institute for Population and Social Research, Mahidol University. 461(2016), 1-128. [13] Hall, R. H., & Hanna, P. 2004. 
The impact of web page text-background colour combinations on readability, retention, aesthetics and behavioural intention. Behaviour & Information Technology, 23(3), 183-195. [14] Hong, I. B., & Cha, H. S. 2013. The mediating role of consumer trust in an online merchant in predicting purchase intention. International Journal of Information Management, 33(6), 927-939. [15] Hurlbert, A., & Wolf, K. 2004. Color contrast: a contributory mechanism to color constancy. Progress in Brain Research, 144, 147-160.
  • 8. [16] Hurlbert, A. C., & Ling, Y. 2012. Colour Design Theories and applications. Woodhead Publishing Limited, 129-157. [17] Ingkavitan, J., & Rattanawicha, P. 2018. An Empirical Study of Choosing the Right Color Combinations for e-Commerce Websites. The 2018 International Conference on e-Commerce, Administration, e-Society, e-Education, and e-Technology (e-CASE & e-Tech 2018), Osaka, Japan, 16- 27. [18] Lin, S.-W., Lo, L. Y.-S., & Huang, T. K. 2016. Visual Complexity and Figure-Background Color Contrast of E-Commerce Websites: Effects on Consumers' Emotional Responses. 49th Hawaii International Conference on System Sciences (HICSS), 3594-3603. [19] Moisa, S., & Sălășan, C. 2017. Some Aspects Regarding Color Schemes in order to Create Visual Attractive Websites. 4th International Multidisciplinary Scientific Conference on Social Sciences & Arts (SGEM 2017), 61, 363-369. [20] Nordeborn, G. 2013. The Effect of Color in Website Design Searching for Medical Information Online. Master’s Thesis, Lund University. [21] Pelet, J. E., & Papadopoulou, P. 2009. The effect of colors of e-commerce websites on consumer mood, memorization and buying intention. Proceedings of the 4th Mediterranean Conference on Information Systems, 1-16. [22] Pelet, J. É., & Papadopoulou, P. 2011. The Effect of E-Commerce Websites’ Colors on Customer Trust. International Journal of E-Business Research, 7(3), 1-18. [23] Pengnate, S., & Sarathy, R. 2017. An experimental investigation of the influence of website emotional design features on trust in unfamiliar online vendors. Computers in Human Behavior, 67, 49- 60. [24] Porat, T., & Tractinsky, N. 2012. It’s a Pleasure Buying Here: The Effects of Web-Store Design on Consumers’ Emotions and Attitudes. Human-Computer Interaction, 27, 235-276. [25] Rareș, O. D. 2014. Exploring the mediating role of perceived quality between online flow and customer’s online purchase intention on a restaurant e-commerce website. The Yearbook of the "Gh. 
Zane" Institute of Economic Researches, 23(1), 35-44. [26] Rattanawicha, P., & Esichaikul, V. 2005. What makes websites trustworthy? A two-phase empirical study. International Journal of Electronic Business, 3(2), 110-134. [27] Richardson, R. T., Drexler, T. L., & Delparte, D. M. 2014. Color and Contrast in E-Learning Design A Review of the Literature and Recommendations for Instructional Designers and Web Developers. MERLOT Journal of Online Learning and Teaching, 10(4), 657-670. [28] Shapiro, A. G. 2008. Separating color from color contrast. Journal of Vision, 8(1), 1-18. [29] Yungratog, S. & Rattanawicha, P. 2019. Effect of Color Contrast in e-Commerce Websites on Emotional Perception, Trust, and Purchase Intention of Visitors: An Empirical Study Design. The 4th
  • 9. International Conference on Innovative Education and Technology (ICIET 2019), 209-213. [30] Zhou, X., & Lin, Y. 2015. The Study on the Influence Mechanism of Website Features on Consumer Purchase Intention. 8th International Symposium on Computational Intelligence and Design, 104-107. AUTHORS Pimmanee Rattanawicha is an assistant professor at Chulalongkorn Business School, Chulalongkorn University, Bangkok, Thailand. Her research interests include e-Commerce, HCI, and UX/UI design. Sutthipong Yungratog received his Master’s degree in IT in Business from Chulalongkorn Business School, Chulalongkorn University, Bangkok, Thailand. He is now planning his PhD study in HCI.
  • 10. NONNEGATIVE MATRIX FACTORIZATION UNDER ADVERSARIAL NOISE Peter Ballen Department of Computer and Information Science, University of Pennsylvania, Philadelphia, USA ABSTRACT Nonnegative Matrix Factorization (NMF) is a popular tool to estimate the missing entries of a dataset under the assumption that the true data has a low-dimensional factorization. One example of such a matrix is found in movie recommendation settings, where NMF corresponds to predicting how a user would rate a movie. Traditional NMF algorithms assume the input data is generated from the underlying representation plus mean-zero independent Gaussian noise. However, this simplistic assumption does not hold in real-world settings that contain more complex or adversarial noise. We provide a new NMF algorithm that is more robust towards these nonstandard noise patterns. Our algorithm outperforms existing algorithms on movie rating datasets, where adversarial noise corresponds to a group of adversarial users attempting to review-bomb a movie. KEYWORDS Nonnegative Matrix Factorization, Matrix Completion, Recommendation, Adversarial Noise, Outlier Detection, Linear Model Full Text : https://aircconline.com/csit/papers/vol9/csit91601.pdf 5th International Conference on Data Mining and Applications (DMAP 2019) – http://airccse.org/csit/V9N16.html
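The Gaussian-noise baseline that this paper argues against can be sketched with the classic multiplicative updates of Lee & Seung (the paper's ref. [12]). The NumPy implementation below is illustrative only, not the authors' robust algorithm; the matrix sizes, rank, and iteration count are arbitrary choices for the example:

```python
import numpy as np

def nmf(V, rank, iters=200, eps=1e-9):
    """Classic multiplicative-update NMF (Lee & Seung).

    Minimizes ||V - W @ H||_F under nonnegativity constraints, i.e.
    the mean-zero Gaussian noise model the paper calls too simplistic.
    """
    rng = np.random.default_rng(0)
    m, n = V.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H, stays >= 0
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W, stays >= 0
    return W, H

# A low-rank "ratings-like" matrix; in the adversarial setting some
# entries would additionally be review-bombed toward extreme values.
rng = np.random.default_rng(1)
V = rng.random((20, 4)) @ rng.random((4, 15))
W, H = nmf(V, rank=4)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))  # small relative error
```

In the robust setting the paper describes, the quadratic loss above would be replaced or reweighted so that adversarial entries do not dominate the factorization.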
  • 11. REFERENCES [1] Indyk, Piotr & Motwani, Rajeev (1998) “Approximate Nearest Neighbors: Towards Removing the Curse of Dimensionality” Proceedings of the thirtieth annual ACM symposium on theory of computing, pp604-613 [2] Oseledets, Ivan & Tyrtyshnikov, Eugene (2009), “Breaking the curse of dimensionality, or how to use SVD in many dimensions”, SIAM Journal of Scientific Computing, pp3744-3759 [3] Dempster, Arthur & Laird, Nan & Rubin, Donald (1977) “Maximum likelihood from incomplete data via the EM algorithm”, Journal of the Royal Statistical Society, pp1-22 [4] Srebro, Nathan and Jaakkola, Tommi (2003), “Weighted low-rank approximations”, Proceedings of the 20th International Conference on Machine Learning, pp720-727 [5] Candes, Emmanuel & Recht, Benjamin (2009), “Exact matrix completion via convex optimization”, Foundations of Computational Mathematics, pp717 [6] Koren, Yehuda & Bell, Robert & Volinsky, Chris (2009) “Matrix factorization techniques for recommender systems” Computer, pp30-37 [7] Zheng, Nan & Li, Qiudan & Liao, Shengcai & Zhang, Leiming (2010) “Which photo groups should I choose? 
A comparative study of recommendation algorithms in Flickr”, Journal of Information Science, pp733-750 [8] Burke, Robin & O’Mahony, Michael & Hurley, Neil (2015) “Robust Collaborative Filtering” Recommender systems handbook, pp961-995 [9] O’Mahony, Michael & Hurley, Neil & Kushmerick, Nicolas & Silvestre, Guenole (2004), “Collaborative recommendation: A robustness analysis”, ACM Transactions on Internet Technology, pp344-377 [10] Sandvig, Jeff & Mobasher, Bamshad & Burke, Robin (2008), “A survey of collaborative recommendation and the robustness of model-based algorithms”, IEEE Computer Society Technical Committee on Data Engineering [11] Mobasher, Bamshad & Burke, Robin & Sandvig, Jeff (2006), “Model-based collaborative filtering as a defense against profile injection attacks”, AI Magazine pp1388 [12] Lee, Daniel & Seung, Sebastian, (2001) “Algorithms for nonnegative matrix factorization”, Advances in Neural Information Processing Systems, pp556-562 [13] Sra, Suvrit & Dhillon, Inderjit (2006) “Generalized nonnegative matrix approximations with Bregman divergences”, Advances in neural information processing systems, pp283-290 [14] Fevotte, Cedric & Idier, Jerome (2011), “Algorithms for nonnegative matrix factorization with beta-divergence”, Neural Computation, pp2421-2456
  • 12. [15] Taslaman L & Nilsson B. (2012) “A framework for regularized non-negative matrix factorization, with application to the analysis of gene expression data” PLoS One [16] Mao, Yun & Saul, Lawrence (2009) “Modeling distances in large scale networks by matrix factorization” ACM SIGCOMM conference in internet measurement, pp278-287 [17] Liu, Chao & Yang, Hung-chih & Fan, Jinliang & He, Li-Wei & Wang, Yi-Min (2010), “Distributed nonnegative matrix factorization for web-scale dyadic data analysis on mapreduce”, Proceedings of the 19th international conference on World wide web, pp681-690 [18] Zhang, Sheng & Wang, Weihong & Ford, James & Makedon, Fillia (2006) “Learning from incomplete ratings on nonnegative matrix factorization” SIAM conference on data mining, pp549-553 [19] Yang, Min & Xu, Linli & White, Martha & Schuurmans, Dale, & Yu, Yao-liang (2010) “Relaxed clipping: A global training method for robust regression and classification”, Advances in Neural Processing, pp2532-2540 [20] Honore, Bo E (1992), “Trimmed LAD and least squares estimation of truncated and censored regression models with fixed effects”, Econometrica: Journal of the Econometric Society, pp533-565 [21] Garcia-Escudero, Luis Angel & Gordaliza, Alfonso (1999), “Robustness properties of k means and trimmed k means”, Journal of the American Statistical Association, pp956-969 [22] Harper, F. Maxwell & Konstan, Joseph (2015) “The MovieLens datasets: history and context” ACM Transactions on Interactive Intelligent Systems AUTHORS Peter Ballen is a PhD student at the University of Pennsylvania, where he studies matrix factorization algorithms, their theoretical properties, and their applications in data mining.
  • 13. PROPOSING A HYBRID APPROACH FOR EMOTION CLASSIFICATION USING AUDIO AND VIDEO DATA Reza Rafeh1 , Rezvan Azimi Khojasteh2 , Naji Alobaidi3 1 Centre for Information Technology, Waikato Institute of Technology, Hamilton, New Zealand 2 Department of Computer Engineering, Malayer Branch, Islamic Azad University, Hamedan, Iran 3 Department of Computer Engineering, Unitec Institute of Technology, Auckland, New Zealand ABSTRACT Emotion recognition has been a research topic in the field of Human Computer Interaction (HCI) in recent years. Computers have become an inseparable part of human life, and users need human-like interaction to communicate with them better. Many researchers have therefore become interested in emotion recognition and classification using different sources. A hybrid approach combining audio and text was recently introduced. All such approaches aim to raise the accuracy and appropriateness of emotion classification. In this study, a hybrid approach of audio and video is applied to emotion recognition. The novelty of this approach lies in combining selected audio and video characteristics and their features into a single specification for classification. In this research, the SVM method is used to classify the data in the SAVEE database. The experimental results show that the maximum classification accuracy for audio data alone is 91.63%, while the hybrid approach raises the accuracy to 99.26%. KEYWORDS Emotion Classification, Emotions Analysis, Emotion Detection, SVM, Speech Emotion Recognition Full Text : https://aircconline.com/csit/papers/vol9/csit91403.pdf 5th International Conference on Computer Science and Information Technology (CSTY 2019) - http://airccse.org/csit/V9N14.html
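The fused-feature classification step can be sketched as follows. Everything here is an illustrative stand-in: the "audio" and "video" descriptors are synthetic (the real system would extract e.g. MFCCs and facial features from SAVEE clips), and a bare linear hinge-loss trainer takes the place of the paper's full SVM:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Minimal linear SVM via hinge-loss subgradient descent.

    Labels y must be in {-1, +1}; a simplified stand-in for the
    multiclass SVM used in the paper, shown to illustrate the
    hybrid (early-fusion) pipeline.
    """
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        mask = y * (X @ w + b) < 1            # margin-violating samples
        if mask.any():
            w -= lr * (lam * w - (y[mask][:, None] * X[mask]).mean(axis=0))
            b += lr * y[mask].mean()
        else:
            w -= lr * lam * w                  # only the regularizer acts
    return w, b

# Synthetic per-clip "audio" and "video" descriptors, fused early by
# concatenating them into one feature vector per clip.
rng = np.random.default_rng(0)
audio = rng.normal(size=(200, 13))
video = rng.normal(size=(200, 20))
X = np.hstack([audio, video])
y = np.where(X[:, 0] + X[:, 15] > 0, 1, -1)   # separable toy labels

w, b = train_linear_svm(X, y)
print(np.mean(np.sign(X @ w + b) == y))       # training accuracy
```

The design point the abstract makes is precisely this early fusion: one classifier sees a single concatenated audio+video vector, rather than fusing two per-modality decisions afterwards.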
  • 14. REFERENCES [1] Ververidis, Dimitrios & Kotropoulos, Constantine, “Emotional speech recognition: Resources, features, and methods,” Speech Communication, vol. 48, no. 9, pp. 1162-1181, 2006. [2] Bhaskar, Jasmine, Sruthi, K. and Nedungadi, Prema, “Hybrid Approach for Emotion Classification of Audio Conversation Based on Text and Speech Mining,” Procedia Computer Science, vol. 46, pp. 635-643, 2015. [3] E. H. Jang, B. J. Park, S. H. Kim and J. H. Sohn, “Emotion classification based on physiological signals induced by negative emotions: Discrimination of negative emotions by machine learning,” in Networking, Sensing and Control (ICNSC), 2012 9th IEEE International Conference on Beijing, 2012. [4] C. Parlak and B. Diri, “Emotion recognition from the human voice,” in Signal Processing and Communications Applications Conference (SIU), 2013 21st, 2013. [5] M. El Ayadi, M. S. Kamel and F. Karray, “Survey on speech emotion recognition: Features, classification schemes, and databases,” vol. 44, no. 3, pp. 572-587, 2011. [6] Y. Pan, P. Shen and L. Shen, “Speech Emotion Recognition Using Support Vector Machine,” International Journal of Smart Home, vol. 6, no. 2, pp. 101-108, 2012. [7] C. Lijiang, M. Xia, X. Yuli and C. L. Lung, “Speech emotion recognition: Features and classification models,” Digital Signal Processing, vol. 22, no. 6, pp. 1154-1160, 2012. [8] N. Rajitha, D. David, L. B, P. J., Sridharan, S. Fookes and C. B., “Recognising audio-visual speech in vehicles using the AVICAR database,” in Proceedings of the 13th Australasian International Conference on Speech Science and Technology Melbourne, Vic, 2010. [9] M. S. Sinith, E. Aswathi, T. M. Deepa, C. P. Shameema and S. Rajan, “Emotion recognition from audio signals using Support Vector Machine,” in IEEE Recent Advances in Intelligent Computational Systems (RAICS) Trivandrum, 2015. [10] G. Chandni, M. Vyas, K. Dutta, K. Riha and J. 
Prinosil, “An automatic emotion recognizer using MFCCs and Hidden Markov Models,” in Ultra Modern Telecommunications and Control Systems and Workshops (ICUMT), 2015 7th International Congress on Brno, 2015. [11] “eNTERFACE'05 EMOTION Database,” [Online]. Available: http://www.enterface.net/enterface05/. [12] C. Busso, M. Bulut, C. C. Lee, A. Kazemzadeh, E. Mower, S. Kim, J. Chang, S. Lee and S. Narayanan, “IEMOCAP: interactive emotional dyadic motion capture database,” vol. 42, pp. 335-359, 2008. [13] A. Metallinou, C. Busso, S. Lee and S. Narayanan, “Visual emotion recognition using compact facial representations and viseme information,” in 2010 IEEE International Conference on Acoustics, Speech and Signal Processing, Dallas, TX, 2010. [14] “SAVEE Database,” [Online]. Available: http://kahlan.eps.surrey.ac.uk/savee/Database.html.
  • 15. [15] M. Sidorov, E. Sopov, I. Ivanov and W. Minker, “Feature and decision level audio-visual data fusion in emotion recognition problem,” in Informatics in Control, Automation and Robotics (ICINCO), 2015 12th International Conference on Colmar, 2015. [16] N. Yang, R. Muraleedharan, J. Kohl, I. Demirkol, W. Heinzelman and M. Sturge-Apple, “Speech-based emotion classification using multiclass SVM with hybrid kernel and thresholding fusion,” in Spoken Language Technology Workshop (SLT), 2012 IEEE Miami, FL, 2012. [17] “Bridge Project,” 2013. [Online]. Available: http://www.ece.rochester.edu/projects/wcng/project_bridge.html. [18] E. Sopov and I. Ivanov, “Self-Configuring Ensemble of Neural Network Classifiers for Emotion Recognition in the Intelligent Human-Machine Interaction,” in Computational Intelligence, 2015 IEEE Symposium Series on Cape Town, 2015. [19] S. Agrawal and S. Dongaonkar, “Emotion recognition from speech using Gaussian Mixture Model and vector quantization,” in Reliability, Infocom Technologies and Optimization (ICRITO) (Trends and Future Directions), 2015 4th International Conference on Noida, 2015. [20] M. R. Mehmood and H. J. Lee, “Emotion classification of EEG brain signal using SVM and KNN,” in Multimedia & Expo Workshops (ICMEW), 2015 IEEE International Conference on Turin, Italy, 2015. [21] N. R. Kanth and S. Saraswathi, “Efficient speech emotion recognition using binary support vector machines & multiclass SVM,” in IEEE International Conference on Computational Intelligence and Computing Research (ICCIC) Madurai, 2015. [22] Y. Chavhan, M. L. Dhore and P. Yesaware, “Article: Speech Emotion Recognition Using Support Vector Machine,” vol. 1, pp. 6-9, 2010. [23] M. S. Sinith, E. Aswathi, T. M. Deepa, C. P. Shameema and S. Rajan, “Emotion recognition from audio signals using Support Vector Machine,” in IEEE Recent Advances in Intelligent Computational Systems (RAICS) Trivandrum, 2015. [24] A. Metallinou, A. Katsamanis, W. M, F. Eyben, B. 
Schuller and S. Narayanan, “Context-sensitive learning for enhanced audiovisual emotion classification (Extended abstract),” in Affective Computing and Intelligent Interaction (ACII), 2015 International Conference on Xi'an, 2015.
  • 16. AUTHORS Reza Rafeh is a senior lecturer at the Waikato Institute of Technology. He received his PhD in computer science from Monash University. His research areas cover data mining, big data and analytics, recommender systems, software engineering and modelling, constraint programming, and health informatics. Rezvan Azimi Khojasteh received her MSc in Software Engineering from Islamic Azad University, Malayer Branch. Her research areas include emotion mining and data analytics. Naji Alobaidi received his MSc in Computer Science from Unitec Institute of Technology. His research areas cover data analytics, emotion mining, and vehicular ad-hoc networks.
  • 17. A FACIAL RECOGNITION-BASED VIDEO ENCRYPTION APPROACH TO PREVENT DEEPFAKE VIDEOS Alex Liang1 , Yu Su2 and Fangyan Zhang3 1 St. Margaret's Episcopal School, San Juan Capistrano, CA 92675 2 Department of Computer Science, California State Polytechnic University, Pomona, CA, 91768 3 ASML, San Jose, CA, 95131 ABSTRACT Deepfake is a technique that forges video for a certain purpose. An approach that can detect whether or not a video has been deepfaked is in urgent demand. Such an approach can also reduce a video's exposure to slanderous deepfakes and content theft. This paper proposes a useful tool which can encrypt and verify a video through proper corresponding algorithms and detect tampering accurately. Experiments in the paper show that the tool achieves this goal and can be put into practice. KEYWORDS Video Encryption, Video Verification, Encryption Algorithm, Decryption Algorithm Full Text : https://aircconline.com/csit/papers/vol9/csit91317.pdf 6th International Conference on Computer Science, Engineering and Information Technology (CSEIT-2019) - http://airccse.org/csit/V9N13.html
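The verification idea can be illustrated with a keyed per-frame hash chain. This is an assumption for illustration only (the paper's actual encryption and facial-recognition steps are not reproduced here): any later edit to a frame, such as a swapped face, changes that frame's digest and breaks the tag chain.

```python
import hashlib
import hmac

def sign_frames(frames, key):
    """Produce a keyed authentication tag per frame.

    Each frame's digest is chained to the previous one, so both
    frame tampering and frame reordering invalidate the tags.
    """
    tags = []
    prev = b""
    for frame in frames:
        digest = hashlib.sha256(prev + frame).digest()   # chain frames
        tags.append(hmac.new(key, digest, hashlib.sha256).hexdigest())
        prev = digest
    return tags

def verify_frames(frames, tags, key):
    # Recompute the tag chain and compare against the stored tags.
    return tags == sign_frames(frames, key)

key = b"creator-secret-key"                       # hypothetical signing key
frames = [b"frame-0", b"frame-1", b"frame-2"]     # raw frame bytes
tags = sign_frames(frames, key)

print(verify_frames(frames, tags, key))           # True: untouched video
tampered = [b"frame-0", b"deepfaked", b"frame-2"]
print(verify_frames(tampered, tags, key))         # False: frame 1 edited
```

A real tool would sign the tags with the creator's private key and distribute them alongside the video, so third parties can verify authenticity without the secret.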
  • 18. REFERENCES [1] D. Güera and E. J. Delp, "Deepfake Video Detection Using Recurrent Neural Networks," 2018 15th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), Auckland, New Zealand, 2018, pp. 1-6. [2] Li, Yuezun & Lyu, Siwei, "Exposing DeepFake Videos By Detecting Face Warping Artifacts," Computer Science Department, University at Albany, State University of New York, USA. [3] Ruchansky, N., Seo, S., & Liu, Y. (2017, November). Csi: A hybrid deep model for fake news detection. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management (pp. 797-806). ACM. [4] Polletta, F., & Callahan, J. (2019). Deep stories, nostalgia narratives, and fake news: Storytelling in the Trump era. In Politics of meaning/meaning of politics (pp. 55-73). Palgrave Macmillan, Cham. [5] Singhania, S., Fernandez, N., & Rao, S. (2017, November). 3han: A deep neural network for fake news detection. In International Conference on Neural Information Processing (pp. 572-581). Springer, Cham. [6] Güera, D., & Delp, E. J. (2018, November). Deepfake video detection using recurrent neural networks. In 2018 15th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS) (pp. 1-6). IEEE. [7] Citron, D. K., & Chesney, R. (2018). Deep Fakes: A Looming Crisis for National Security, Democracy and Privacy? Lawfare. [8] Bradski, G., & Kaehler, A. (2008). Learning OpenCV: Computer vision with the OpenCV library. O'Reilly Media, Inc. [9] Pulli, K., Baksheev, A., Kornyakov, K., & Eruhimov, V. (2012). Real-time computer vision with OpenCV. Communications of the ACM, 55(6), 61-69. [10] Li, Y., & Lyu, S. (2018). Exposing deepfake videos by detecting face warping artifacts. arXiv preprint arXiv:1811.00656, 2. [11] Cozzolino, D., Thies, J., Rössler, A., Riess, C., Nießner, M., & Verdoliva, L. (2018). Forensictransfer: Weakly-supervised domain adaptation for forgery detection. arXiv preprint arXiv:1812.02510. 
[12] Dolhansky, B., Howes, R., Pflaum, B., Baram, N., & Ferrer, C. C. (2019). The Deepfake Detection Challenge (DFDC) Preview Dataset. arXiv preprint arXiv:1910.08854.
  • 19. AN IMAGE CLASSIFICATION-BASED APPROACH TO AUTOMATE VIDEO PLAYING DETECTION AT SYSTEM LEVEL Eric Liu1 , Samuel Walcoff2 , Qi Lu3 and Yu Sun4 1 Arcadia High School, Arcadia, CA, 92697 2 Department of Computer Science, University of California, Santa Cruz, Santa Cruz, CA 95064 3 Department of Social Science, University of California, Irvine, Irvine, CA, 92697 4 Department of Computer Science, California State Polytechnic University, Pomona, CA, 91768 ABSTRACT Tech distraction has become a critical issue for people’s work and study productivity, particularly with the growing amount of digital content from social media sites such as YouTube. Although browser-based plug-ins are available to help block and monitor these sites, they do not work for all scenarios. In this paper, we present a system-level video playing detection engine that captures screenshots and analyzes each screenshot image using deep learning, in order to predict whether or not the image contains a playing video. A mobile app has also been developed to enable parents to control video playing detection remotely. KEYWORDS Machine learning, Tech distraction, Image classification Full Text : https://aircconline.com/csit/papers/vol9/csit91215.pdf 8th International Conference on Natural Language Processing (NLP 2019) - http://airccse.org/csit/V9N12.html
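For contrast with the paper's deep-learning classifier, the system-level idea can be sketched with a much simpler frame-difference heuristic over captured screenshots. The function and thresholds below are invented for illustration and are not the paper's method:

```python
import numpy as np

def is_video_playing(screens, motion_thresh=0.05, frac=0.6):
    """Heuristic baseline (not the paper's CNN): flag video playback
    when most consecutive screenshots differ noticeably.

    `screens` is a list of grayscale screenshots as uint8 arrays.
    """
    diffs = [np.mean(np.abs(a.astype(float) - b.astype(float))) / 255
             for a, b in zip(screens, screens[1:])]
    moving = sum(d > motion_thresh for d in diffs)   # frames with motion
    return moving / len(diffs) >= frac

# Toy screenshots: a static desktop vs. rapidly changing content.
rng = np.random.default_rng(0)
static = [np.full((32, 32), 128, dtype=np.uint8)] * 5
playing = [rng.integers(0, 256, (32, 32), dtype=np.uint8) for _ in range(5)]

print(is_video_playing(static))    # False: no frame-to-frame motion
print(is_video_playing(playing))   # True: sustained motion
```

A heuristic like this confuses animations and scrolling with video, which is presumably why the paper trains an image classifier on the screenshots instead.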
  • 20. REFERENCES [1] Leonard, Huw, and Gary Farmaner. "Method and system for administering a customer loyalty reward program using a browser extension." U.S. Patent Application 09/908,615, filed April 18, 2002. [2] Viennot, Nicolas, Edward Garcia, and Jason Nieh. "A measurement study of google play." In ACM SIGMETRICS Performance Evaluation Review, vol. 42, no. 1, pp. 221-233. ACM, 2014. [3] Liu, Charles Zhechao, Yoris A. Au, and Hoon Seok Choi. "Effects of freemium strategy in the mobile app market: An empirical study of google play." Journal of Management Information Systems 31, no. 3 (2014): 326-354. [4] Reddington, Thomas B. "Keyword search automatic limiting method." U.S. Patent 4,554,631, issued November 19, 1985. [5] Lerner, Benjamin S., Liam Elberty, Neal Poole, and Shriram Krishnamurthi. "Verifying web browser extensions’ compliance with private-browsing mode.” In European Symposium on Research in Computer Security, pp. 57-74. Springer, Berlin, Heidelberg, 2013. [6] Young, Simon N. "The use of diet and dietary components in the study of factors controlling affect in humans: a review." Journal of Psychiatry and Neuroscience 18, no.5 (1993): 235. [7] Buxton, J., M. White, and D. Osoba. "Patients' experiences using a computerized program with a touch-sensitive video monitor for the assessment of health-related quality of life." Quality of Life Research 7, no. 6 (1998): 513-519. [8] Craddock, Deborah, Cath O'Halloran, Kathryn Mcpherson, Sarah Hean, and Marilyn Hammick. "A top-down approach impedes the use of theory? Interprofessional educational leaders' approaches to curriculum development and the use of learning theory." Journal of Interprofessional Care 27, no. 1 (2013): 65-72. [9] Chamaret, Aurélie, Martin O'Connor, and Gilles Récoché. "Top-down/bottom-up approach for developing sustainable development indicators for mining: application to the Arlit uranium mines (Niger)." (2007). [10] Neches, Robert, Richard E. 
Fikes, Tim Finin, Thomas Gruber, Ramesh Patil, Ted Senator, and William R. Swartout. "Enabling technology for knowledge sharing." AI Magazine 12, no. 3 (1991): 36-36. [11] Polit, Stephen. "R1 and beyond: AI technology transfer at digital equipment corporation." AI Magazine 5, no. 4 (1984): 76-76. [12] Lee, Dar-Shyang, Lee-Feng Chien, Aries Hsieh, Pin Ting, and Kin Wong. "On-screen guideline-based selective text recognition." U.S. Patent 8,515,185, issued August 20, 2013. [13] Alcock, Shane, and Richard Nelson. "Application flow control in YouTube video streams." ACM SIGCOMM Computer Communication Review 41, no. 2 (2011): 24-30.
  • 21. [14] Sheiner, Lilach, Jessica L. Demerly, Nicole Poulsen, Wandy L. Beatty, Olivier Lucas, Michael S. Behnke, Michael W. White, and Boris Striepen. "A systematic screen to discover and analyze apicoplast proteins identifies a conserved and essential protein import factor." PLoS pathogens 7, no. 12 (2011): e1002392.
  • 22. AUTOMATIC EXTRACTION OF FEATURE LINES ON 3D SURFACE Zhihong Mao, Ruichao Wang and Yulin Zhou Division of Intelligent Manufacturing, Wuyi University, Jiangmen 529020, China ABSTRACT Many applications in mesh processing require the detection of feature lines. Feature lines convey the inherent features of the shape. Existing techniques for finding feature lines on discrete surfaces rely on user-specified thresholds and are inaccurate and time-consuming. We use an automatic approximation technique to estimate the optimal threshold for detecting feature lines. Several examples are presented to show that our method is effective and improves feature line visualization. KEYWORDS Feature Lines; Extraction; Meshes Full Text : https://aircconline.com/csit/papers/vol9/csit90901.pdf 9th International Conference on Computer Science, Engineering and Applications (CCSEA 2019) - http://airccse.org/csit/V9N09.html
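One standard way to pick such a threshold automatically is Otsu's method applied to per-vertex curvature magnitudes: flat-region noise and sharp features form two modes, and the cutoff is placed to maximize between-class variance. This is an illustrative choice of estimator, not necessarily the approximation technique the paper uses:

```python
import numpy as np

def otsu_threshold(values, bins=64):
    """Automatic threshold by Otsu's method (maximize between-class
    variance), used here as a curvature cutoff with no user input."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = centers[0], -1.0
    for i in range(1, bins):
        w0, w1 = p[:i].sum(), p[i:].sum()      # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        m0 = (p[:i] * centers[:i]).sum() / w0  # class means
        m1 = (p[i:] * centers[i:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2         # between-class variance
        if var > best_var:
            best_var, best_t = var, centers[i]
    return best_t

# Bimodal "curvature" sample: flat-region noise plus sharp features.
rng = np.random.default_rng(0)
curv = np.concatenate([np.abs(rng.normal(0.02, 0.01, 900)),
                       np.abs(rng.normal(0.8, 0.1, 100))])
t = otsu_threshold(curv)
print(t)   # lands between the two curvature modes
```

Vertices whose curvature magnitude exceeds `t` would then be kept as feature-line candidates, with no user-specified threshold involved.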
  • 23. REFERENCES [1] Forrester Cole, Kevin Sanik, Doug Decarlo, Adam Finkelstein, Thomas Funkhouser, Szymon Rusinkiewicz & Manish Singh, (2009) “How Well Do Line Drawings Depict Shape?”, ACM Transaction on Graphics, Vol. 28, No. 3, pp43-51. [2] Ohtake Y., Belyaev A., & Seidel H.P, (2004) “Ridge-valley Lines on Meshes via Implicit Surface Fitting”, ACM Transactions on Graphics, Vol. 23, No. 3, pp609-612. [3] Shin Yoshizawa, Alexander Belyaev & Hans-Peter Seidel, (2005) “Fast and Robust Detection of Crest Lines on Meshes”, Symposium on Solid and Physical Modeling’05, pp227-232. [4] Soo-Kyun Kim & Chang-Hun Kim, (2006) “Finding Ridges and Valleys in A Discrete Surface Using A Modified MLS Approximation”, Computer-Aided Design, Vol. 38, No. 2, pp173-180. [5] Georgios Stylianou & Gerald Farin, (2004) “Crest Lines for Surface Segmentation and Flattening”, IEEE Transaction on Visualization and Computer Graphics, Vol. 10, No. 5, pp536-543. [6] Tilke Judd, Fredo Durand & Edward H. Adelson, (2007) “Apparent ridges for line drawing”, ACM Transactions on Graphics, Vol. 26, No. 3, pp19-26. [7] Chang Ha Lee, Amitabh Varshney & David W. Jacobs, (2005) “Mesh Saliency”, Proceedings of ACM Siggraph’05, pp659-666. [8] Ran Gal & Daniel Cohen-Or, (2006) “Salient Geometric Features for Partial Shape Matching and Similarity”, ACM Transactions on Graphics, Vol. 25, No. 1, pp130-150. [9] Taubin G, (1995) “Estimating the Tensor of Curvature of a Surface from a Polyhedral Approximation”, In Proceedings of Fifth International Conference on Computer Vision’95, pp902-907. [10] Sachin Nigam & Vandana Agrawal, (2013) “A Review: Curvature approximation on triangular meshes”, Int. J. of Engineering Science and Innovative Technology, Vol. 2, No. 3, pp330-339. [11] Xunnian Yang & Jiamin Zheng, (2013) “Curvature tensor computation by piecewise surface interpolation”, Computer Aided Design, Vol. 45, No. 12, pp1639-1650. 
[12] Gady Agam & Xiaojing Tang, (2005) “A Sampling Framework for Accurate Curvature Estimation in Discrete Surfaces”, IEEE Transaction on Visualization and Computer Graphics, Vol. 11, No. 5, pp573-582. [13] Meyer M., Desbrun M., Schroder P. & Barr A. H, (2003) “Discrete Differential-geometry Operators for Triangulated 2-manifolds”, In Visualization and Mathematics III’03, pp35-57. [14] Stupariu, Mihai-Sorin, (2016) “An application of triangle mesh models in detecting patterns of vegetation”, WSCG’2016, pp87-90. [15] Chen L., Xie X., Fan X., Ma W., Zhang H., & Zhou H, (2003) “A visual attention model for adapting images on small displays”, ACM Multimedia Systems Journal, Vol. 9, No. 4, pp353-364.
  • 24. [16] Lee, Y., Markosian, L., Lee, S., & Hughes, J. F, (2007) ”Line drawings via abstracted shading”, ACM Transactions on Graphics, Vol. 26, No. 3, pp1-9. [17] Jack Szu-Shen & His-Yung FEng, (2017) “Idealization of scanning-derived triangle mesh models of prismatic engineering parts”, International Journal on Interactive Design and Manufacturing, Vol. 11, No. 2, pp205-221. [18] Decarlo D., Finkelstein A., Rusinkiewicz S. & Santella A,(2003) “Suggestive Contours for Conveying Shape”, ACM Transactions on Graphics, Vol.22, No. 3, pp848-855. [19] M. Kolomenkin, I. Shimshoni, & A. Tal,(2008) “Demarcating curves for shape illustration”, ACM Transactions on Graphics, Vol.27, No.5, pp157-166. [20] Michael Kolomenkin,(2009) “Ilan Shimshoni and Ayellet Tal. On Edge Detection on Surface”, IEEE CVPR’ 09, pp2767-2774. [21] M. P. Do Carmo (2004) Differential geometry of curves and surfaces, Book, China Machine Press. [22] A. Belyaev, P.-A. Fayolle, & A. Pasko, (2013) “Signed Lp-distance fields”, CAD, Vol.45, No. 2, pp523-528. [23] Y Zhang, G Geng, X Wei, S Zhang & S Li, (2016) “A statistical approach for extraction of feature lines from point clouds” ,Computers & Graphics, Vol. 56, No. 3, pp31-45.
A SURVEY OF STATE-OF-THE-ART GAN-BASED APPROACHES TO IMAGE SYNTHESIS

Shirin Nasr Esfahani1 and Shahram Latifi2
1 Department of Computer Science, UNLV, Las Vegas, USA
2 Department of Electrical & Computer Eng., UNLV, Las Vegas, USA

ABSTRACT

In the past few years, Generative Adversarial Networks (GANs) have received immense attention from researchers in a variety of application domains. This rapidly growing field of deep learning provides a way to learn deep representations without extensive use of annotated training data. Its achievements may be used in a variety of applications, including speech synthesis, image and video generation, semantic image editing, and style transfer. Image synthesis is an important component of expert systems and has attracted much attention since the introduction of GANs. However, GANs are known to be difficult to train, especially when they try to generate high-resolution images. This paper gives a thorough overview of state-of-the-art GAN-based approaches in four applicable areas of image generation: text-to-image synthesis, image-to-image translation, face aging, and 3D image synthesis. Experimental results show state-of-the-art performance using GANs compared to traditional approaches in the fields of image processing and machine vision.

KEYWORDS

Conditional generative adversarial networks (cGANs), image synthesis, image-to-image translation, text-to-image synthesis, 3D GANs.

Full Text : https://aircconline.com/csit/papers/vol9/csit90906.pdf

9th International Conference on Computer Science, Engineering and Applications (CCSEA 2019) - http://airccse.org/csit/V9N09.html
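Since the abstract turns on the GAN training objective of Goodfellow et al. [1], a minimal worked form of the two-player loss may help ground the survey. The sketch below is illustrative only: `gan_losses` is a hypothetical helper name, and it assumes the discriminator emits probabilities in (0, 1), using the original minimax discriminator loss together with the commonly used non-saturating generator loss.

```python
import math

def gan_losses(d_real, d_fake):
    """Batch-averaged GAN losses.

    d_real: discriminator outputs D(x) on real samples, each in (0, 1).
    d_fake: discriminator outputs D(G(z)) on generated samples.
    Returns (d_loss, g_loss). The discriminator minimizes the negative of
    log D(x) + log(1 - D(G(z))); the generator minimizes -log D(G(z)),
    the non-saturating variant that gives stronger early gradients.
    """
    eps = 1e-12  # guard against log(0)
    d_loss = -(sum(math.log(p + eps) for p in d_real) / len(d_real)
               + sum(math.log(1.0 - p + eps) for p in d_fake) / len(d_fake))
    g_loss = -sum(math.log(p + eps) for p in d_fake) / len(d_fake)
    return d_loss, g_loss

# At the theoretical equilibrium D(x) = D(G(z)) = 0.5,
# the discriminator loss equals 2*log(2).
```

A real implementation would compute these losses over network outputs in a deep-learning framework, but the scalar form above is exactly the value function the surveyed papers extend with conditioning, attention, or stacked stages.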
REFERENCES

[1] Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A. & Bengio, Y. (2014) "Generative adversarial nets", Advances in Neural Information Processing Systems 27 (NIPS 2014), Montreal, Canada.
[2] Frey, B. J. (1998) "Graphical models for machine learning and digital communication", MIT Press.
[3] Doersch, C. (2016) "Tutorial on variational autoencoders", arXiv preprint arXiv:1606.05908.
[4] M. Mirza & S. Osindero (2014) "Conditional generative adversarial nets", arXiv:1411.1784v1.
[5] S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele & H. Lee (2016) "Generative adversarial text to image synthesis", International Conference on Machine Learning, New York, USA, pp. 1060-1069.
[6] A. Radford, L. Metz & S. Chintala (2016) "Unsupervised representation learning with deep convolutional generative adversarial networks", 4th International Conference on Learning Representations (ICLR 2016), San Juan, Puerto Rico.
[7] S. Reed, Z. Akata, S. Mohan, S. Tenka, B. Schiele & H. Lee (2016) "Learning what and where to draw", Advances in Neural Information Processing Systems, pp. 217-225.
[8] S. Zhu, S. Fidler, R. Urtasun, D. Lin & C. L. Chen (2017) "Be your own Prada: Fashion synthesis with structural coherence", International Conference on Computer Vision (ICCV 2017), Venice, Italy, pp. 1680-1688.
[9] S. Sharma, D. Suhubdy, V. Michalski, S. E. Kahou & Y. Bengio (2018) "ChatPainter: Improving text to image generation using dialogue", 6th International Conference on Learning Representations (ICLR 2018 Workshop), Vancouver, Canada.
[10] Z. Zhang, Y. Xie & L. Yang (2018) "Photographic text-to-image synthesis with a hierarchically-nested adversarial network", Conference on Computer Vision and Pattern Recognition (CVPR 2018), Salt Lake City, USA, pp. 6199-6208.
[11] M. Cha, Y. Gwon & H. T. Kung (2017) "Adversarial nets with perceptual losses for text-to-image synthesis", International Workshop on Machine Learning for Signal Processing (MLSP 2017), Tokyo, Japan, pp. 1-6.
[12] H. Dong, S. Yu, C. Wu & Y. Guo (2017) "Semantic image synthesis via adversarial learning", International Conference on Computer Vision (ICCV 2017), Venice, Italy, pp. 5706-5714.
[13] H. Zhang, T. Xu, H. Li, S. Zhang, X. Wang, X. Huang & D. Metaxas (2017) "StackGAN: Text to photo-realistic image synthesis with stacked generative adversarial networks", International Conference on Computer Vision (ICCV 2017), Venice, Italy, pp. 5907-5915.
[14] S. Hong, D. Yang, J. Choi & H. Lee (2018) "Inferring semantic layout for hierarchical text-to-image synthesis", Conference on Computer Vision and Pattern Recognition (CVPR 2018), Salt Lake City, USA, pp. 7986-7994.
[15] Y. Li, M. R. Min, D. Shen, D. Carlson & L. Carin (2018) "Video generation from text", 14th Artificial Intelligence and Interactive Digital Entertainment Conference (AIIDE 2018), Edmonton, Canada.
[16] J. Chen, Y. Shen, J. Gao, J. Liu & X. Liu (2017) "Language-based image editing with recurrent attentive models", IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2018), Salt Lake City, USA, pp. 8721-8729.
[17] A. Dash, J. C. B. Gamboa, S. Ahmed, M. Liwicki & M. Z. Afzal (2017) "TAC-GAN - Text conditioned auxiliary classifier GAN", arXiv preprint arXiv:1703.06412.
[18] A. Odena, C. Olah & J. Shlens (2017) "Conditional image synthesis with auxiliary classifier GANs", Proceedings of the 34th International Conference on Machine Learning (ICML 2017), Sydney, Australia.
[19] H. Zhang, I. Goodfellow, D. Metaxas & A. Odena (2018) "Self-attention generative adversarial networks", arXiv preprint arXiv:1805.08318.
[20] T. Xu, P. Zhang, Q. Huang, H. Zhang, Z. Gan, X. Huang & X. He (2018) "AttnGAN: Fine-grained text to image generation with attentional generative adversarial networks", IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2018), Salt Lake City, USA, pp. 1316-1324.
[21] T. Salimans, I. J. Goodfellow, W. Zaremba, V. Cheung, A. Radford & X. Chen (2016) "Improved techniques for training GANs", Advances in Neural Information Processing Systems 29 (NIPS 2016), Barcelona, Spain.
[22] P. Isola, J.-Y. Zhu, T. Park & A. A. Efros (2017) "Image-to-image translation with conditional adversarial networks", IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), Honolulu, Hawaii, USA, pp. 1125-1134.
[23] J.-Y. Zhu, T. Park, P. Isola & A. A. Efros (2017) "Unpaired image-to-image translation using cycle-consistent adversarial networks", IEEE International Conference on Computer Vision (ICCV 2017), Venice, Italy, pp. 2223-2232.
[24] M.-Y. Liu & O. Tuzel (2016) "Coupled generative adversarial networks", Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain, pp. 469-477.
[25] J. Donahue, P. Krähenbühl & T. Darrell (2016) "Adversarial feature learning", 4th International Conference on Learning Representations (ICLR 2016), San Juan, Puerto Rico.
[26] V. Dumoulin, I. Belghazi, B. Poole, A. Lamb, M. Arjovsky, O. Mastropietro & A. Courville (2017) "Adversarially learned inference", 5th International Conference on Learning Representations (ICLR 2017), Toulon, France.
[27] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth & B. Schiele (2016) "The Cityscapes dataset for semantic urban scene understanding", IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2016), Las Vegas, USA, pp. 3213-3223.
[28] Q. Chen & V. Koltun (2017) "Photographic image synthesis with cascaded refinement networks", IEEE International Conference on Computer Vision (ICCV 2017), Venice, Italy, pp. 1520-1529.
[29] T.-C. Wang, M.-Y. Liu, J.-Y. Zhu, A. Tao, J. Kautz & B. Catanzaro (2018) "High-resolution image synthesis and semantic manipulation with conditional GANs", IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2018), Salt Lake City, USA, pp. 8798-8807.
[30] G. Lample, N. Zeghidour, N. Usunier, A. Bordes, L. Denoyer & M. Ranzato (2017) "Fader networks: Manipulating images by sliding attributes", Advances in Neural Information Processing Systems 30 (NIPS 2017), Long Beach, USA.
[31] D. Michelsanti & Z.-H. Tan (2017) "Conditional generative adversarial networks for speech enhancement and noise-robust speaker verification", Proceedings of Interspeech, pp. 2008-2012.
[32] G. Antipov, M. Baccouche & J.-L. Dugelay (2017) "Face aging with conditional generative adversarial networks", IEEE International Conference on Image Processing (ICIP 2017), pp. 2089-2093.
[33] R. H. Byrd, P. Lu, J. Nocedal & C. Zhu (1995) "A limited memory algorithm for bound constrained optimization", SIAM Journal on Scientific Computing, vol. 16, no. 5, pp. 1190-1208.
[34] Z. Wang, X. Tang, W. Luo & S. Gao (2018) "Face aging with identity preserved conditional generative adversarial networks", IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2018), Salt Lake City, USA, pp. 7939-7947.
[35] G. Antipov, M. Baccouche & J.-L. Dugelay (2017) "Boosting cross-age face verification via generative age normalization", International Joint Conference on Biometrics (IJCB 2017), Denver, USA, pp. 17.
[36] E. Learned-Miller, G. B. Huang, A. RoyChowdhury, H. Li & G. Hua (2016) "Labeled Faces in the Wild: A survey", Advances in Face Detection and Facial Image Analysis, Springer, pp. 189-248.
[37] B. Amos, B. Ludwiczuk & M. Satyanarayanan (2016) "OpenFace: A general-purpose face recognition library with mobile applications", Technical Report CMU-CS-16-118, CMU School of Computer Science.
[38] Z. Zhang, Y. Song & H. Qi (2017) "Age progression/regression by conditional adversarial autoencoder", IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), Honolulu, USA, pp. 4352-4360.
[39] S. Liu, Y. Sun, D. Zhu, R. Bao, W. Wang, X. Shu & S. Yan (2017) "Face aging with contextual generative adversarial nets", Proceedings of the 25th ACM International Conference on Multimedia, Mountain View, USA, pp. 82-90.
[40] J. Song, J. Zhang, L. Gao, X. Liu & H. T. Shen (2018) "Dual conditional GANs for face aging and rejuvenation", Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18), Stockholm, Sweden, pp. 899-905.
[41] H. Yang, D. Huang, Y. Wang & A. K. Jain (2018) "Learning face age progression: A pyramid architecture of GANs", IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2018), Salt Lake City, USA, pp. 31-39.
[42] P. Li, Y. Hu, Q. Li, R. He & Z. Sun (2018) "Global and local consistent age generative adversarial networks", IEEE International Conference on Pattern Recognition, Beijing, China.
[43] P. Li, Y. Hu, R. He & Z. Sun (2018) "Global and local consistent wavelet-domain age synthesis", arXiv:1809.07764.
[44] J. Wu, C. Zhang, T. Xue, W. T. Freeman & J. B. Tenenbaum (2016) "Learning a probabilistic latent space of object shapes via 3D generative-adversarial modeling", Advances in Neural Information Processing Systems 29 (NIPS 2016), Barcelona, Spain.
[45] J. Wu, Y. Wang, T. Xue, X. Sun, B. Freeman & J. Tenenbaum (2017) "MarrNet: 3D shape reconstruction via 2.5D sketches", Advances in Neural Information Processing Systems, Long Beach, USA, pp. 540-550.
[46] W. Wang, Q. Huang, S. You, C. Yang & U. Neumann (2017) "Shape inpainting using 3D generative adversarial network and recurrent convolutional networks", IEEE International Conference on Computer Vision (ICCV 2017), Venice, Italy, pp. 2298-2306.
[47] E. J. Smith & D. Meger (2017) "Improved adversarial systems for 3D object generation and reconstruction", First Annual Conference on Robot Learning, Mountain View, USA, pp. 87-96.
[48] P. Achlioptas, O. Diamanti, I. Mitliagkas & L. Guibas (2018) "Learning representations and generative models for 3D point clouds", 6th International Conference on Learning Representations, Vancouver, Canada.
[49] X. Sun, J. Wu, X. Zhang, Z. Zhang, C. Zhang, T. Xue, J. B. Tenenbaum & W. T. Freeman (2018) "Pix3D: Dataset and methods for single-image 3D shape modeling", IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2018), Salt Lake City, USA, pp. 2974-2983.
[50] D. Maturana & S. Scherer (2015) "VoxNet: A 3D convolutional neural network for real-time object recognition", 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, pp. 922-928.
[51] B. Shi, S. Bai, Z. Zhou & X. Bai (2015) "DeepPano: Deep panoramic representation for 3-D shape recognition", IEEE Signal Processing Letters, Vol. 22, No. 12, pp. 2339-2343.
[52] A. Brock, T. Lim, J. Ritchie & N. Weston (2016) "Generative and discriminative voxel modeling with convolutional neural networks", arXiv:1608.04236.
AUTHORS

Shirin Nasr Esfahani received her M.S. degree in computer science - scientific computation from Sharif University of Technology, Tehran, Iran. She is currently a Ph.D. candidate in computer science at the University of Nevada, Las Vegas (UNLV). Her fields of interest include hyperspectral image processing, neural networks, deep learning and data mining.

Shahram Latifi received the Master of Science and the PhD degrees, both in Electrical and Computer Engineering, from Louisiana State University, Baton Rouge, in 1986 and 1989, respectively. He is currently a Professor of Electrical Engineering at the University of Nevada, Las Vegas.
BLIND IMAGE QUALITY ASSESSMENT USING SINGULAR VALUE DECOMPOSITION BASED DOMINANT EIGENVECTORS FOR FEATURE SELECTION

Besma Sadou1, Atidel Lahoulou2*, Toufik Bouden1, Anderson R. Avila3, Tiago H. Falk3, Zahid Akhtar4
1 Non Destructive Testing Laboratory, University of Jijel, Algeria
2 LAOTI Laboratory, University of Jijel, Algeria
3 Institut National de la Recherche Scientifique, University of Québec, Montreal, Canada
4 University of Memphis, USA

ABSTRACT

In this paper, a new no-reference image quality assessment (NR-IQA) metric for grey images is proposed and evaluated on the LIVE II image database. The features used are extracted from three well-known NR-IQA objective metrics based on natural scene statistical attributes from three different domains. These features may be redundant, noisy or less informative, which degrades quality score prediction. To overcome this drawback, the first step of our work consists in selecting the most relevant image quality features using Singular Value Decomposition (SVD) based dominant eigenvectors. The second step employs a Relevance Vector Machine (RVM) to learn the mapping between the previously selected features and human opinion scores. Simulations demonstrate that the proposed metric performs very well in terms of correlation and monotonicity.

KEYWORDS

Natural Scene Statistics (NSS), Singular Value Decomposition (SVD), dominant eigenvectors, Relevance Vector Machine (RVM).

Full Text : https://aircconline.com/csit/papers/vol9/csit90919.pdf

9th International Conference on Computer Science, Engineering and Applications (CCSEA 2019) - http://airccse.org/csit/V9N09.html
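The abstract's first step (dominant eigenvectors for feature selection) can be sketched with plain power iteration: the dominant right-singular vector of the feature matrix is also the dominant eigenvector of its Gram matrix, and the features with the largest-magnitude loadings on it carry the main direction of variation. This is a simplified stand-in, not the authors' implementation; the function names, the power-iteration shortcut, and the top-k ranking rule are all assumptions made here for illustration.

```python
def dominant_eigenvector(X, iters=200):
    """Power iteration on the Gram matrix G = X^T X.

    X: list of samples, each a list of d feature values.
    Returns a unit-norm approximation of the dominant right-singular
    vector of X; the absolute value of entry i scores feature i's
    loading on the main direction of variation.
    """
    d = len(X[0])
    # G[i][j] = sum over samples of feature_i * feature_j
    G = [[sum(row[i] * row[j] for row in X) for j in range(d)] for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(G[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

def select_features(X, k):
    """Keep the k feature indices with the largest absolute loadings."""
    v = dominant_eigenvector(X)
    ranked = sorted(range(len(v)), key=lambda i: -abs(v[i]))
    return sorted(ranked[:k])
```

A full SVD (as the paper's title suggests) would expose all singular vectors at once; the power-iteration shortcut above recovers only the dominant one, which is enough to show how eigenvector loadings translate into a feature ranking.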
REFERENCES

[1] D. Zhang, Y. Ding & N. Zheng, "Nature scene statistics approach based on ICA for no-reference image quality assessment", Proceedings of International Workshop on Information and Electronics Engineering (IWIEE), 29 (2012), 3589-3593.
[2] A. K. Moorthy & A. C. Bovik, "A two-step framework for constructing blind image quality indices", IEEE Signal Process. Lett., 17 (2010), 513-516.
[3] L. Zhang, L. Zhang & A. C. Bovik, "A feature-enriched completely blind image quality evaluator", IEEE Transactions on Image Processing, 24(8) (2015), 2579-2591.
[4] M. A. Saad, A. C. Bovik & C. Charrier, "A DCT statistics-based blind image quality index", IEEE Signal Process. Lett., 17 (2010), 583-586.
[5] M. A. Saad, A. C. Bovik & C. Charrier, "Blind image quality assessment: A natural scene statistics approach in the DCT domain", IEEE Trans. Image Process., 21 (2012), 3339-3352.
[6] A. Mittal, A. K. Moorthy & A. C. Bovik, "No-reference image quality assessment in the spatial domain", IEEE Trans. Image Process., 21 (2012), 4695-4708.
[7] A. Mittal, R. Soundararajan & A. C. Bovik, "Making a completely blind image quality analyzer", IEEE Signal Process. Lett., 20 (2013), 209-212.
[8] N. Kruger, P. Janssen, S. Kalkan, M. Lappe, A. Leonardis, J. Piater, A. Rodriguez-Sanchez & L. Wiskott, "Deep hierarchies in the primate visual cortex: What can we learn for computer vision?", IEEE Trans. Pattern Anal. Mach. Intell., 35 (2013), 1847-1871.
[9] D. J. Felleman & D. C. Van Essen, "Distributed hierarchical processing in the primate cerebral cortex", Cerebral Cortex, 1 (1991), 1-47.
[10] B. Sadou, A. Lahoulou & T. Bouden, "A new no-reference color image quality assessment metric in wavelet and gradient domains", 6th International Conference on Control Engineering and Information Technologies, Istanbul, Turkey, 25-27 October (2018), 954-959.
[11] Q. Wu, H. Li, F. Meng, K. N. Ngan & S. Zhu, "No reference image quality assessment metric via multi-domain structural information and piecewise regression", J. Vis. Commun. Image R., 32 (2015), 205-216.
[12] X. Shang, X. Zhao & Y. Ding, "Image quality assessment based on joint quality-aware representation construction in multiple domains", Journal of Engineering, 2018 (2018), 12p.
[13] A. Lahoulou, E. Viennet & A. Beghdadi, "Selecting low-level features for image quality assessment by statistical methods", J. Comput. Inf. Technol. CIT, 18 (2010), 83-195.
[14] H. Liu, H. Motoda, R. Setiono & Z. Zhao, "Feature selection: An ever evolving frontier in data mining", Journal of Machine Learning Research, Proceedings Track, pp. 4-13, 2010.
[15] H. R. Sheikh, Z. Wang, L. Cormack & A. C. Bovik, "LIVE Image Quality Assessment Database Release 2", http://live.ece.utexas.edu/research/quality
[16] Final VQEG report on the validation of objective quality metrics for video quality assessment: http://www.its.bldrdoc.gov/vqeg/projects/frtv_phaseI/
[17] M. W. Mahoney & P. Drineas, "CUR matrix decompositions for improved data analysis", Proc. National Academy of Sciences, February 2009.
[18] M. E. Tipping, "The relevance vector machine", In Advances in Neural Information Processing Systems 12, Solla S. A., Leen T. K., Müller K.-R. (eds), MIT Press, Cambridge, MA (2000), 652-658.
[19] D. Basak, S. Pal & D. C. Patranabis, "Support vector regression", Neural Information Processing - Letters and Reviews, 11 (2007).
[20] B. Schölkopf & A. J. Smola, Learning with Kernels, MIT Press, Cambridge (2002).
[21] H. R. Sheikh, M. F. Sabir & A. C. Bovik, "A statistical evaluation of recent full reference image quality assessment algorithms", IEEE Trans. Image Process., 15 (2006), 3440-3451.

AUTHORS

Besma Sadou is currently a PhD student in the department of Electronics at the University of Jijel (Algeria). She also works as a full-time teacher of mathematics at middle school. Her research interests are focused on reduced- and no-reference image quality assessment.

Atidel Lahoulou has been a Doctor in Signals and Images from Sorbonne Paris Cité (France) since 2012. She earned her Habilitation Universitaire in 2017 and is currently an associate professor in the department of computer science at the University of Jijel (Algeria). Her research interests include visual data quality evaluation and enhancement, biometrics, machine learning and cybersecurity.

Toufik Bouden received the engineer diploma (1992), MSc (1995) and PhD (2007) degrees in automatics and signal processing from the Electronics Institute of Annaba University (Algeria). Since 2015, he has been a full professor in the department of Automatics. His areas of research are signal and image processing, nondestructive testing and materials characterization, biometrics, transmission security and watermarking, chaos, and fractional system analysis, synthesis and control.

Anderson R. Avila received his B.Sc. in Computer Science from the Federal University of Sao Carlos, Brazil, in 2004 and his M.Sc. in Information Engineering from the Federal University of ABC in 2014. In October 2013, Anderson worked as a short-term visiting researcher at INRS, where he now pursues his Ph.D. degree on the topic of speaker and emotion recognition. His research interests include pattern recognition and multimodal signal processing applied to biometrics.

Tiago H. Falk is an Associate Professor at INRS-EMT, University of Quebec, and Director of the Multimedia Signal Analysis and Enhancement (MuSAE) Lab. His research interests are in multimedia quality measurement and enhancement, with a particular focus on human-inspired technologies.

Zahid Akhtar is a research assistant professor at the University of Memphis (USA). Prior to joining the University of Memphis, he was a postdoctoral fellow at INRS-EMT, University of Quebec (Canada), the University of Udine (Italy), Bahcesehir University (Turkey), and the University of Cagliari (Italy), respectively. Dr. Akhtar received a PhD in electronic and computer engineering from the University of Cagliari (Italy). His research interests are biometrics, affect recognition, multimedia quality assessment, and cybersecurity.
VULNERABILITY ANALYSIS OF IP CAMERAS USING ARP POISONING

Thomas Doughty1, Nauman Israr2 and Usman Adeel3
1 BSc (Hons) Cyber Security and Networks, Teesside University, Middlesbrough, UK
2 Senior Lecturer in Networks and Communication, Teesside University, Middlesbrough, UK
3 Senior Lecturer in Computer Science, Teesside University, Middlesbrough, UK

ABSTRACT

Internet Protocol (IP) cameras and Internet of Things (IoT) devices are known for their vulnerabilities, and Man-in-the-Middle attacks present a significant privacy and security concern. Because these attacks are easy to perform and highly effective, they allow attackers to steal information and disrupt access to services. We evaluate the security of six IP cameras by performing and outlining various attacks that could be used by criminals. A threat scenario describes how a criminal may attack cameras before and during a burglary. Our findings show that IP cameras remain vulnerable to ARP Poisoning or Spoofing, and while some cameras use Digest Authentication to obfuscate passwords, some vendors and applications remain insecure. We suggest methods to prevent ARP Poisoning, and reiterate the need for good password policy.

KEYWORDS

Security, Camera, Internet of Things, Passwords, Sniffing, Authentication

Full Text : https://aircconline.com/csit/papers/vol9/csit90712.pdf

8th International Conference on Soft Computing, Artificial Intelligence and Applications (SAI 2019) - http://airccse.org/csit/V9N07.html
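The attack class the paper studies has a simple defensive counterpart: after ARP poisoning, one MAC address typically answers for two or more IP addresses (the attacker's own plus the spoofed gateway's or camera's). A minimal detection sketch follows, assuming an already-parsed table of (ip, mac) pairs; the function name and input format are hypothetical, and a real monitor would obtain the table from `/proc/net/arp`, `arp -a` output, or sniffed ARP replies.

```python
def detect_arp_spoofing(arp_table):
    """Flag MAC addresses claimed by more than one IP address,
    the classic symptom of an ARP cache poisoning attack.

    arp_table: iterable of (ip, mac) string pairs.
    Returns {mac: sorted list of IPs} for every MAC address that
    appears against two or more distinct IPs; an empty dict means
    no duplicate-MAC symptom was observed.
    """
    by_mac = {}
    for ip, mac in arp_table:
        # Normalize case so "AA:BB..." and "aa:bb..." match.
        by_mac.setdefault(mac.lower(), set()).add(ip)
    return {mac: sorted(ips) for mac, ips in by_mac.items() if len(ips) > 1}
```

Duplicate-MAC detection is only a heuristic (static ARP entries and some failover setups also trigger it), which is consistent with the paper's point that prevention, not just detection, is needed.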
REFERENCES

[1] H. Sinanovic & S. Mrdovic, "Analysis of Mirai malicious software", 2017 25th International Conference on Software, Telecommunications and Computer Networks (SoftCOM), Sep. 2017, pp. 1-5. DOI: 10.23919/SOFTCOM.2017.8115504.
[2] C. Kolias, G. Kambourakis, A. Stavrou & J. Voas, "DDoS in the IoT: Mirai and other botnets", Computer, vol. 50, no. 7, pp. 80-84, 2017, ISSN: 0018-9162. DOI: 10.1109/MC.2017.201.
[3] J. Liranzo & T. Hayajneh, "Security and privacy issues affecting cloud-based IP camera", 2017 IEEE 8th Annual Ubiquitous Computing, Electronics and Mobile Communication Conference (UEMCON), New York, NY, 2017, pp. 458-465. DOI: 10.1109/UEMCON.2017.8249043.
[4] M. Smith (2014). Peeping into 73,000 unsecured security cameras thanks to default passwords, [Online]. Available: https://www.csoonline.com/article/2844283/microsoft-subnet/peeping-into-73-000-unsecured-security-cameras-thanks-to-default-passwords.html.
[5] F. Callegati, W. Cerroni & M. Ramilli, "Man-in-the-middle attack to the HTTPS protocol", IEEE Security & Privacy, vol. 7, no. 1, pp. 78-81, Jan. 2009, ISSN: 1540-7993. DOI: 10.1109/MSP.2009.12.
[6] P. Arote & K. V. Arya, "Detection and prevention against ARP poisoning attack using modified ICMP and voting", 2015 International Conference on Computational Intelligence and Networks, Jan. 2015, pp. 136-141. DOI: 10.1109/CINE.2015.34.
[7] K. Boyarinov & A. Hunter, "Security and trust for surveillance cameras", 2017 IEEE Conference on Communications and Network Security (CNS), Oct. 2017, pp. 384-385. DOI: 10.1109/CNS.2017.8228676.
[8] ONVIF (2018). Conformant products, [Online]. Available: https://www.onvif.org/conformantproducts/.
[9] R. Alharbi & D. Aspinall, "An IoT analysis framework: An investigation of IoT smart cameras' vulnerabilities", Living in the Internet of Things: Cybersecurity of the IoT - 2018, Mar. 2018, pp. 1-10. DOI: 10.1049/cp.2018.0047.
[10] H. Schulzrinne, A. Rao, R. Lanphier, M. Westerlund & M. Stiemerling, Real-Time Streaming Protocol Version 2.0, RFC 7826, Dec. 2016. DOI: 10.17487/RFC7826. [Online]. Available: https://rfc-editor.org/rfc/rfc7826.txt.
[11] Aircrack-ng (2018). Aircrack-ng, [Online]. Available: https://www.aircrack-ng.org/.
[12] Foscam (2018). FI9826W, [Online]. Available: https://www.foscam.com/product/2.html.
[13] Hikvision. DS-2CD2535FWD-I(W)(S), [Online]. Available: https://www.hikvision.com/en/Products/Network-Camera/EasyIP-3.0/3MP/DS-2CD2535FWDI(W)(S).
[14] LILIN (2018). Model: LR2522E4 / LR2522E6, [Online]. Available: https://www.meritlilin.com/en/product/LR2522E4LR2522E6.
[15] LILIN (2018). Model: IPR722ES4.3 / IPR722ES6, [Online]. Available: https://www.meritlilin.com/en/product/IPR722ESIPR722ES6.
[16] Sricam (2018). SP008, [Online]. Available: http://www.sricam.com/product/id/9d5d656a907f46e48da1d45b9d0115ed.html.
[17] Sricam. SP017, [Online]. Available: http://www.sricam.com/product/id/66e005d40593482ca14957fe87562952.html.
[18] J. Franks, P. M. Hallam-Baker, J. L. Hostetler, S. D. Lawrence, P. J. Leach, A. Luotonen & L. C. Stewart (Jun. 1999). HTTP authentication: Basic and digest access authentication, [Online]. Available: http://www.rfc-editor.org/rfc/rfc2617.txt.
[19] P. Hawkes, M. Paddon & G. G. Rose, Musings on the Wang et al. MD5 collision, Cryptology ePrint Archive, Report 2004/264, 2004. [Online]. Available: https://eprint.iacr.org/2004/264.
[20] D. Pauli (2016). Security! experts! slam! Yahoo! management! for! using! old! crypto! [Online]. Available: https://www.theregister.co.uk/2016/12/15/yahoospasswordhash/.
[21] P. Shankdhar (2018). Popular tools for brute-force attacks (updated for 2018), [Online]. Available: https://resources.infosecinstitute.com/popular-tools-for-brute-force-attacks/.
[22] The Snort Project (2018). Snort Users Manual 2.9.12, [Online]. Available: http://manual-snort-org.s3-website-us-east-1.amazonaws.com/.
[23] N. Tripathi & B. M. Mehtre, "Analysis of various ARP poisoning mitigation techniques: A comparison", 2014 International Conference on Control, Instrumentation, Communication and Computational Technologies (ICCICCT), Jul. 2014, pp. 125-132. DOI: 10.1109/ICCICCT.2014.6992942.
[24] R. Shekh-Yusef, D. Ahrens & S. Bremer, "HTTP digest access authentication", RFC Editor, RFC 7616, Sep. 2015.
[25] E. W. (2018). Not perfect, but better: Improving security one step at a time, [Online]. Available: https://www.ncsc.gov.uk/blog-post/not-perfect-better-improving-security-one-step-time.
AUTHORS

Thomas Doughty is a graduate of Teesside University and received a BSc (Hons) in Cyber Security and Networks. His research interests include Cyber Security and the Internet of Things.

Dr. Nauman Israr is currently a Senior Lecturer in Networks and Communication at Teesside University. His research interests include Wireless Sensor Networks, Intelligent Computing and Cluster Communication.

Dr. Usman Adeel is currently a Senior Lecturer in Computer Science at Teesside University. He holds a PhD in Computing from Imperial College London. His research interests are focused on Distributed Sensing Systems and their applications for the Internet of Things and Cyber-physical Systems.
BRAIN COMPUTER INTERFACE FOR BIOMETRIC AUTHENTICATION BY RECORDING SIGNAL

Abd Abrahim Mosslah1, Reyadh Hazim Mahdi2 and Shokhan M. Al-Barzinji3
1 University of Anbar, College of Islamic Science, Anbar, Iraq
2 Dept. of Computer Science, College of Science, University of Mustansiriyah, Baghdad, Iraq
3 College of Computer Science and Information Technology, University of Anbar

ABSTRACT

Electroencephalography (EEG) records what are referred to as brainwaves, which scientists interpret as an electromagnetic phenomenon reflecting activity in the human brain. EEG is used to diagnose brain diseases such as schizophrenia, epilepsy, Parkinson's and Alzheimer's, and it is also used in brain-machine and brain-computer interfaces; in these applications, wireless recording of the waves is necessary. What we need today is authentication. Authentication can be obtained through several techniques, and in this paper we examine the efficiency of techniques such as passwords and PINs. There are also biometric techniques used to obtain authentication, such as heart rate, fingerprint, eye mesh and voice; these techniques give acceptable authentication. To obtain a technology that provides integrated and efficient authentication, we use brainwave recording. The aim of the technique in our proposed paper is to improve the efficiency of the reception of the brain's waves and to provide authentication.

KEYWORDS

Related work, EEG brain signal, Brain wave, Overall project outline, System requirements.

Full Text : https://aircconline.com/csit/papers/vol9/csit90613.pdf

6th International Conference on Artificial Intelligence and Applications (AIAP-2019) - http://airccse.org/csit/V9N06.html
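EEG-based authentication pipelines like the one proposed usually begin by compressing the raw recording into band-power features over the conventional delta/theta/alpha/beta ranges. The naive-DFT sketch below illustrates only that feature-extraction step; the function name and band edges are common conventions assumed here, not the paper's implementation, which would use a proper FFT and windowing.

```python
import math

def band_powers(signal, fs, bands=None):
    """Band-power features via a naive DFT.

    signal: list of EEG samples; fs: sampling rate in Hz.
    Returns {band_name: total spectral power in that band} over the
    textbook EEG bands (delta 1-4 Hz, theta 4-8 Hz, alpha 8-13 Hz,
    beta 13-30 Hz).
    """
    if bands is None:
        bands = {"delta": (1, 4), "theta": (4, 8),
                 "alpha": (8, 13), "beta": (13, 30)}
    n = len(signal)
    powers = {name: 0.0 for name in bands}
    for k in range(1, n // 2):  # skip DC, keep positive frequencies
        freq = k * fs / n
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        p = (re * re + im * im) / n
        for name, (lo, hi) in bands.items():
            if lo <= freq < hi:
                powers[name] += p
    return powers
```

A feature vector of such band powers (per electrode) is the kind of input the cited classifiers, from neural networks to MAP-adapted models, are trained on to distinguish one subject's EEG from another's.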
REFERENCES

[1] Electroencephalogram.PDF, 15 July 2007.
[2] Wenjie Xu, Cuntai Guan, Chng Eng Siong, S. Ranganatha, M. Thulasidas & Jiankang Wu, "High accuracy classification of EEG signal", 17th International Conference on Pattern Recognition (ICPR '04), pp. 391-394.
[3] Marcel, S. & Millán, J. d. R. (2007). "Person authentication using brainwaves (EEG) and maximum a posteriori model adaptation", IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(4), pp. 743-752.
[4] Poulos, M., Rangoussi, M., Alexandris, N. & Evangelou, A. (2001). "On the use of EEG features towards person identification via neural networks", Informatics for Health and Social Care, 26(1), pp. 35-48.
[5] Palaniappan, R. (2008). "Two-stage biometric authentication method using thought activity brain waves", International Journal of Neural Systems, 18(01), pp. 59-66.
[6] E. Başar, Brain Function and Oscillations: Integrative Brain Function. Neurophysiology and Cognitive Processes, Springer Series in Synergetics, Springer, 1999. ISBN 9783540643456.
[7] W. Klimesch, "Theta band power in the human scalp EEG and the encoding of new information", Neuroreport, vol. 7, no. 7, pp. 1235-1240, 1996.
[8] Bressler, S. L., "The gamma wave: a cortical information carrier?", Trends Neurosci., 1990; 13: 161-162.
[9] Patrizio Campisi & Daria La Rocca, "Brain waves for automatic biometric-based user recognition", IEEE Transactions on Information Forensics and Security, Vol. 9, No. 5, pp. 782-800, May 2014.
[10] J. Klonovs, C. Petersen, H. Olesen & A. Hammershoj, "ID proof on the go: Development of a mobile EEG-based biometric authentication system", IEEE Veh. Technol. Mag., vol. 8, no. 1, pp. 81-89, Mar. 2013.
[11] Abd et al., "Biometrics detection and recognition based-on geometrical features extraction", Proceedings of the IEEE 2018 International Conference on Advance of Sustainable Engineering and its Application (ICASEA), 14-15 March 2018. Added to IEEE Xplore: 04 June 2018, INSPEC Accession Number: 17807703, DOI: 10.1109/ICASEA.2018.8370956.
[12] K. Brigham & B. V. Kumar, "Subject identification from electroencephalogram (EEG) signals during imagined speech", Proc. IEEE 4th Int. Conf. BTAS, Sep. 2010, pp. 1-8.
[13] M. Poulos, M. Rangoussi & N. Alexandris, "Neural network based person identification using EEG features", Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., vol. 2, Mar. 1999, pp. 1117-1120.
[14] M. Poulos, M. Rangoussi, V. Chrissikopoulos & A. Evangelou, "Person identification based on parametric processing of the EEG", Proc. 6th IEEE Int. Conf. Electr., Circuit Syst., Sept. 1999, pp. 283-286.
  • 41.
[15] C. He and Z. J. Wang, "An independent component analysis (ICA) based approach for EEG person authentication," in Proc. 3rd ICBBE, 2010, pp. 1-10.
[16] A. Riera, A. Soria-Frisch, M. Caparrini, C. Grau and G. Ruffini, "Unobtrusive biometric system based on electroencephalogram analysis," EURASIP J. Adv. Signal Process., vol. 2008, 2008.
[17] F. Su, H. Zhou, Z. Feng and J. Ma, "A biometric-based covert warning system using EEG," in Proc. 5th IAPR Int. Conf. Biometrics ICB, 2012, pp. 342-347.
[18] P. Campisi et al., "Brain waves based user recognition using the 'eyes closed resting conditions' protocol," in Proc. IEEE Int. WIFS, Nov. 2011, pp. 1-6.
[19] D. La Rocca, P. Campisi and G. Scarano, "On the repeatability of EEG features in a biometric recognition framework using a resting state protocol," in Proc. BIOSIGNALS, 2013, pp. 20-2.
[20] R. Paranjape, J. Mahovsky, L. Benedicenti and Z. Koles, "The electroencephalogram as a biometric," in Proc. Can. Conf. Electr. Comput. Eng., 2001, pp. 1363-1366.
[21] K. Das, S. Zhang, B. Giesbrecht and M. P. Eckstein, "Using Rapid Visually Evoked EEG Activity for Person Identification," 2493.
AUTHORS
Abd Abrahim Mosslah was born in the Alaesawi village, Fallujah, in 1971. He obtained his M.Sc. in Computer Science from Mustansiriyah University, Baghdad, Iraq. He is currently an instructor at the College of Islamic Science, University of Anbar, Iraq. His research interests are artificial neural networks, computer networks, image processing, software engineering and genetic algorithms.
Reyadh Hazim Mahdi obtained his M.Sc. from Universiti Utara Malaysia. He is currently an instructor at the College of Science, University of Mustansiriyah, Baghdad, Iraq. His research interests are artificial neural networks, computer networks, image processing and software engineering.
Shokhan M. Al-Barzinji, University of Anbar, Anbar, Iraq, is currently an instructor at the College of Computer Science and Information Technology, University of Anbar, Anbar, Iraq. Her research interests are medical image processing, image processing, Internet of Things, cloud computing and visualization.
  • 42. METHOD FOR THE DETECTION OF CARRIER-IN-CARRIER SIGNALS BASED ON FOURTH-ORDER CUMULANTS
Vasyl Semenov1, Pavel Omelchenko1 and Oleh Kruhlyk1
1 Department of Algorithms, Delta SPE LLC, Kiev, Ukraine
ABSTRACT
A method for the detection of Carrier-in-Carrier signals based on the calculation of fourth-order cumulants is proposed. In accordance with a methodology based on the "Area under the curve" (AUC) parameter, a threshold value for the decision rule is established. It was found that the proposed method provides correct detection of the sum of QPSK signals over a wide range of signal-to-noise ratios. The obtained AUC value indicates the high efficiency of the proposed detection method. The advantage of the proposed method over the "radiuses" method is also shown.
KEYWORDS
Carrier-in-Carrier, Cumulants, QPSK.
Full Text : https://aircconline.com/csit/papers/vol9/csit90503.pdf
7th International Conference on Computational Science and Engineering (CSE) - http://airccse.org/csit/V9N05.html
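To illustrate the kind of statistic involved, the fourth-order cumulant C42 of a zero-mean complex signal can be estimated from sample moments and compared against a threshold. This is a generic sketch under our own assumptions: the function names, the C42 normalization and the threshold value are illustrative and are not the paper's actual decision rule.

```python
import numpy as np

def fourth_order_cumulant(x):
    """Sample estimate of the fourth-order cumulant C42 of a zero-mean
    complex signal: C42 = E[|x|^4] - |E[x^2]|^2 - 2*(E[|x|^2])^2."""
    x = x - np.mean(x)
    m2 = np.mean(np.abs(x) ** 2)     # E[|x|^2]
    m20 = np.mean(x ** 2)            # E[x^2]
    m4 = np.mean(np.abs(x) ** 4)     # E[|x|^4]
    return m4 - np.abs(m20) ** 2 - 2 * m2 ** 2

def detect_carrier_in_carrier(x, threshold):
    """Hypothetical decision rule: normalize |C42| by the squared
    signal power and compare against a threshold (in the paper the
    threshold is chosen from an AUC analysis)."""
    stat = np.abs(fourth_order_cumulant(x)) / np.mean(np.abs(x) ** 2) ** 2
    return stat > threshold
```

For a unit-power QPSK signal the normalized statistic is close to 1, while for circular Gaussian noise all fourth-order cumulants vanish; that contrast is what makes cumulant-based detectors attractive.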
  • 43. REFERENCES
[1] Agne, Craig & Cornell, Billy & Dale, Mark & Keams, Ronald & Lee, Frank, (2010) "Shared-spectrum bandwidth efficient satellite communications", Proceedings of the IEEE Military Communications Conference (MILCOM '10), pp. 341-346.
[2] Gouldieff, Vincent & Palicot, Jacques, (2015) "MISO Estimation of Asynchronously Mixed BPSK Sources", Proc. IEEE Conf. EUSIPCO, pp. 369-373.
[3] Feng, Hao & Gao, Yong, (2016) "High-Speed Parallel Particle Filter for PCMA Signal Blind Separation", Radioelectronics and Communications Systems, Vol. 59, No. 10, pp. 305-313.
[4] Semenov, Vasyl, (2018) "Method of Iterative Single-Channel Blind Separation for QPSK Signals", Mathematical and Computer Modelling, Vol. 17, No. 2, pp. 108-116.
[5] Fernandes, Carlos Estevao R. & Comon, Pierre & Favier, Gerard, (2010) "Blind identification of MISO-FIR channels", Signal Processing, Vol. 90, pp. 490-503.
[6] Swami, Ananthram & Sadler, Brian M., (2000) "Hierarchical digital modulation classification using cumulants", IEEE Trans. Commun., Vol. 48, pp. 416-429.
[7] Wunderlich, Adam & Goossens, Bart & Abbey, Craig K., (2016) "Optimal Joint Detection and Estimation That Maximizes ROC-Type Curves", IEEE Transactions on Medical Imaging, Vol. 35, No. 9, pp. 2164-2173.
AUTHORS
Vasyl Semenov received a Ph.D. in Acoustics from the Institute of Hydromechanics of the National Academy of Sciences of Ukraine in 2004. He is currently the chief of the Department of Algorithms at Delta SPE LLC, Kiev, Ukraine, and a doctoral student at the Institute of Cybernetics of the National Academy of Sciences of Ukraine. His main research interests are in the fields of digital signal processing, demodulation, blind separation and recognition systems.
Pavel Omelchenko received a Ph.D. in Mathematics from the Institute of Mathematics of the National Academy of Sciences of Ukraine in 2010. He is currently a member of the Department of Algorithms at Delta SPE LLC, Kiev, Ukraine. His main research interests are in the fields of digital signal processing, demodulation, blind separation and cryptanalysis systems.
Oleh Kruhlyk received an M.Sc. degree in Radioelectronics from the National Technical University of Ukraine "Kiev Polytechnic Institute" in 2017. He is currently a member of the Department of Algorithms at Delta SPE LLC, Kiev, Ukraine, and a Ph.D. student at the National Technical University of Ukraine "Kiev Polytechnic Institute". His main research interests are in the fields of digital signal processing, demodulation and blind separation methods.
  • 44. A DFG PROCESSOR IMPLEMENTATION FOR DIGITAL SIGNAL PROCESSING APPLICATIONS
Ali Shatnawi, Osama Al-Khaleel and Hala Alzoubi
Department of Computer Engineering, Jordan University of Science and Technology, Irbid, Jordan
ABSTRACT
This paper proposes a new scheduling technique for digital signal processing (DSP) applications represented by data flow graphs (DFGs). A hardware implementation in the form of a specialized embedded system is proposed. The scheduling technique achieves the optimal schedule of a given DFG at design time. The optimality criterion targeted in the proposed algorithm is the maximum throughput that can be achieved by the available hardware resources. Each task is presented in the form of an instruction to be executed on the available hardware. The architecture is composed of one or more homogeneous pipelined processing elements, designed to achieve the maximum possible sampling rate for several DSP applications. In this paper, we present a processor implementation of the proposed architecture. It comprises one processing element on which all tasks are processed sequentially. The hardware components are built on an FPGA chip using Verilog HDL. The architecture requires a very small area, measured by the number of slice registers and the number of slice lookup tables (LUTs). The proposed scheduling technique is shown to outperform the retiming technique proposed in the literature by 19.3%.
KEYWORDS
Data Flow Graphs, Task Scheduling, Processor Design, Hardware Description Language
Full Text : https://aircconline.com/csit/papers/vol9/csit90402.pdf
8th International Conference on Advanced Computer Science and Information Technology (ICAIT 2019) - http://airccse.org/csit/V9N04.html
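To make the idea of compile-time DFG scheduling concrete, the sketch below runs tasks in topological order on a single processing element, mirroring the single-PE sequential execution described in the abstract. It is our own minimal illustration; the data structures and the fixed per-task latency model are assumptions, not the paper's optimal scheduler.

```python
from collections import deque

def schedule_dfg(tasks, edges, latency):
    """Sequential list scheduling of a DFG on one processing element.
    tasks:   iterable of node names
    edges:   (u, v) pairs meaning task v depends on task u
    latency: cycles each task occupies the processing element
    Returns a list of (start_time, task) pairs in execution order."""
    succ = {t: [] for t in tasks}
    indeg = {t: 0 for t in tasks}
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    ready = deque(t for t in tasks if indeg[t] == 0)
    time, schedule = 0, []
    while ready:
        t = ready.popleft()
        schedule.append((time, t))
        time += latency[t]           # one PE: tasks run back to back
        for v in succ[t]:            # release successors whose deps are done
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    return schedule
```

With multiple pipelined PEs the same ready-list idea applies, but start times are then constrained by PE availability as well as data dependencies, which is where the throughput optimization of the paper comes in.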
  • 45. REFERENCES
[1] DeFatta D, Lucas J, Hodgkiss W. Digital Signal Processing: A System Design Approach. John Wiley & Sons; 1988.
[2] Trevillyan L. An overview of logic synthesis systems. Conference on Design Automation. IEEE; 1987; 166-172.
[3] Schafer R, Oppenheim A. Digital Signal Processing. 1st ed. Englewood Cliffs, New Jersey: Prentice Hall; 1975; 31-32.
[4] Shatnawi A. Compile-time scheduling of digital signal processing data flow graphs onto homogeneous multiprocessor systems. Ph.D. Thesis, Department of Electrical and Computer Engineering, Concordia University, Montreal, Canada, 1996.
[5] Shatnawi A. Optimal Scheduling of Digital Signal Processing Data-flow Graphs using Shortest-path Algorithms. The Computer Journal. 2002; 45(1):88-100.
[6] Wang G, Wang Y, Liu H, Guo H. HSIP: A Novel Task Scheduling Algorithm for Heterogeneous Computing. Scientific Programming. 2016; 2016:1-11.
[7] Ullah Munir E, Mohsin S, Hussain A. SDBATS: A Novel Algorithm for Task Scheduling in Heterogeneous Computing Systems. Parallel and Distributed Processing Symposium Workshops & PhD Forum (IPDPSW). IEEE; 2013; 43-53.
[8] Liu G, He Y, Guo L. Static Scheduling of Synchronous Data Flow onto Multiprocessors for Embedded DSP Systems. Third International Conference on Measuring Technology and Mechatronics Automation. IEEE; 2011; 338-341.
[9] Zhou N, Qi D, Wang X, Zheng Z, Lin W. A list scheduling algorithm for heterogeneous systems based on a critical node cost table and pessimistic cost table. Concurrency and Computation: Practice and Experience. 2016; 29(5):1-11.
[10] Kang Y, Lin Y. A Recursive Algorithm for Scheduling of Tasks in a Heterogeneous Distributed Environment. 2011 4th International Conference on Biomedical Engineering and Informatics (BMEI). IEEE; 2011; 2099-2103.
[11] Woods R, McAllister J, Lightbody G, Yi Y. FPGA-Based Implementation of Signal Processing Systems. Chichester, United Kingdom: John Wiley & Sons; 2009; 145-169.
[12] Parhi K, Messerschmitt D. Static rate-optimal scheduling of iterative data-flow programs via optimum unfolding. IEEE Transactions on Computers. 1991;
[13] McFarland M, Parker A, Camposano R. Tutorial on high-level synthesis. 25th Design Automation Conference. 1988. p. 330-336.
[14] Hurson A, Milutinović V. Advances in Computers. Waltham, MA: Academic Press; 2015; (96):1-45.
  • 46.
[15] De Groot S, Gerez S, Herrmann O. Range-chart-guided iterative data-flow graph scheduling. IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications. 1992; 39(5):351-364.
AUTHORS
Ali Shatnawi is a professor of computer engineering. He received the B.Sc. and M.Sc. in electrical and computer engineering from the Jordan University of Science and Technology (JUST) in 1989 and 1992, respectively, and the Ph.D. degree in electrical and computer engineering from Concordia University, Canada, in 1996. He has been on the faculty of the Jordan University of Science and Technology since 1996. He served as director of the computer centre 1996-1999, Vice-dean 2001-2002, Dean of IT at Hashemite University 2002-2005 and Dean of Computer and Information Technology, JUST, 2016-2018. His present research includes algorithms and optimization, hardware scheduling, computer architecture and high-level synthesis of DSP applications.
Osama Al-Khaleel is an associate professor of Computer Engineering in the Department of Computer Engineering of Jordan University of Science and Technology (Irbid, Jordan). He received his B.S. in Electrical Engineering from Jordan University of Science and Technology in 1999, and M.Sc. and Ph.D. in Computer Engineering from Case Western Reserve University, Cleveland, OH, USA, in 2003 and 2006, respectively. His main research interests are embedded systems design, reconfigurable computing, computer arithmetic and logic design.
Hala Al-Zu'bi received her B.Sc. in Computer Engineering from Yarmouk University in 2012, and M.Sc. in Computer Engineering from Jordan University of Science & Technology in 2018. Her research interests include computer architecture, hardware description languages, task scheduling and data flow computing.
  • 47. OCCLUSION HANDLED BLOCK-BASED STEREO MATCHING WITH IMAGE SEGMENTATION
Jisu Kim, Cheolhyeong Park, Ju O Kim and Deokwoo Lee
Department of Computer Engineering, Keimyung University, Daegu 42601, Republic of Korea
ABSTRACT
This paper chiefly deals with techniques of stereo vision and particularly focuses on the procedure of stereo matching. In addition, the proposed approach deals with detection of the regions of occlusion. Prior to carrying out stereo matching, image segmentation is conducted in order to achieve precise matching results. In practice, stereo matching algorithms sometimes suffer from insufficient accuracy if occlusion is inherent in the scene of interest. The search for matching regions is conducted based on cross-correlation and on finding the region with the minimum mean square error of the difference between the areas of interest defined in the matching window. The Middlebury dataset is used for the experiments and for comparison with existing results, and the proposed algorithm shows better performance than existing matching algorithms. To evaluate the proposed algorithm, we compare the resulting disparity maps to existing ones.
KEYWORDS
Occlusion, Stereo vision, Segmentation, Matching.
Full Text : https://airccj.org/CSCP/vol9/csit90303.pdf
7th International Conference on Signal Image Processing and Multimedia (SIPM 2019) - http://airccse.org/csit/V9N03.html
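The minimum-MSE window search described in the abstract is, at its core, classic block matching along epipolar lines. Below is a minimal sketch of that baseline under our own assumptions (function name, window size and disparity range are illustrative; the paper's method additionally uses segmentation and occlusion handling, which this sketch omits).

```python
import numpy as np

def block_match_disparity(left, right, block=5, max_disp=16):
    """Baseline block matching for rectified stereo: for each left-image
    pixel, slide a window along the same row of the right image and keep
    the disparity minimizing the sum of squared differences (the
    minimum-MSE criterion)."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1].astype(np.float64)
            best, best_d = np.inf, 0
            for d in range(min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1].astype(np.float64)
                ssd = np.sum((ref - cand) ** 2)  # SSD ∝ window MSE
                if ssd < best:
                    best, best_d = ssd, d
            disp[y, x] = best_d
    return disp
```

In occluded regions no right-image window truly corresponds to the left-image window, so the minimum-SSD disparity is unreliable there; that failure mode is precisely what the segmentation-assisted occlusion detection of the paper targets.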
  • 48. REFERENCES
[1] Hartley, Richard & Zisserman, Andrew (2003) Multiple View Geometry in Computer Vision, Computer Graphics, Image Processing and Robotics, Cambridge University Press.
[2] Mühlmann, Karsten & Maier, Dennis & Hesser, Jürgen & Männer, Reinhard, (2002) "Calculating Dense Disparity Maps from Color Stereo Images, an Efficient Implementation", International Journal of Computer Vision, Vol. 47, No. 1, pp. 79-88.
[3] Xu, Jintao & Yang, Qingxiong & Feng, Zuren, (2016) "Occlusion-Aware Stereo Matching", International Journal of Computer Vision, Vol. 120, No. 3, pp. 256-271.
[4] Kim, Kyung Rae & Kim, Chang Su, (2016) "Adaptive smoothness constraints for efficient stereo matching using texture and edge information", 2016 IEEE International Conference on Image Processing (ICIP), pp. 3429-3433.
[5] Brown, Myron Z & Burschka, Darius & Hager, Gregory D, (2003) "Advances in computational stereo", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 25, No. 8, pp. 993-1008.
[6] Huang, Xiaoshui & Yuan, Chun & Zhang, Jian, (2015) "Graph Cuts Stereo Matching Based on PatchMatch and Ground Control Points Constraint", Advances in Multimedia Information Processing - PCM, Vol. 9315, pp. 14-23.
[7] Mozerov, Mikhail G & Weijer, Joost van de, (2015) "Accurate Stereo Matching by Two-Step Energy Minimization", IEEE Transactions on Image Processing, Vol. 24, No. 3, pp. 1153-1163.
[8] Salehian, Behzad & Fotouhi, Ali M & Raie, Abolghasem A, (2018) "Dynamic programming-based dense stereo matching improvement using an efficient search space reduction technique", Optik, Vol. 160, pp. 1-12.
[9] Zhu, Shiping & Yan, Lina, (2017) "Local stereo matching algorithm with efficient matching cost and adaptive guided image filter", The Visual Computer, Vol. 33, No. 9, pp. 1087-1102.
[10] Kang, C & Kim, J & Lee, S & Nam, K, (1997) "Stereo Matching Using Dynamic Programming with Region Partition", Journal of the Institute of Electronics and Information Engineers, Vol. 20, No. 1, pp. 479-482.
[11] Lowe, David G, (1999) "Object recognition from local scale-invariant features", Proceedings of the Seventh IEEE International Conference on Computer Vision, pp. 1-8.
[12] Bay, Herbert & Tuytelaars, Tinne & Gool, Luc V, (2008) "Speeded-Up Robust Features (SURF)", Computer Vision and Image Understanding, Vol. 110, No. 3, pp. 345-359.
[13] Lee, K-M. & Lin, C-H, (2017) "Image Segmentation and Merge Hierarchical Region using Mean-Shift Tracking Algorithm", Proceedings of Annual Conference of IEIE, pp. 704-706.
[14] Scharstein, D & Szeliski, R, (2002) "A taxonomy and evaluation of dense two-frame stereo correspondence algorithms", International Journal of Computer Vision, Vol. 47, No. 1, pp. 7-42.
  • 49.
[15] Scharstein, D & Szeliski, R, (2003) "High-Accuracy Stereo Depth Maps Using Structured Light", IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), pp. 195-202.
AUTHORS
Jisu Kim is in the Department of Computer Engineering, Keimyung University, Daegu, Republic of Korea. He is currently working on image processing, computer vision, signal processing and machine learning, and is pursuing his M.S. degree in computer engineering.
Cheolhyeong Park is in the Department of Computer Engineering, Keimyung University, Daegu, Republic of Korea. He is currently working on geometric image analysis, computer vision, computer graphics and machine learning, and is in the integrated B.S. and M.S. course in computer engineering.
Ju O Kim is in the Department of Computer Engineering, Keimyung University, Daegu, Republic of Korea. He is currently working on image analysis and processing, and is pursuing a B.S. degree in computer engineering.
Dr. Deokwoo Lee is an Assistant Professor in the Department of Computer Engineering at Keimyung University. He received a B.S. degree in electrical engineering from Kyungpook National University, Daegu, Republic of Korea, and M.S. and Ph.D. degrees from North Carolina State University, Raleigh, NC, USA. He has been working in the areas of computer vision, image processing, signal processing and machine learning; in particular, he has been conducting research on camera calibration, bio-signal analysis and image denoising.
  • 50. ORDER PRESERVING STREAM PROCESSING IN FOG COMPUTING ARCHITECTURES
K. Vidyasankar
Department of Computer Science, Memorial University of Newfoundland, St. John's, Newfoundland, Canada
ABSTRACT
A Fog Computing architecture consists of edge nodes that generate and possibly pre-process (sensor) data, fog nodes that do some processing quickly and perform any actuations that may be needed, and cloud nodes that may perform further detailed analysis for long-term and archival purposes. Processing of a batch of input data is distributed into sub-computations which are executed at the different nodes of the architecture. In many applications, the computations are expected to preserve the order in which the batches arrive at the sources. In this paper, we discuss mechanisms for performing the computations at a node in the correct order, by storing some batches temporarily and/or dropping some batches. The former option causes a delay in processing and the latter option affects Quality of Service (QoS). We bring out the tradeoffs between processing delay and the storage capabilities of the nodes, and also between QoS and the storage capabilities.
KEYWORDS
Fog computing, Order preserving computations, Quality of Service
Full Text : https://airccj.org/CSCP/vol9/csit90104.pdf
3rd International Conference on Computer Science and Information Technology (COMIT 2019) - http://airccse.org/csit/V9N01.html
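The store-or-drop tradeoff described in the abstract can be sketched as a node with a bounded reorder buffer: out-of-order batches are held until their predecessors arrive, and a batch is dropped (a QoS hit) when the buffer is full. This is our own minimal model of that policy, not the paper's mechanism; the class and field names are illustrative.

```python
class OrderPreservingNode:
    """Bounded reorder buffer sketch: batches carry sequence numbers
    and must be processed in sequence order."""

    def __init__(self, capacity):
        self.capacity = capacity  # storage capability of the node
        self.next_seq = 0         # next sequence number to process
        self.buffer = {}          # seq -> batch held for later (delay)
        self.processed = []       # batches emitted in order
        self.dropped = []         # batches sacrificed (QoS loss)

    def arrive(self, seq, batch):
        if seq == self.next_seq:
            self.processed.append(batch)
            self.next_seq += 1
            # flush any buffered successors that are now in order
            while self.next_seq in self.buffer:
                self.processed.append(self.buffer.pop(self.next_seq))
                self.next_seq += 1
        elif len(self.buffer) < self.capacity:
            self.buffer[seq] = batch    # store: adds processing delay
        else:
            self.dropped.append(batch)  # drop: degrades QoS
```

A larger `capacity` shifts the balance toward delay (more batches held, fewer dropped), a smaller one toward QoS loss, which is exactly the tradeoff the paper analyzes.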
  • 51. REFERENCES
[1] F. Bonomi, R. Milito, J. Zhu & S. Addepalli (2012) "Fog computing and its role in the internet of things", Proceedings of the First Edition of the MCC Workshop on Mobile Cloud Computing, MCC '12, pp. 13-16, New York, NY, USA, ACM.
[2] F. Bonomi, R. Milito, P. Natarajan & J. Zhu (2014) "Fog computing: A platform for internet of things and analytics", in N. Bessis and C. Dobre, editors, Big Data and Internet of Things: A Roadmap for Smart Environments, pp. 169-186, Springer International Publishing, Cham.
[3] C. Chang, S. N. Srirama & R. Buyya (2017) "Indie fog: An efficient fog-computing infrastructure for the internet of things", Computer, Vol. 50, No. 9, pp. 92-98.
[4] A. V. Dastjerdi & R. Buyya (2016) "Fog computing: Helping the internet of things realize its potential", Computer, Vol. 49, No. 8, pp. 112-116.
[5] K. Vidyasankar (1991) "Unified theory of database serializability", Fundamenta Informaticae, Vol. 1, No. 2, pp. 145-153.
[6] K. Vidyasankar (2018a) "Distributing computations in fog architectures", TOPIC '18 Proceedings, Association for Computing Machinery.
[7] K. Vidyasankar (2018b) "Atomicity of executions in fog computing architectures", Proceedings of the Twenty-Seventh International Conference on Software Engineering and Data Engineering (SEDE-18).
[8] N. Conway (2008) "Transactions and data stream processing", online publication, pages 1-28. http://neilconway.org/docs/stream_txn.pdf.
[9] J. Meehan, N. Tatbul, S. Zdonik, C. Aslantas, U. Cetintemel, J. Du, T. Kraska, S. Madden, D. Maier, A. Pavlo, M. Stonebraker, K. Tufte & H. Wang (2015) "S-Store: Streaming meets transaction processing", Proc. VLDB Endow., Vol. 8, No. 13, pp. 2134-2145.
[10] I. Botan, P. M. Fischer, D. Kossmann & N. Tatbul (2012) "Transactional stream processing", Proceedings EDBT, ACM Press.
[11] L. Gürgen, C. Roncancio, S. Labbé & V. Olive (2006) "Transactional issues in sensor data management", Proceedings of the 3rd International Workshop on Data Management for Sensor Networks (DMSN '06), Seoul, South Korea, pp. 27-32.
[12] M. Oyamada, H. Kawashima & H. Kitagawa (2013) "Continuous query processing with concurrency control: Reading updatable resources consistently", Proceedings of the 28th Annual ACM Symposium on Applied Computing, SAC '13, pp. 788-794, New York, NY, USA, ACM.
[13] K. Vidyasankar (2017) "On continuous queries in stream processing", The 8th International Conference on Ambient Systems, Networks and Technologies (ANT-2017), Procedia Computer Science, pp. 640-647, Elsevier.
  • 52.
[14] L. Andrade, M. Serrano & C. Prazeres (2018) "The data interplay for the fog of things: A transition to edge computing with IoT", Proceedings of the 2018 IEEE International Conference on Communications (ICC), IEEE Xplore.
[15] S. H. Mortazavi, M. Salehe, C. S. Gomes, C. Phillips & E. de Lara (2017) "CloudPath: A multi-tier cloud computing framework", Proceedings of the Second ACM/IEEE Symposium on Edge Computing, SEC '17, pp. 20:1-20:13, New York, NY, USA, ACM.
[16] storm.apache.org/releases/1.0.6/Transactional-topologies.html.
[17] Jin Li, Kristin Tufte, Vladislav Shkapenyuk, Vassilis Papadimos, Theodore Johnson & David Maier (2008) "Out-of-Order Processing: A new architecture for high-performance stream systems", PVLDB '08, pp. 274-288, VLDB Endowment.
[18] Zhitao Shen, Vikram Kumaran, Michael J. Franklin, Sailesh Krishnamurthy, Amit Bhat, Madhu Kumar, Robert Lerche & Kim Macpherson (2015) "CSA: Streaming engine for internet of things", Data Engineering Bulletin, Vol. 38, No. 4, pp. 39-50, IEEE Computer Society.
[19] F. Xhafa, V. Naranjo, L. Barolli & M. Takizawa (2015) "On streaming consistency of big data stream processing in heterogeneous clusters", Proceedings of the 18th International Conference on Network-Based Information Systems, IEEE Xplore.