The pivotal research described in this work acknowledges the importance of various smoothing techniques for processing 3D human faces from 2.5D range face images. The smoothing techniques were developed and implemented in MATLAB-Simulink for real-time processing on an embedded system. In addition, the significance of the smoothed 2.5D range image over the original face range image is demonstrated, and its time complexity is reported through an array of experiments. Variations in time complexity are also measured under different optimization levels and execution modes. A set of filtering techniques, namely the max, min, median, mean, mid-point, and Gaussian filters, has been designed and illustrated using a Simulink model. The model takes a depth face image (i.e., the range face image) as input in real time and presents the improvement over the original face image. In the design flow, the performance of every block is characterized using range face images from the Frav3D, GavabDB, and Bosphorus databases. The experimental section of this article reports an array of performance analyses for these smoothing techniques across different frameworks.
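The abstract names six smoothing blocks (max, min, median, mean, mid-point, Gaussian). As an illustration of how one of them operates on a depth map, here is a minimal NumPy sketch of a median filter applied to a range image; the Simulink blocks themselves are not reproduced, and the 3x3 window size is an assumption.

```python
import numpy as np

def median_smooth(depth, k=3):
    """Median-smooth a 2.5D range (depth) image with a k x k window.

    `depth` is a 2D array of per-pixel depth values; borders are
    handled by reflecting the image edge.
    """
    pad = k // 2
    padded = np.pad(depth, pad, mode="reflect")
    out = np.empty_like(depth, dtype=float)
    h, w = depth.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

# A flat depth patch with one spike of sensor noise: the median
# filter removes the spike while leaving the flat region intact.
patch = np.full((5, 5), 10.0)
patch[2, 2] = 99.0  # noisy depth sample
smoothed = median_smooth(patch, k=3)
print(smoothed[2, 2])  # 10.0
```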
Improvement of the Recognition Rate by Random Forest (IJERA Editor)
In this paper, we introduce a system for automatic character recognition based on the Random Forest method in unconstrained pictures taken from mobile phone terminals. After some preprocessing of the picture, the text is segmented into lines and then into characters. In the feature-extraction stage, the input data are represented as a vector of primitives of the zoning, diagonal, and horizontal types and of the Zernike moments. These features are linked to pixel densities and are extracted from binary pictures. In the classification stage, we examine four classification methods with two different classifier types, namely the multi-layer perceptron (MLP) and the Random Forest method. After verification tests, the learning and recognition system based on the Random Forest showed good performance on a base of 100 sample pictures.
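The zoning primitives mentioned in the abstract can be sketched as per-zone foreground-pixel densities; the 4x4 grid and NumPy representation below are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def zoning_features(binary_char, grid=(4, 4)):
    """Zoning feature vector for a binary character image: split the
    image into a grid of zones and record the foreground-pixel
    density of each zone (a sketch of zoning-type primitives)."""
    h, w = binary_char.shape
    gy, gx = grid
    feats = []
    for ys in np.array_split(np.arange(h), gy):
        for xs in np.array_split(np.arange(w), gx):
            zone = binary_char[np.ix_(ys, xs)]
            feats.append(zone.mean())  # fraction of 1-pixels in the zone
    return np.array(feats)

# An 8x8 "character" whose left half is ink: left-column zones have
# density 1.0, right-column zones 0.0.
img = np.zeros((8, 8), dtype=int)
img[:, :4] = 1
f = zoning_features(img, grid=(4, 4))
print(f.reshape(4, 4))
```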
Marker Controlled Segmentation Technique for Medical Application (Rushin Shah)
Medical image segmentation is a very important field in medical science. In medical images, edge detection is an important task for recognizing human organs such as the brain, heart, or kidney, and it is an essential pre-processing step in medical image segmentation.
Medical images such as CT, MRI, or X-ray visualize various information about the internal organs, which is very important for doctors' diagnoses as well as for medical teaching, learning, and research.
It is a tough job to locate the internal organs if the images contain noise or the organs have a rough structure.
Performance Evaluation of 2D Adaptive Bilateral Filter For Removal of Noise F... (CSCJournals)
In this paper, we present a performance analysis of the adaptive bilateral filter using the peak signal-to-noise ratio and the mean square error. The filter was evaluated by varying its parameters, namely the half-width values and the standard deviations. In the adaptive bilateral filter, the edge slope is enhanced by transforming the histogram via a range filter with adaptive offset and width. The variance of the range filter can also be adaptive. The filter is applied to improve the sharpness of gray-level and color images by increasing the slope of the edges without producing overshoot or undershoot. The related graphs were plotted and the best filter parameters obtained.
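The two evaluation measures named above (read here as peak signal-to-noise ratio and mean square error) can be computed as follows; the 8-bit peak value of 255 is an assumption.

```python
import numpy as np

def mse(ref, test):
    """Mean square error between a reference and a filtered image."""
    return float(np.mean((ref.astype(float) - test.astype(float)) ** 2))

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB, assuming 8-bit images."""
    m = mse(ref, test)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)

ref = np.zeros((4, 4))
noisy = ref.copy()
noisy[0, 0] = 16.0          # one corrupted pixel
print(mse(ref, noisy))      # 16.0  (16**2 spread over 16 pixels)
print(psnr(ref, noisy))     # roughly 36 dB
```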
Ultrasound images and SAR (synthetic aperture radar) images are usually corrupted by speckle noise, also called granular noise. It is quite a tedious task to remove such noise and analyze the corrupted images. To date, many researchers have worked on removing speckle noise using frequency-domain, temporal, and adaptive methods. Various filters have been developed, such as the mean and median filters, the statistical Lee filter, the statistical Kuan filter, the Frost filter, and the SRAD filter. This paper reviews filters used to remove speckle noise.
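Of the speckle filters listed, the Lee filter is perhaps the simplest to sketch: it blends each pixel toward the local mean according to the local variance. The window size and noise-variance value below are illustrative assumptions, not calibrated values.

```python
import numpy as np

def lee_filter(img, k=3, noise_var=0.05):
    """Minimal sketch of the Lee speckle filter: each output pixel is
    a blend of the local mean and the original value, weighted by how
    much the local variance exceeds the assumed noise variance.
    `noise_var` is an assumed value, not estimated from the data."""
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode="reflect")
    out = np.empty_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            win = p[i:i + k, j:j + k]
            mean, var = win.mean(), win.var()
            weight = var / (var + noise_var)  # ~0 in flat areas, ~1 on edges
            out[i, j] = mean + weight * (img[i, j] - mean)
    return out

# In a perfectly flat region the local variance is 0, so the filter
# returns the local mean, i.e. the flat value itself.
flat = np.full((5, 5), 2.0)
print(lee_filter(flat)[2, 2])  # 2.0
```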
Frequency Domain Blockiness and Blurriness Meter for Image Quality Assessment (CSCJournals)
Image and video compression introduce distortions (artefacts) into the coded image. The most prominent artefacts are blockiness and blurriness. Many existing quality meters are distortion-specific. This paper proposes an objective quality meter for quantifying combined blockiness and blurriness distortions in the frequency domain. The model first applies edge detection and cancellation, then spatial masking, to mimic the characteristics of the human visual system. Blockiness is then estimated by transforming the image into the frequency domain and finding the ratio of the harmonics to the other AC components. Blurriness is determined by comparing the high-frequency coefficients of the reference and coded images, since blurring reduces the high-frequency coefficients. Both distortions are then combined into a single quality metric. The meter is tested on blocky and blurred images from the LIVE image database, achieving a correlation coefficient of 95-96%.
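The blurriness side of the meter rests on the observation that blurring suppresses high-frequency coefficients. A hedged frequency-domain sketch of that idea follows; this is not the paper's actual metric, and the 0.25 relative cutoff is an arbitrary assumption.

```python
import numpy as np

def high_freq_energy(img, cutoff=0.25):
    """Fraction of spectral energy above a relative frequency cutoff.
    Blurring removes high frequencies, so this fraction drops for a
    blurred image relative to its reference."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = img.shape
    fy = np.abs(np.fft.fftshift(np.fft.fftfreq(h)))[:, None]
    fx = np.abs(np.fft.fftshift(np.fft.fftfreq(w)))[None, :]
    mask = np.sqrt(fy ** 2 + fx ** 2) > cutoff
    return float(spec[mask].sum() / spec.sum())

rng = np.random.default_rng(0)
ref = rng.random((32, 32))
# Crude low-pass "coding blur": average the image with shifted copies.
blurred = (ref + np.roll(ref, 1, 0) + np.roll(ref, 1, 1)) / 3.0
print(high_freq_energy(ref) > high_freq_energy(blurred))  # True
```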
The Gabor filter is a powerful way to enhance biometric images such as fingerprint images in order to extract correct features from them; the Gabor filter is also used to extract features directly, as in iris images, and it has sometimes been used for texture analysis. In fingerprint images, the even-symmetric Gabor filter, a contextual (multi-resolution) filter, is used to enhance the fingerprint image by filling small gaps (a low-pass effect) in the ridge direction (black regions) and by increasing the discrimination between ridges and valleys (black and white regions) in the direction orthogonal to the ridge. The proposed method applies the Gabor filter to fingerprint images by translating the fingerprint image into a binary image after applying some simple enhancement methods, in order to partially overcome the time-consuming nature of the Gabor filter.
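The even-symmetric Gabor filter described above can be sketched as a cosine grating under a Gaussian envelope; the size, frequency, and sigma below are illustrative, not tuned for fingerprint ridge spacing.

```python
import numpy as np

def gabor_kernel(size=9, theta=0.0, freq=0.2, sigma=3.0):
    """Even-symmetric Gabor kernel: a cosine grating of frequency
    `freq`, oriented at angle `theta`, under a Gaussian envelope.
    In fingerprint enhancement, `theta` would follow the local
    ridge orientation and `freq` the local ridge frequency."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # along the grating
    yr = -x * np.sin(theta) + y * np.cos(theta)  # across the grating
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * freq * xr)

k = gabor_kernel()
# Even symmetry: the kernel equals its own 180-degree rotation.
print(np.allclose(k, k[::-1, ::-1]))  # True
```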
Presentation on Deformable Model for Medical Image Segmentation (Subhash Basistha)
Introduction to Image Processing
Steps of Image Processing
Types of Image Processing
Introduction to Image Segmentation
Introduction to Medical Image Segmentation
Application of Image Segmentation
Example of Image Segmentation
Need for Deformable Model
What is a Deformable Model?
Types of Deformable Model
Extraction of texture features by using gabor filter in wheat crop disease de... (eSAT Journals)
Abstract
In a country like India, many people depend upon agriculture. Many farmers do not know about the new diseases affecting their farms, and as the diseases change, the disease-control policy also changes. Many farmers observe crop diseases very keenly, but problems arise whenever a new disease strikes the crops. The climate also changes abruptly many times, and for such reasons farmers are unable to recognize the various diseases.
If a farmer is unable to identify a disease quickly, it affects the life of the crop and, indirectly, the total productivity of the farm. As is well known, the world faces many problems due to rapid population growth, so our goal is to increase agricultural productivity using image-processing technology, which can help farmers to a great extent [7].
In this research work, we detect crop disease using an artificial neural network (ANN), which works very effectively. First, a digital image taken with a digital camera is provided. The image is passed first through a Gaussian filter and then through an adaptive median filter to remove the noise present in it. The Gaussian filter removes Gaussian noise, while the adaptive median filter removes impulsive noise; it also reduces distortions present in the image. The image is then passed to the segmentation stage. For image segmentation we chose the CIELAB color-space method to extract the color components properly, and we used a Gabor filter for segmentation. Finally, we distinguish crop diseases on the basis of texture features extracted by the Gabor filter [6].
Key Words: Artificial Neural Networks, Image Preprocessing, Image Acquisition, Feature Extraction, Classification
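The two-stage denoising order described in the abstract (Gaussian filter first, then a median stage for impulsive noise) can be sketched as follows; a plain 3x3 median stands in for the adaptive median filter, and all parameters are illustrative assumptions.

```python
import numpy as np

def gaussian_blur(img, sigma=1.0, radius=2):
    """Separable Gaussian blur (suppresses Gaussian noise)."""
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    g /= g.sum()
    p = np.pad(img.astype(float), radius, mode="reflect")
    # Convolve rows, then columns, with the 1-D kernel.
    rows = np.apply_along_axis(lambda r: np.convolve(r, g, "valid"), 1, p)
    return np.apply_along_axis(lambda c: np.convolve(c, g, "valid"), 0, rows)

def median_pass(img, k=3):
    """Plain k x k median pass standing in for the adaptive median
    filter of the pipeline (suppresses impulsive noise)."""
    pad = k // 2
    p = np.pad(img, pad, mode="reflect")
    h, w = img.shape
    return np.array([[np.median(p[i:i + k, j:j + k]) for j in range(w)]
                     for i in range(h)])

# Pipeline order from the abstract: Gaussian filter, then median stage.
leaf = np.full((6, 6), 100.0)
leaf[3, 3] = 255.0                      # impulsive "salt" pixel
denoised = median_pass(gaussian_blur(leaf))
print(denoised.max() < 255.0)           # True: the impulse is suppressed
```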
A STUDY AND ANALYSIS OF DIFFERENT EDGE DETECTION TECHNIQUES (cscpconf)
In the first study [1], a combination of K-means clustering, the watershed segmentation method, and a Difference In Strength (DIS) map was used to perform image segmentation and edge detection. An initial segmentation is obtained with the K-means clustering technique. Starting from this, two techniques are used: the first is the watershed technique with new merging procedures based on mean intensity value, used to segment the image regions and detect their boundaries; the second is an edge-strength technique that obtains accurate edge maps of the images without using the watershed method. This technique solves the problem of the undesirable over-segmentation produced by the watershed algorithm when it is applied directly to raw image data; in addition, the edge maps obtained have no broken lines anywhere in the image. In the second study, level-set methods are used to implement curve/interface evolution under various forces. In the third study, the main idea is to detect region (object) boundaries and to isolate and extract individual components from a medical image. This is done using active contours to detect regions in a given image, based on techniques of curve evolution, the Mumford-Shah functional for segmentation, and level sets. The images are first classified into different intensity regions based on a Markov random field; regions whose boundaries are not necessarily defined by a gradient are then detected by minimizing an energy of Mumford-Shah type, where, in the level-set formulation, the problem becomes a mean-curvature flow that stops on the desired boundary. The stopping term does not depend on the image gradient, as it does in the classical active contour; the initial level-set curve can be placed anywhere in the image, and interior contours are detected automatically. The final segmentation is one closed boundary per actual region in the image.
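The K-means step that provides the initial segmentation in the first study can be sketched directly on raw intensities; the two-cluster setting, iteration count, and seed below are assumptions for illustration.

```python
import numpy as np

def kmeans_1d(pixels, k=2, iters=20, seed=0):
    """1-D K-means on pixel intensities: the kind of initial
    segmentation used before watershed/edge-strength refinement."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(pixels, size=k, replace=False).astype(float)
    for _ in range(iters):
        # Assign each pixel to the nearest center, then re-estimate.
        labels = np.argmin(np.abs(pixels[:, None] - centers[None, :]), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = pixels[labels == c].mean()
    return labels, np.sort(centers)

# Two clearly separated intensity populations end up in two classes.
pixels = np.array([10.0, 12.0, 11.0, 200.0, 205.0, 198.0])
labels, centers = kmeans_1d(pixels, k=2)
print(labels[0] != labels[3])  # True: dark vs bright pixels split apart
```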
AN IMPROVED IRIS RECOGNITION SYSTEM BASED ON 2-D DCT AND HAMMING DISTANCE TEC... (IJEEE)
This paper proposes a new iris recognition system that implements the integro-differential operator, the Daugman rubber-sheet model, the 2-D DCT, and the Hamming distance to extract features from the iris and match them against the stored database. All these image-processing algorithms have been validated on noisy real iris images and the UBIRIS database.
In this paper, we introduce a system for automatic recognition of Amazigh characters based on the Random Forest method in unconstrained pictures taken from mobile phone terminals. After some preprocessing of the picture, the text is segmented into lines and then into characters. In the feature-extraction stage, the input data are represented as a vector of primitives of the zoning, diagonal, and horizontal types, of Gabor filters, and of the Zernike moments. These features are linked to pixel densities and are extracted from binary pictures. In the classification stage, we examine four classification methods with two different classifier types, namely support vector machines (SVM) and the Random Forest method. After verification tests, the learning and recognition system based on the Random Forest showed good performance on a base of 100 sample pictures.
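The Hamming-distance matching stage of the iris system above can be sketched directly on binary codes; the optional mask argument (for bits flagged as unreliable) is an assumption, not necessarily part of the paper's implementation.

```python
import numpy as np

def hamming_distance(code_a, code_b, mask=None):
    """Fractional Hamming distance between two binary iris codes,
    optionally ignoring bits flagged as unreliable by `mask`
    (True = usable bit). 0.0 means identical codes; two codes from
    the same iris are expected to score well below 0.5."""
    code_a, code_b = np.asarray(code_a, bool), np.asarray(code_b, bool)
    disagree = code_a ^ code_b
    if mask is not None:
        mask = np.asarray(mask, bool)
        return float(disagree[mask].sum() / mask.sum())
    return float(disagree.mean())

a = np.array([1, 0, 1, 1, 0, 0, 1, 0])
b = np.array([1, 0, 0, 1, 0, 1, 1, 0])
print(hamming_distance(a, b))  # 0.25  (2 of 8 bits differ)
```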
BIG DATA-DRIVEN FAST REDUCING THE VISUAL BLOCK ARTIFACTS OF DCT COMPRESSED IM... (IJDKP)
Urban surveillance systems generate huge amounts of video and image data and impose high pressure on the recording disks. Video research is clearly a key area of big-data research. Since videos are composed of images, the degree and efficiency of image compression are of great importance. Although the DCT-based JPEG standard is widely used, it encounters persistent problems; for instance, encoding deficiencies such as block artifacts frequently have to be removed. In this paper, we propose a new, simple but effective method to quickly reduce the visual block artifacts of DCT-compressed images for urban surveillance systems. The simulation results demonstrate that our proposed method achieves better quality than widely used filters while consuming far fewer CPU resources.
Efficient fingerprint image enhancement algorithm based on gabor filter (eSAT Publishing House)
14 offline signature verification based on euclidean distance using support v... (INFOGAIN PUBLICATION)
In this project, a support vector machine is developed for identity verification of offline signatures based on metrics derived through the Euclidean distance. A set of signature samples was collected from 35 different people. Each person provided 15 different copies of his signature, and these samples were scanned to obtain soft copies for training the SVM. The scanned signature images are then subjected to a number of image-enhancement operations such as binarization, complementation, filtering, thinning, edge detection, and rotation. On the basis of the 15 original signature copies from each individual, the Euclidean distance is calculated, and every tested image is compared with the range of Euclidean distances. The values from the ED are fed to the support vector machine, which draws a hyperplane and classifies the signature as original or forged based on a particular feature value.
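The accept/reject idea built on Euclidean distances can be sketched as follows; the mean-reference comparison and the `slack` margin are hypothetical simplifications of the paper's range test, which feeds an SVM rather than a fixed rule.

```python
import numpy as np

def euclidean_distance(feat_a, feat_b):
    """Euclidean distance between two signature feature vectors."""
    return float(np.linalg.norm(np.asarray(feat_a, float) - np.asarray(feat_b, float)))

def within_reference_range(test_feat, reference_feats, slack=1.0):
    """Accept a test signature if its distance to the mean reference
    vector falls inside the range of distances seen among the genuine
    references themselves, plus a hypothetical `slack` margin."""
    refs = np.asarray(reference_feats, float)
    mean = refs.mean(axis=0)
    ref_dists = [euclidean_distance(r, mean) for r in refs]
    d = euclidean_distance(test_feat, mean)
    return d <= max(ref_dists) + slack

genuine = [[1.0, 2.0], [1.1, 2.1], [0.9, 1.9]]  # 3 reference signatures
print(within_reference_range([1.0, 2.0], genuine))  # a genuine-looking sample
print(within_reference_range([9.0, 9.0], genuine))  # a far-off (forged) sample
```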
A Survey on Tamil Handwritten Character Recognition using OCR Techniques (cscpconf)
In today's fast-growing technology, digital recognition plays a wide role and provides considerable scope for research in OCR techniques. Recognition of Tamil handwritten scripts is complicated compared with other, Western language scripts. Nevertheless, many researchers have provided real-time solutions for offline Tamil character recognition. Offline Tamil handwritten document recognition still offers many motivating challenges to researchers. Current research offers many solutions for Tamil handwritten document recognition, yet reasonable accuracy and performance have not been achieved. This paper analyses the various approaches and challenges concerning offline Tamil handwritten character recognition.
A comparison of image segmentation techniques, otsu and watershed for x ray i... (eSAT Journals)
Abstract: The most dangerous and rapidly spreading disease in the world is tuberculosis. In the investigation of suspected tuberculosis (TB), chest radiography is one of the key diagnostic techniques based on medical imaging. Computer-aided diagnosis (CAD) has therefore become popular; many researchers are interested in this area, and different approaches have been proposed for TB detection. Image segmentation is of great importance in most medical imaging, extracting the anatomical structures from images. Many image-segmentation techniques exist in the literature, each with its own advantages and disadvantages. The aim of X-ray segmentation is to subdivide the image into different portions so that it can help in studying the structure of the bone and detecting disorders. The goal of this paper is to review the most important image-segmentation methods, starting from a database composed of real X-ray images. Keywords: chest radiography, computer-aided diagnosis, image segmentation, anatomical structures, real X-rays.
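Among the techniques compared, Otsu's method is easy to sketch: it scans all candidate thresholds and keeps the one that maximizes the between-class variance of the intensity histogram. The 256-bin setting below assumes 8-bit intensities.

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Otsu's method: pick the threshold maximizing the between-class
    variance of the intensity histogram."""
    hist, _ = np.histogram(img.ravel(), bins=bins, range=(0, bins))
    hist = hist.astype(float) / hist.sum()
    levels = np.arange(bins)
    best_t, best_var = 0, -1.0
    for t in range(1, bins):
        w0, w1 = hist[:t].sum(), hist[t:].sum()  # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (levels[:t] * hist[:t]).sum() / w0  # class means
        mu1 = (levels[t:] * hist[t:]).sum() / w1
        between = w0 * w1 * (mu0 - mu1) ** 2      # between-class variance
        if between > best_var:
            best_t, best_var = t, between
    return best_t

# Bimodal "X-ray" intensities: dark background (~30), bright bone (~200).
img = np.concatenate([np.full(500, 30), np.full(500, 200)])
t = otsu_threshold(img)
print(30 < t <= 200)  # True: the threshold separates the two modes
```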
HARDWARE ACCELERATION OF THE GIPPS MODEL FOR REAL-TIME TRAFFIC SIMULATION (ijesajournal)
Traffic simulation software is becoming increasingly popular as more cities worldwide use it to better manage their crowded traffic networks. An important requirement for such software is the ability to produce accurate results in real time, which requires great computational resources. This work proposes an ASIC-based hardware-accelerated approach for the AIMSUN traffic simulator, taking advantage of repetitive tasks in the algorithm. Different system configurations using this accelerator are also discussed. Compared with the traditional software simulator, it has been found to improve performance by as much as 9x when using a single processing element, or more depending on the chosen hardware configuration.
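The repetitive per-vehicle computation that such an accelerator targets is the Gipps speed update, commonly stated as the minimum of a free-acceleration term and a safe-braking term. The sketch below uses that textbook form with illustrative parameters; it is not AIMSUN's implementation.

```python
import math

def gipps_speed(v, v_lead, gap, a=1.7, b=-3.0, b_lead=-3.0,
                v_des=20.0, tau=1.0):
    """One step of a Gipps-style car-following update. `gap` is the
    effective gap to the leader's rear (m); a is max acceleration,
    b and b_lead are (negative) braking rates, v_des the desired
    speed, tau the reaction time. All values are illustrative."""
    # Free term: accelerate toward the desired speed v_des.
    v_acc = v + 2.5 * a * tau * (1 - v / v_des) * math.sqrt(0.025 + v / v_des)
    # Safe term: do not exceed the speed from which the follower can
    # still stop behind the leader, given both braking rates.
    disc = b * b * tau * tau - b * (2 * gap - v * tau - v_lead * v_lead / b_lead)
    v_safe = b * tau + math.sqrt(max(disc, 0.0))
    # A real simulator would also clamp the result at 0.
    return min(v_acc, v_safe)

# Large gap at moderate speed: the free term governs, the car speeds up.
print(gipps_speed(v=10.0, v_lead=10.0, gap=100.0) > 10.0)  # True
# Tiny gap behind a stopped leader: the safe term forces hard braking.
print(gipps_speed(v=10.0, v_lead=0.0, gap=2.0) < 10.0)     # True
```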
Design thinking for educators makes it possible to create spaces and resolve problems collaboratively. Through five phases, participants experience a journey of designing prototypes responsive to the demands of the groups.
Gabor filter is a powerful way to enhance biometric images like fingerprint images in order to extract correct features from these images, Gabor filter used in extracting features directly asin iris images, and sometimes Gabor filter has been used for texture analysis. In fingerprint images The even symmetric Gabor filter is contextual filter or multi-resolution filter will be used to enhance fingerprint imageby filling small gaps (low-pass effect) in the direction of the ridge (black regions) and to increase the discrimination between ridge and valley (black and white regions) in the direction, orthogonal to the ridge, the proposed method in applying Gabor filter on fingerprint images depending on translated fingerprint image into binary image after applying some simple enhancing methods to partially overcome time consuming problem of the Gabor filter.
Presentation on deformable model for medical image segmentationSubhash Basistha
Introduction to Image Processing
Steps of Image Processing
Types of Image Processing
Introduction to Image Segmentation
Introduction to Medical Image Segmentation
Application of Image Segmentation
Example of Image Segmentation
Need for Deformable Model
What is Deformable Model??
Types of Deformable Model
Extraction of texture features by using gabor filter in wheat crop disease de...eSAT Journals
Abstract
Like country India, there are so many people depending upon agriculture. In this area, many farmers don’t know about new
diseases which are impacting on their farm. As the disease changes, the disease control policy also changes. So many farmers
have very sharp observation on crop diseases, but whenever there is new diseases fall on crops then problems occur. Climate also
changes instantly many of times, because of such reasons farmers unable to understand various diseases.
If farmer unable to predict that diseases quickly then it will affect life of crops. Indirectly it gets affects on total productivity of
farm. As we are well known about that world facing lot of problems due rapid growth in population. So our goal is to increase
agricultural productivity using image processing technology which can help farmer in great extent [7].
In this research work, we are trying that crop disease using Artificial neural network (ANN) which work very effectively. First of
all, we have provided an digital image which is taken by digital camera. That image given to Gaussian filter firstly then
transferred to adaptive median filter to filter out noise present inside image. Gaussian filter removes Gaussian noise which is
present inside image. Adaptive noise filter removes impulsive noise which is present inside image. Also it will reduce distortions
which are present inside images. Then image transferred to segmentation part. In image segmentation we have choose CIELAB
color space method to extract color components properly. For segmentation we have used Gabor filter. After this we distinguish
crop diseases on the basis of texture features which are extracted by Gabor filter [6].
Key Words: Artificial Neural Networks, Image preprocessing, Image Acquisition, and Feature Extraction,
classification etc…
A STUDY AND ANALYSIS OF DIFFERENT EDGE DETECTION TECHNIQUEScscpconf
In the first study [1], a combination of K-means, watershed segmentation method, and Difference In Strength (DIS) map were used to perform image segmentation and edge detection
tasks. We obtained an initial segmentation based on K-means clustering technique. Starting from this, we used two techniques; the first is watershed technique with new merging
procedures based on mean intensity value to segment the image regions and to detect their boundaries. The second is edge strength technique to obtain accurate edge maps of our images without using watershed method. In this technique: We solved the problem of undesirable over segmentation results produced by the watershed algorithm, when used directly with raw data images. Also, the edge maps we obtained have no broken lines on entire image. In the 2nd study level set methods are used for the implementation of curve/interface evolution under various forces. In the third study the main idea is to detect regions (objects) boundaries, to isolate and extract individual components from a medical image. This is done using an active contours to detect regions in a given image, based on techniques of curve evolution, Mumford–Shah functional for segmentation and level sets. Once we classified our images into different intensity regions based on Markov Random Field. Then we detect regions whose boundaries are not necessarily defined by gradient by minimize an energy of Mumford–Shah functional forsegmentation, where in the level set formulation, the problem becomes a mean-curvature which will stop on the desired boundary. The stopping term does not depend on the gradient of the image as in the classical active contour. The initial curve of level set can be anywhere in the image, and interior contours are automatically detected. The final image segmentation is one
closed boundary per actual region in the image.
AN IMPROVED IRIS RECOGNITION SYSTEM BASED ON 2-D DCT AND HAMMING DISTANCE TEC...IJEEE
This paper proposes a new iris recognition system that implements Integro-Differential, Daugman Rubber Sheet Model, 2-D DCT, Hamming Distance to exact features from the iris and matching it with the sorted database.All these image-processing algorithms have been validated on noised real iris images & UBIRIS database
In this paper; we introduce a system of automatic recognition of Amazigh characters based on the Random Forest Method in non-constrictive pictures that are stemmed from the terminals Mobile phone. After doing some pretreatments on the picture, the text is segmented into lines and then into characters. In the stage of characteristics extraction, we are representing the input data into the vector of primitives of the zoning types, of diagonal, horizontal, Gabor filters and of the Zernike moment. These characteristics are linked to pixels’ densities and they are extracted on binary pictures. In the classification stage, we examine four classification methods with two different classifiers types namely the Support vector machines (SVM) and the Random Forest method. After some checking tests, the system of learning and recognition which is based on the Random Forest has shown a good performance on a basis of 100 models of pictures.
BIG DATA-DRIVEN FAST REDUCING THE VISUAL BLOCK ARTIFACTS OF DCT COMPRESSED IM...IJDKP
The Urban Surveillance Systems generate huge amount of video and image data and impose high pressure
onto the recording disks. It is obvious that the research of video is a key point of big data research areas.
Since videos are composed of images, the degree and efficiency of image compression are of great
importance. Although the DCT based JPEG standard are widely used, it encounters insurmountable
problems. For instance, image encoding deficiencies such as block artifacts have to be removed frequently.
In this paper, we propose a new, simple but effective method to fast reduce the visual block artifacts of DCT
compressed images for urban surveillance systems. The simulation results demonstrate that our proposed
method achieves better quality than widely used filters while consuming much less computer CPU
resources.
International Journal of Engineering Research and Development (IJERD)IJERD Editor
Efficient fingerprint image enhancement algorithm based on gabor filtereSAT Publishing House
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of engineering and technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of engineering and technology. We bring together scientists, academicians, field engineers, scholars and students of related fields of engineering and technology.
14 offline signature verification based on euclidean distance using support v...INFOGAIN PUBLICATION
In this project, a support vector machine is developed for offline signature identity verification based on metrics derived from Euclidean distance. Signature samples were collected from 35 different people. Each person provided 15 different signature copies, which were scanned into softcopies to train the SVM. The scanned signature images were then subjected to a number of image-enhancement operations such as binarization, complementation, filtering, thinning, edge detection and rotation. On the basis of the 15 original signature copies from each individual, a Euclidean-distance range is calculated, and every tested image is compared against that range. The Euclidean-distance values are fed to the support vector machine, which draws a hyperplane and classifies the signature as original or forged based on a particular feature value.
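The core idea, distance from a writer's reference template as the feature an SVM separates, can be sketched in a few lines. Everything below is a toy illustration under stated assumptions: the 64-dimensional feature vectors, the noise levels for genuine and forged samples, and the linear kernel are all hypothetical, not the paper's settings.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Toy feature vectors: 15 genuine signatures cluster tightly around a
# reference template; 15 forgeries scatter farther away.
reference = rng.random(64)
genuine = reference + rng.normal(0, 0.05, (15, 64))
forged = reference + rng.normal(0, 0.4, (15, 64))

def ed_feature(samples, ref):
    """Euclidean distance of each sample from the reference template."""
    return np.linalg.norm(samples - ref, axis=1, keepdims=True)

X = np.vstack([ed_feature(genuine, reference), ed_feature(forged, reference)])
y = np.array([1] * 15 + [0] * 15)      # 1 = genuine, 0 = forged

clf = SVC(kernel="linear").fit(X, y)   # hyperplane over the distance value
print(clf.score(X, y))
```

With a one-dimensional distance feature the SVM's hyperplane reduces to a learned threshold, which is essentially the "range of Euclidean distance" comparison the abstract describes.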
A Survey on Tamil Handwritten Character Recognition using OCR Techniquescscpconf
In today’s fast-growing technology, digital recognition plays a wide role and offers broad scope for research on OCR techniques. Recognition of Tamil handwritten scripts is complicated compared to Western-language scripts, yet many researchers have provided real-time solutions for offline Tamil character recognition. Offline Tamil handwritten document recognition still presents many motivating challenges: although current research offers many solutions, reasonable accuracy and performance have not yet been achieved. This paper analyses the various approaches and challenges concerning offline Tamil handwritten character recognition.
A comparison of image segmentation techniques, otsu and watershed for x ray i...eSAT Journals
Abstract: Tuberculosis is among the most dangerous and most rapidly spreading diseases in the world. In investigating suspected tuberculosis (TB), chest radiography is the key diagnostic technique based on medical imaging, so computer-aided diagnosis (CAD) has become popular, many researchers are interested in this area, and different approaches have been proposed for TB detection. Image segmentation is of great importance in most medical imaging, as it extracts the anatomical structures from images. Many image segmentation techniques exist in the literature, each with its own advantages and disadvantages. The aim of X-ray segmentation is to subdivide the image into different portions so that it can help in studying the structure of the bone and in detecting disorders. The goal of this paper is to review the most important image segmentation methods, starting from a database composed of real X-ray images. Keywords: chest radiography, computer-aided diagnosis, image segmentation, anatomical structures, real X-rays.
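Of the two segmentation techniques named in the title, Otsu's method is simple enough to sketch directly: it picks the grey level that maximises the between-class variance of the histogram. The implementation below is a self-contained illustration on a synthetic bimodal image, not the paper's code; the pixel ranges are invented for the demo.

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method: the grey level maximising between-class variance."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                 # class-0 probability up to each level
    mu = np.cumsum(p * np.arange(256))   # cumulative mean
    mu_t = mu[-1]                        # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b = np.nan_to_num(sigma_b)     # zero out the 0/0 endpoints
    return int(np.argmax(sigma_b))

# Bimodal toy image: a dark region (e.g. lung field) on a bright background.
rng = np.random.default_rng(2)
img = np.where(rng.random((64, 64)) < 0.5,
               rng.integers(30, 60, (64, 64)),
               rng.integers(180, 220, (64, 64))).astype(np.uint8)
t = otsu_threshold(img)
print(t)   # falls between the two intensity modes
```

Watershed segmentation, the other method in the title, additionally needs gradient computation and marker flooding, which is why it is usually taken from a library rather than written inline.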
HARDWARE ACCELERATION OF THE GIPPS MODEL FOR REAL-TIME TRAFFIC SIMULATIONijesajournal
Traffic simulation software is becoming increasingly popular as more cities worldwide use it to better manage their crowded traffic networks. An important requirement for such software is the ability to produce accurate results in real time, which requires great computational resources. This work proposes an ASIC-based hardware-accelerated approach for the AIMSUN traffic simulator, taking advantage of repetitive tasks in the algorithm. Different system configurations using this accelerator are also discussed. Compared with the traditional software simulator, the approach has been found to improve performance by as much as 9x when using a single processing element, or more depending on the chosen hardware configuration.
Design thinking for educators makes it possible to create spaces and resolve problems collaboratively. Through five phases, participants experience a journey of prototyping that responds to the demands of their groups.
HARDWARE/SOFTWARE CO-DESIGN OF A 2D GRAPHICS SYSTEM ON FPGAijesajournal
Embedded systems in several applications require a graphics system to display application-specific information. Yet commercial graphics cards for embedded systems either incur high costs or are inconvenient to use; furthermore, they tend to become obsolete quickly due to advances in display technology. FPGAs, on the other hand, provide reconfigurable hardware resources that can be used to implement a graphics system and can be reconfigured to meet the ever-evolving requirements of graphics systems. Motivated by this fact, this study considers the design and implementation of a 2D graphics system on an FPGA. The proposed graphics system is composed of a CPU IP core, peripheral IP cores (Bresenham, BitBLT, DDR Memory Controller, and VGA) and a PLB bus to which the CPU and all peripheral IP cores are attached. Furthermore, graphics drivers and APIs are developed to complete the whole graphics-creation process.
DESIGN CHALLENGES IN WIRELESS FIRE SECURITY SENSOR NODES ijesajournal
A simple hardware circuit designed around different kinds of fire sensors enables every user to deploy this wireless fire-security system. The challenges in designing nodes with various types of fire sensors are discussed, and methods to overcome the design problems are analyzed. The circuit is interfaced with different sensor types to sense fire sources such as gas leakage, smoke, and heat. The cost, circuit components, design requirements and power requirements of the sensor node are minimized, and methods to improve the system's fire-detection quality are analyzed. The system is fully controlled by a PIC microcontroller, to which all sensors and detectors are connected through various interface circuits. The PIC microcontroller continuously monitors all sensors; if it senses a security problem, it sends the information wirelessly to the PC central monitoring station over a short distance of 300 m indoors / 1500 m outdoors using ZigBee technology. Gas, light, smoke-detector, IR, temperature and humidity, and fire sensors are interfaced with the microcontroller to detect abnormal fire conditions in the environment in all possible ways.
SYMMETRICAL WEIGHTED SUBSPACE HOLISTIC APPROACH FOR EXPRESSION RECOGNITIONijcsit
Human facial expression is a cognitive attribute used to convey opinions to others. This paper mainly evaluates the performance of appearance-based holistic subspace methods built on Principal Component Analysis (PCA). In this work, texture features are extracted from face images using a Gabor filter. It was observed that the extracted texture feature space has high dimensionality and a large amount of redundant content; hence training, testing and classification take longer, and the expression-recognition accuracy is reduced. To overcome this problem, the Symmetrical Weighted 2DPCA (SW2DPCA) subspace method is introduced: the extracted feature space is projected into a subspace using SW2DPCA, which is formed by applying weighting principles to the odd and even symmetrical decomposition spaces of the training sample sets. Conventional PCA and 2DPCA yield lower recognition rates because of large variations in expression and lighting and the many redundant variants in the feature space. The proposed SW2DPCA method mitigates this problem by reducing redundant content and discarding unequal variants. In this work, the well-known JAFFE database is used for experiments with the proposed SW2DPCA algorithm. The experimental results show that the facial-expression recognition accuracy of the GF+SW2DPCA feature-fusion subspace method increased to 95.24%, compared to the 2DPCA method.
Validation Study of Dimensionality Reduction Impact on Breast Cancer Classifi...ijcsit
A fundamental problem in machine learning is identifying the most representative subset of features from which to construct a predictive model for a classification task. This paper presents a validation study of the effect of dimensionality reduction on the classification accuracy of mammographic images. The studied dimensionality-reduction methods were locality-preserving projection (LPP), locally linear embedding (LLE), isometric mapping (ISOMAP) and spectral regression (SR). We achieved high classification rates: in some combinations the classification rate was 100%, and in most cases it was about 95%. It was also found that the classification rate increases with the size of the reduced space, and that the optimal space dimension is 60. We then validated these results by measuring validation indices such as the Xie-Beni index, the Dunn index and the alternative Dunn index; these measurements confirm that the optimal reduced-space dimension is d = 60.
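The reduce-then-classify pipeline the study validates can be sketched with off-the-shelf components. The snippet below is only an illustration of the workflow: it substitutes the scikit-learn digits dataset for the mammographic images, uses ISOMAP (one of the four methods listed) with a 20-dimensional target space rather than the paper's optimal d = 60, and a k-NN classifier, all of which are assumptions for the demo.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.manifold import Isomap
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Stand-in data: sklearn digits instead of the mammographic images.
X, y = load_digits(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

iso = Isomap(n_components=20).fit(Xtr)      # nonlinear reduction 64-D -> 20-D
clf = KNeighborsClassifier().fit(iso.transform(Xtr), ytr)
acc = clf.score(iso.transform(Xte), yte)    # accuracy in the reduced space
print(round(acc, 3))
```

Sweeping `n_components` and plotting `acc` against it is exactly how a curve like the paper's "accuracy rises until d = 60" finding would be produced.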
Most face recognition algorithms can achieve a high level of accuracy when the image is acquired under well-controlled conditions. The face should be still during the acquisition process; otherwise, the resulting image will be blurred and hard to recognize. Forcing people to stand still during the process is impractical, so recognition will very likely have to be performed on a blurred image, and it is important to understand the relation between image blur and recognition accuracy. The ORL database was used in this study: all images were in PGM format, 92 × 112 pixels, from forty different persons with ten images per person. The images were randomly divided into training and testing datasets with a 50-50 ratio, and singular value decomposition was used to extract the features. The images in the testing datasets were artificially blurred to represent linear motion, and recognition was performed. The blurred images were also filtered using various methods, and the recognition accuracy on the blurred faces and on the filtered faces was compared. The numerical study suggests that, at best, the image-improvement processes can raise the recognition accuracy by less than five percent.
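Two ingredients of this study are easy to sketch: simulating linear-motion blur as a 1-D convolution, and taking singular values as a compact image descriptor. The code below is a minimal illustration under assumptions (horizontal blur direction, a 9-pixel kernel, the top 10 singular values, and a random stand-in image at the ORL resolution), not the study's exact setup.

```python
import numpy as np

def motion_blur(img, length=9):
    """Horizontal linear-motion blur: convolve each row with a flat kernel."""
    k = np.ones(length) / length
    return np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)

def svd_features(img, n=10):
    """Use the n largest singular values as a compact face descriptor."""
    s = np.linalg.svd(img, compute_uv=False)
    return s[:n]

rng = np.random.default_rng(3)
face = rng.random((112, 92))            # stand-in for a 92 x 112 ORL image
blurred = motion_blur(face)

# Blur shifts the singular-value spectrum, which is what degrades matching.
d = np.linalg.norm(svd_features(face) - svd_features(blurred))
print(d > 0)
```

Comparing such descriptor distances before and after a deblurring filter is the kind of measurement behind the "less than five percent improvement" conclusion.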
EV-SIFT - An Extended Scale Invariant Face Recognition for Plastic Surgery Fa...IJECEIAES
This paper presents a new technique called Entropy-based SIFT (EV-SIFT) for accurate face recognition after plastic surgery. The feature extracts the key points and the volume of the scale-space structure for which the information rate is determined. Since entropy is a higher-order statistical feature, this makes the method least affected by uncertain variations in the face. The EV-SIFT features are applied to a support vector machine for classification. The normal SIFT feature extracts key points based on the contrast of the image, and the V-SIFT feature extracts key points based on the volume of the structure, whereas EV-SIFT provides both contrast and volume information. Thus EV-SIFT performs better than PCA, normal SIFT and V-SIFT based feature extraction.
International Journal of Artificial Intelligence & Applications (IJAIA)gerogepatton
PERFORMANCE EVALUATION OF BLOCK-SIZED ALGORITHMS FOR MAJORITY VOTE IN FACIAL ...ijaia
Facial recognition (FR) is a pattern recognition problem in which images can be considered as matrices of pixels. Many challenges affect the performance of face recognition, including illumination variation, occlusion, and blurring. In this paper, a few preprocessing techniques are suggested to handle the illumination-variation problem, and other phases of the face recognition problem, such as feature extraction and classification, are discussed. Preprocessing techniques like Histogram Equalization (HE), Gamma Intensity Correction (GIC), and Regional Histogram Equalization (RHE) are tested on the AT&T database. For feature extraction, methods such as Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Independent Component Analysis (ICA), and Local Binary Pattern (LBP) are applied, with a Support Vector Machine (SVM) as the classifier. Both holistic and block-based methods are tested on the AT&T database. For twelve different combinations of preprocessing, feature extraction, and classification methods, experiments with various block sizes are conducted to assess computational performance and recognition accuracy. Using the block-based method, 100% accuracy is achieved with the combination of GIC preprocessing, LDA feature extraction, and SVM classification with 2x2 block sizing, while the holistic method yields a maximum accuracy of 93.5%. The block-sized algorithm performs better than the holistic approach under poor lighting conditions, and the SVM radial basis function performs extremely well on the AT&T dataset for both holistic and block-based approaches.
International Journal of Computer Science, Engineering and Information Techno...IJCSEIT Journal
FPGA ARCHITECTURE FOR FACIAL-FEATURES AND COMPONENTS EXTRACTIONijcseit
Several methods for detecting the face and extracting facial features and components exist in the literature. These methods differ in their complexity, performance, the type and nature of the images they handle, and the targeted application. Facial features and components are used in security applications, robotics, and assistance for the disabled; we use them to determine the state of alertness and fatigue for medical diagnosis. In this work we use images with a plain-colored background, different from the skin color, containing a single face. We are interested in an FPGA implementation of this application, which must meet two constraints: execution time and FPGA resources. We have selected and combined a face detection algorithm based on skin detection (using the RGB space) with a facial-feature extraction algorithm based on gradient tracking and a geometric model.
Fingerprint image enhancement is the key process in IAFIS systems. To reduce the false-identification ratio and to supply good fingerprint images to IAFIS systems for exact identification, fingerprint images are generally enhanced. A filtering process tries to filter out noise from the input image and to emphasize the low, high and directional spatial-frequency components of the image. This paper presents an experimental summary of enhancing fingerprint images using Gabor filters. The frequency, width and window-domain filter ranges are fixed, and the orientation angle alone is varied over 0, π/4, π/2 and 3π/4 radians. The experimental results show that the Gabor filter enhances the fingerprint image better than other filtering methods and extracts features.
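A Gabor filter bank at the four orientations the paper varies (0, π/4, π/2, 3π/4) can be built directly from the Gabor equation. The sketch below is a generic implementation, not the paper's tuned filters: the kernel size, sigma, wavelength and aspect ratio are illustrative assumptions, and the ridge image is synthetic.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(theta, ksize=15, sigma=3.0, lambd=8.0, gamma=0.5):
    """Real-valued Gabor kernel oriented at angle theta (radians)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)       # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    env = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    return env * np.cos(2 * np.pi * xr / lambd)      # Gaussian x sinusoid

# Filter bank at the four orientations used in the experiments.
bank = {t: gabor_kernel(t) for t in (0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)}

# Vertical ridges with an 8-pixel period respond most strongly to the
# 0-radian kernel, whose wavelength matches the ridge spacing.
xx = np.tile(np.arange(32), (32, 1))
ridges = np.cos(2 * np.pi * xx / 8.0)
resp = {t: np.abs(convolve2d(ridges, k, mode="same", boundary="symm")).max()
        for t, k in bank.items()}
print(max(resp, key=resp.get))   # -> 0.0
```

In a full enhancement pipeline the filter orientation is chosen per block from the local ridge orientation, so each region of the fingerprint is filtered along its own ridge flow.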
International Journal of Computational Engineering Research(IJCER)ijceronline
International Journal of Computational Engineering Research (IJCER) is an international online journal published monthly in English. The journal publishes original research work that contributes significantly to furthering scientific knowledge in engineering and technology.
Multimodal Approach for Face Recognition using 3D-2D Face Feature FusionCSCJournals
3D face recognition has been an area of interest among researchers for the past few decades, especially in pattern recognition. Its main advantage is the availability of geometrical information about the face structure, which is more or less unique to a subject. This paper focuses on the problem of person identification using 3D face data. Using unregistered 3D face data for feature extraction significantly increases the operational speed of a system with a large database enrollment. In this work, unregistered 3D face data are fed to a classifier in multiple spectral representations of the same data, obtained with the Discrete Fourier Transform (DFT) and the Discrete Cosine Transform (DCT). The face recognition accuracy obtained when the feature extractors are used individually is evaluated; since depth information alone in different spectral representations was not sufficient to increase the recognition rate, a fusion of texture and depth information of the face is proposed. Fusion of the matching scores shows that recognition accuracy can be improved significantly by fusing the scores of multiple representations. The FRAV3D database is used to test the algorithm.
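The two building blocks here, a spectral representation of the depth map and score-level fusion, can be sketched compactly. The snippet below is an illustration only: the low-frequency DFT crop size, the distance-to-similarity mapping, the equal fusion weights, and the stand-in texture score are all assumptions, not the paper's parameters.

```python
import numpy as np

def dft_features(depth_map, n=16):
    """Low-frequency DFT magnitudes as a spectral descriptor of a depth map."""
    spec = np.abs(np.fft.fft2(depth_map))
    return spec[:n, :n].ravel()

def match_score(f1, f2):
    """Similarity in (0, 1] from Euclidean distance between feature vectors."""
    return 1.0 / (1.0 + np.linalg.norm(f1 - f2))

def fuse(depth_score, texture_score, w=0.5):
    """Weighted sum-rule fusion of depth and texture matching scores."""
    return w * depth_score + (1 - w) * texture_score

rng = np.random.default_rng(4)
depth_a = rng.random((64, 64))                        # enrolled depth map
depth_b = depth_a + rng.normal(0, 0.01, (64, 64))     # probe of same subject
s_depth = match_score(dft_features(depth_a), dft_features(depth_b))
s_tex = 0.8                                           # stand-in texture score
print(round(fuse(s_depth, s_tex), 3))
```

Swapping `dft_features` for a DCT-based extractor and fusing all the resulting scores gives the multiple-representation fusion the abstract reports as the accuracy win.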
Local Region Pseudo-Zernike Moment- Based Feature Extraction for Facial Recog...aciijournal
In the domain of image processing, face recognition is one of the best-known research fields. When humans have very similar biometric properties, as identical twins do, face recognition is considered a challenging problem. In this paper, the AdaBoost method is used to detect the facial area of the input image, after which the facial area is divided into local regions. Finally, a new and efficient feature extractor for identical twins, based on the geometric moment, is applied to the local regions of the face image. The feature extractor used is the Pseudo-Zernike Moment (PZM), which is employed inside the local regions of the facial area of identical-twin images. To evaluate the proposed method, two datasets, Twins Days Festival and Iranian Twin Society, were collected; they include scaled and rotated facial images of identical twins under different illuminations. The experimental results demonstrate the ability of the proposed method to recognize a pair of identical twins in different situations such as rotation, scaling and changing illumination.
DESIGN OF AN EMBEDDED SYSTEM: BEDSIDE PATIENT MONITORijesajournal
Embedded systems, ranging from tiny microcontroller-based sensor devices to mobile smartphones, have a vast variety of applications. However, the literature contains no up-to-date system-level design of embedded hardware and software; academic publications mainly focus on improving specific features of embedded software or hardware and on embedded-system designs for specific applications. Moreover, commercially available embedded systems are not disclosed for researchers to examine. Therefore, in this paper we first present how to design a state-of-the-art embedded system using emerging hardware and software technologies. Bedside patient-monitor devices used in the intensive care units of hospitals are also classified as embedded systems and run sophisticated software and algorithms for better diagnosis of diseases. We reveal the architecture of our commercially available bedside patient monitor to provide a design example of embedded systems relating to these emerging technologies.
PIP-MPU: FORMAL VERIFICATION OF AN MPU-BASED SEPARATION KERNEL FOR CONSTRAINED...ijesajournal
Pip-MPU is a minimalist separation kernel for constrained devices (with scarce memory and power resources). In this work, we demonstrate high assurance of Pip-MPU's isolation property through formal verification. Pip-MPU offers user-defined, on-demand, multiple isolation levels guarded by the Memory Protection Unit (MPU). It derives from the Pip protokernel, with a full code refactoring to adapt to the constrained environment, and targets equivalent security properties. The proofs verify that the memory blocks loaded in the MPU adhere to the global partition-tree model. We provide the basis of the MPU formalisation and demonstrate the formal verification strategy on two representative kernel services. The publicly released proofs have been implemented and checked using the Coq Proof Assistant for three kernel services, representing around 10,000 lines of proof. To our knowledge, this is the first formal verification of an MPU-based separation kernel. The verification process helped discover a critical isolation-related bug.
International Journal of Embedded Systems and Applications (IJESA)ijesajournal
International Journal of Embedded Systems and Applications (IJESA) is a quarterly open access peer-reviewed journal that publishes articles which contribute new results in all areas of the Embedded Systems and applications. The goal of this journal is to bring together researchers and practitioners from academia and industry to focus on understanding Embedded Systems and establishing new collaborations in these areas.
Authors are solicited to contribute to the journal by submitting articles that illustrate research results, projects, surveying works and industrial experiences that describe significant advances in the areas of Embedded Systems & applications.
Call for papers -15th International Conference on Wireless & Mobile Network (...ijesajournal
The 15th International Conference on Wireless & Mobile Network (WiMo 2023) is dedicated to addressing the challenges in the areas of wireless and mobile networks. The conference looks for significant contributions to wireless and mobile computing in both theoretical and practical aspects. The wireless and mobile computing domain emerges from the integration of personal computing, networks, communication technologies, cellular technology, and Internet technology, and modern applications are emerging in the areas of mobile ad hoc networks and sensor networks. The conference is intended to cover contributions in both the design and the analysis of mobile, wireless, ad-hoc, and sensor networks. Its goal is to bring together researchers and practitioners from academia and industry to focus on advanced wireless and mobile computing concepts and to establish new collaborations in these areas.
Authors are solicited to contribute to the conference by submitting articles that illustrate research results, projects, surveys and industrial experiences describing significant advances in the following areas, but not limited to them.
Call for Papers: International Conference on NLP & Signal (NLPSIG 2023)
Scope & Topics
International Conference on NLP & Signal (NLPSIG 2023) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of Signal and Natural Language Processing (NLP).
Authors are solicited to contribute to the conference by submitting articles that illustrate research results, projects, surveys, and industrial experiences describing significant advances in these areas.
Topics of interest include, but are not limited to, the following:
Chunking/Shallow Parsing
Dialogue and Interactive Systems
Deep learning and NLP
Discourse and Pragmatics
Information Extraction, Retrieval, Text Mining
Interpretability and Analysis of Models for NLP
Language Grounding to Vision, Robotics and Beyond
Lexical Semantics
Linguistic Resources
Machine Learning for NLP
Machine Translation
NLP and Signal Processing
NLP Applications
Ontology
Paraphrasing/Entailment/Generation
Parsing/Grammatical Formalisms
Phonology, Morphology
POS tagging
Question Answering
Resources and Evaluation
Semantic Processing
Sentiment Analysis, Stylistic Analysis, and Argument Mining
Speech and Multimodality
Speech Recognition and Synthesis
Spoken Language Processing
Statistical and Knowledge based methods
Summarization
Theory and Formalism in NLP
Signal Processing & NLP
Computer Vision, Image Processing & NLP
NLP, AI & Signal
Paper Submission
Authors are invited to submit papers through the conference Submission System by May 06, 2023. Submissions must be original and should not have been published previously or be under consideration for publication while being evaluated for this conference. The proceedings of the conference will be published by International Journal on Cybernetics & Informatics (IJCI) (Confirmed).
Selected papers from NLPSIG 2023, after further revisions, will be published in the special issue of the following journals.
International Journal on Natural Language Computing (IJNLC)
International Journal of Ubiquitous Computing (IJU)
International Journal of Data Mining & Knowledge Management Process (IJDKP)
Signal & Image Processing : An International Journal (SIPIJ)
International Journal of Ambient Systems and Applications (IJASA)
International Journal of Grid Computing & Applications (IJGCA)
Important Dates
Submission Deadline : May 06, 2023
Authors Notification : May 25, 2023
Final Manuscript Due : June 08, 2023
11th International Conference on Software Engineering & Trends (SE 2023)
May 27 ~ 28, 2023, Vancouver, Canada
https://acsit2023.org/se/index
Scope & Topics
11th International Conference on Software Engineering & Trends (SE 2023) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of Software Engineering. The goal of this conference is to bring together researchers and practitioners from academia and industry to focus on understanding Modern software engineering concepts and establishing new collaborations in these areas.
Authors are solicited to contribute to the conference by submitting articles that illustrate research results, projects, surveys, and industrial experiences describing significant advances in the areas of software engineering and applications. Topics of interest include, but are not limited to, the following:
The Software Process
Software Engineering Practice
Web Engineering
Quality Management
Managing Software Projects
Advanced Topics in Software Engineering
Multimedia and Visual Software Engineering
Software Maintenance and Testing
Languages and Formal Methods
Web-based Education Systems and Learning Applications
Software Engineering Decision Making
Knowledge-based Systems and Formal Methods
Search Engines and Information Retrieval
Paper Submission
Authors are invited to submit papers through the conference Submission System by April 08, 2023. Submissions must be original and should not have been published previously or be under consideration for publication while being evaluated for this conference. The proceedings of the conference will be published by Computer Science Conference Proceedings (H index 35) in Computer Science & Information Technology (CS & IT) series (Confirmed).
Selected papers from SE 2023, after further revisions, will be published in the special issue of the following journals.
The International Journal of Software Engineering & Applications (IJSEA) -ERA indexed
International Journal of Computer Science, Engineering and Applications (IJCSEA)
Important Dates
Submission Deadline : April 08, 2023
Authors Notification : April 29, 2023
Final Manuscript Due : May 06, 2023
PERFORMING AN EXPERIMENTAL PLATFORM TO OPTIMIZE DATA MULTIPLEXING
This article is based on preliminary work on the OSI model management layers to optimize industrial wired data transfer over low-data-rate wireless technology. Our previous contribution dealt with the development of a demonstrator carrying CAN bus transfer frames (1 Mbps) over a low-rate wireless channel provided by Zigbee technology. In order to be compatible with all the other industrial protocols, we describe in this paper our contribution to the design of an innovative Wireless Device (WD) and a software tool, which aim to determine the best architecture (hardware/software) and wireless technology to be used, taking into account the wired protocol requirements. To validate the proper functioning of this WD, we will develop an experimental platform to test the different strategies provided by our software tool. We can consequently identify the best configuration (hardware/software) by supplying the required parameters of the wired protocol as inputs (load, bit rate, acknowledge timeout) and analyzing the proposed WD architecture characteristics as outputs (delay introduced by the system, buffer size needed, CPU speed, power consumption) against the input requirements. It will be important to know whether the gain comes from a hardware strategy, e.g. with a hardware accelerator, or a software strategy with a more perf…
GENERIC SOPC PLATFORM FOR VIDEO INTERACTIVE SYSTEM WITH MPMC CONTROLLER
Today, a significant number of embedded systems focus on multimedia applications with an almost insatiable demand for low-cost, high-performance, and low-power hardware. In this paper, we present a reconfigurable and generic hardware platform for image and video processing. The proposed platform uses the benefits offered by the Field Programmable Gate Array (FPGA) to attain this goal. In this context, a prototype system is developed based on the Xilinx Virtex-5 FPGA with the integration of embedded processors, embedded memory, DDR, interface technologies, Digital Clock Managers (DCM), and an MPMC. The MPMC is an essential component for design performance tuning and real-time video processing, and we demonstrate the important role of this interface in multi-video applications. In fact, for successful deployment of DRAM it is mandatory to use a flexible and scalable interface. Our system introduces diverse modules, such as cut-video detection and video zoom-in/out, which makes this architecture usable as a universal video processing platform under different application requirements. This platform facilitates the development of video and image processing applications.
This paper presents an inverting buck-boost DC-DC converter design. A negative supply voltage is needed in a variety of applications, but only a few such DC-DC converters are available on the market; one example application is OLED, a new display type especially suited for small digital camera or mobile phone displays. Design challenges that arise when negative voltages have to be handled on chip are discussed, such as continuous/discontinuous mode transition problems, negative-voltage feedback, and negative over-voltage protection. Both devices operate in a fixed-frequency PWM mode or, alternatively, in PFM mode. The single-inductor topology is called an inverting buck-boost converter, or simply an inverter. The proposed converter has been implemented in a TSMC 0.13-µm 2P4M CMOS process, and the chip area is 325 × 300 µm².
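As a quick sanity check on the topology (an illustrative sketch, not taken from the paper), the ideal continuous-conduction-mode transfer function of an inverting buck-boost converter is Vout = −Vin·D/(1−D), where D is the PWM duty cycle:

```python
def inverting_buck_boost_vout(vin, duty):
    """Ideal CCM output of an inverting buck-boost converter.

    Vout = -Vin * D / (1 - D): the output is inverted, with |Vout| < Vin
    for D < 0.5 (buck region) and |Vout| > Vin for D > 0.5 (boost region).
    """
    assert 0 <= duty < 1, "duty cycle must lie in [0, 1)"
    return -vin * duty / (1 - duty)

print(inverting_buck_boost_vout(3.3, 0.5))   # -3.3 V: buck/boost boundary
print(inverting_buck_boost_vout(3.3, 0.75))  # about -9.9 V: boost region
```

The 3.3 V input is an arbitrary example value, not a figure from the paper.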
A Case Study: Task Scheduling Methodologies for High Speed Computing Systems
High-speed computing meets ever-increasing real-time computational demands by leveraging flexibility and parallelism. Flexibility is achieved when the computing platform is designed with heterogeneous resources to support the multifarious tasks of an application, whereas task scheduling brings parallel processing. Efficient task scheduling is critical to obtaining optimized performance in Heterogeneous Computing Systems (HCS). In this paper, we review various application scheduling models that provide parallelism for homogeneous and heterogeneous computing systems, as well as various scheduling methodologies targeted at high-speed computing systems, and prepare a summary chart. The comparative study of scheduling methodologies for high-speed computing systems has been carried out based on the attributes of both platform and application: execution time, nature of task, task handling capability, and type of host and computing platform. Finally, the summary chart demonstrates the need for developing scheduling methodologies for Heterogeneous Reconfigurable Computing Systems (HRCS), an emerging high-speed computing platform for real-time applications.
A NOVEL METHODOLOGY FOR TASK DISTRIBUTION IN HETEROGENEOUS RECONFIGURABLE COMPUTING SYSTEMS
Modern embedded systems are being modeled as Heterogeneous Reconfigurable Computing Systems (HRCS), where reconfigurable hardware, i.e. a Field Programmable Gate Array (FPGA), and soft-core processors act as computing elements. An efficient task distribution methodology is therefore essential for obtaining high performance in modern embedded systems. In this paper, we present a novel methodology for task distribution called the Minimum Laxity First (MLF) algorithm, which takes advantage of the runtime reconfiguration of the FPGA in order to effectively utilize the available resources. The MLF algorithm is a list-based dynamic scheduling algorithm that uses attributes of both tasks and computing resources as a cost function to distribute the tasks of an application to the HRCS. In this paper, an on-chip HRCS computing platform is configured on a Virtex-5 FPGA using Xilinx EDK. The real-time applications JPEG and OFDM transmitters are represented as task graphs, and the tasks are then distributed, both statically and dynamically, to the HRCS platform in order to evaluate the performance of the designed task distribution model. Finally, the performance of the MLF algorithm is compared with existing static scheduling algorithms; the comparison shows that the MLF algorithm outperforms them in terms of efficient utilization of on-chip resources and also speeds up application execution.
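The abstract does not spell out the MLF cost function; under the standard definition of laxity (deadline minus current time minus remaining execution time), a hypothetical single-resource minimum-laxity-first dispatch loop might look like the following Python sketch. The task names and numbers are invented JPEG/OFDM-style examples, not the paper's task graphs:

```python
from dataclasses import dataclass, field

@dataclass(order=True)
class Task:
    laxity: int                            # only field used for comparison
    name: str = field(compare=False)
    exec_time: int = field(compare=False)
    deadline: int = field(compare=False)

def mlf_schedule(tasks):
    """Dispatch tasks in minimum-laxity-first order on a single resource.

    Laxity at time t is deadline - t - exec_time; the ready task with the
    least slack is the most urgent and is dispatched next.
    """
    t = 0
    order = []
    pending = list(tasks)
    while pending:
        for task in pending:
            task.laxity = task.deadline - t - task.exec_time
        nxt = min(pending)        # minimum laxity first
        pending.remove(nxt)
        order.append(nxt.name)
        t += nxt.exec_time        # resource is busy until the task finishes
    return order

jobs = [Task(0, "DCT", 4, 12), Task(0, "Huffman", 2, 5), Task(0, "IFFT", 3, 20)]
print(mlf_schedule(jobs))  # -> ['Huffman', 'DCT', 'IFFT']
```

A real HRCS scheduler would also weigh resource attributes (FPGA area, reconfiguration time) in the cost function; this sketch shows only the laxity ordering.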
The payment industry is largely aligned in its desire to create embedded payment systems ready for the modern digital age. The trend to embed payments into a software platform is often regarded as the first step towards a broader trend of embedded finance based on digital representations of fiat currencies. It became clear to our research team that there are no technologies and protocols that are protected against quantum computing attacks and that enable automatic embedded payments, online or offline, with no fear of counterfeit, P2P or device-to-device, in real time, without intermediaries, in any denomination, even continuous payments per time or service, while preserving the privacy of all parties without enabling illicit activities. We therefore decided to utilize the Generic Innovation Engine [1], which is based on Artificial Intelligence Assistance innovation-acceleration methodologies and tools, in order to boost the progress of innovation of the necessary solutions. These methodologies accelerate innovation across the board and propose a framework for natural and artificial intelligence collaboration in pursuit of an innovative (R&D) objective. The outcome of deploying these Artificial Innovation Assistant (AIA) methodologies was tens of patents that yield solutions, a few of which are described in this paper. We argue that a promising avenue for automated embedded payment systems to fulfil people's desire for privacy when conducting payments, and national security agencies' demand for quantum-safe security, could be based on DeFi and digital currency platforms that do not suffer from the flaws of DLT-based solutions, while introducing real advantages in all aspects: being quantum-resilient; enabling users to decide with whom, if at all, to share information, identity, transaction details, etc., all without trade-offs; complying with AML measures; and accommodating the potential for high transaction volumes. It is not legacy bank accounts, and it is not peer-dependent, nor a self-organizing network.
2nd International Conference on Computing and Information Technology Trends (CCITT 2023)
2nd International Conference on Computing and Information Technology Trends (CCITT 2023) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of Computing and Information Technology Trends. The conference looks for significant contributions to all major fields of Computer Science, Computer Engineering, and Information Technology, in both theoretical and practical aspects.
International Journal of Embedded Systems and Applications (IJESA) Vol. 4, No. 4, December 2014
DOI: 10.5121/ijesa.2014.4401
AUTOMATIC ANALYSIS OF SMOOTHING TECHNIQUES BY SIMULATION MODEL BASED REAL-TIME SYSTEM FOR PROCESSING 3D HUMAN FACES
Suranjan Ganguly, Debotosh Bhattacharjee and Mita Nasipuri
Department of Computer Science and Engineering, Jadavpur University, India
ABSTRACT
The pivotal research work carried out and described here acknowledges the importance of various smoothing techniques for processing 3D human faces from 2.5D range face images. The smoothing techniques have been developed and implemented using MATLAB-Simulink for real-time processing in embedded systems. In addition, the significance of the smoothed 2.5D range image over the original face range image has been demonstrated, and its time complexity has been reported through an array of experiments. Variations in time complexity are also obtained using different optimization levels and execution modes. A set of filtering techniques, namely the Max filter, Min filter, Median filter, Mean filter, Mid-point filter, and Gaussian filter, has been designed and illustrated using a Simulink model. The model takes a depth face image (i.e. the range face image) as input in real time and presents the improvement over the original face images. In the design flow, the performance of every block has also been characterized with range face images from the Frav3D, GavabDB, and Bosphorus databases. In the experimental section of this research article, an array of performance analyses for these smoothing techniques under varied frameworks is explained.
KEYWORDS
3D face image, 2.5D face image, MATLAB-Simulink, Smoothing techniques, Range face image
1. INTRODUCTION
Different computer vision methodologies, such as object recognition, registration, and identification, deploy 2D or 3D face images in automation systems. Hence, the growing scope of images and the variety of applications require complex image processing methodologies. Sometimes, however, these algorithms fall behind due to the presence of noise, outliers, spikes, holes, etc. As a result, some important image data is suppressed or lost, or some noisy data gets processed, leading to poor performance of the particular mechanism.
Images may incorporate various kinds of noise due to acquisition problems, quantization or digitization errors, scanning errors, etc. It is therefore necessary to filter out this noise and smooth the facial surface of the input face images for practical use of the algorithms in real-time applications. In this context, different linear as well as non-linear filtering techniques [1], namely the Max filter, Min filter, Mid-point filter, Mean filter,
Gaussian filter, and Median filter, have been applied to 3D human face images, which will be advantageous for further processing.
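Although the paper implements these filters as Simulink blocks, a stand-alone sketch of the six smoothing operators (illustrative only; the 3×3 window, clamped borders, and the Gaussian kernel are assumptions, not the paper's settings) can be written over a pixel neighborhood:

```python
import statistics

def window3x3(img, r, c):
    """Collect the 3x3 neighborhood of pixel (r, c), clamping at the borders."""
    rows, cols = len(img), len(img[0])
    return [img[min(max(r + dr, 0), rows - 1)][min(max(c + dc, 0), cols - 1)]
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)]

def smooth(img, reducer):
    """Apply 'reducer' over every 3x3 neighborhood of a 2.5D range image."""
    return [[reducer(window3x3(img, r, c)) for c in range(len(img[0]))]
            for r in range(len(img))]

# Non-linear and linear reducers corresponding to the six filters.
FILTERS = {
    "max":      max,
    "min":      min,
    "median":   statistics.median,
    "mean":     lambda w: sum(w) / len(w),
    "midpoint": lambda w: (max(w) + min(w)) / 2,
}

# Gaussian: a weighted mean with an assumed 3x3 binomial kernel.
GAUSS = [1, 2, 1, 2, 4, 2, 1, 2, 1]
FILTERS["gaussian"] = lambda w: sum(g * v for g, v in zip(GAUSS, w)) / sum(GAUSS)

# A toy 2.5D range image with a single spike (depth outlier) in the middle.
depth = [[10, 10, 10],
         [10, 99, 10],
         [10, 10, 10]]
print(smooth(depth, FILTERS["median"])[1][1])  # spike suppressed -> 10
```

The median filter removes the spike entirely, while the mean and Gaussian filters only attenuate it, which is why non-linear filters are often preferred for spike noise on range images.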
Human face images are considered a reliable biometric feature for automatic security systems due to crucial properties such as uniqueness and universality, and because they are well accepted and well understood by people. The face is always visible, and everyone has one, whereas other biometric features such as hand geometry, ear, or eye may be lost for some reason. Surveillance cameras are also used to capture human faces. Hence, face recognition [2-4] has received most of researchers' attention over the last two decades.
In addition, there is a strong case for preferring 3D human face images [4] over 2D images.
Specifically, 2D images preserve the reflectance characteristics of the object in the pixel data,
so they depend heavily on illumination variations, whereas 3D face images preserve the depth
values over the X-Y plane. Another property that makes 3D face images more convenient than 2D is
geometrical rotation along the X, Y, and Z axes. Thus pose variation, a major problem in current
face recognition, can be resolved using a face registration [5-6] mechanism.
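Because the depth geometry is available, a pose difference can be undone with an ordinary rigid rotation. A minimal sketch follows (hypothetical helper names; this illustrates the general idea, not the registration method of [5-6]):

```python
import numpy as np

def rotate_points(points, angle_deg, axis="y"):
    """Rotate an N x 3 cloud of (x, y, z) face points about one coordinate axis."""
    t = np.deg2rad(angle_deg)
    c, s = np.cos(t), np.sin(t)
    if axis == "x":
        r = np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])
    elif axis == "y":
        r = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
    else:  # "z"
        r = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return points @ r.T

# A scan captured 30 degrees off-frontal (about Y) can be re-posed by -30 degrees.
scan = np.random.rand(500, 3)
posed = rotate_points(scan, 30.0, axis="y")
frontal = rotate_points(posed, -30.0, axis="y")  # recovers the original cloud
```

This is exactly why the 2D case is harder: no such invertible transform exists for intensity images under pose change.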
The state of the art of filtering techniques for 3D face processing is summarized in Table 1. In
this literature, their importance for face registration and (or) recognition has particularly
motivated the development of an array of smoothing techniques, their implementation in a real-time
system, and the illustration of their significance for processing purposes.
Table 1. The state-of-the-art of image smoothing techniques for 3D face images.
Reference Description
[7] Authors have demonstrated the effect of the median filter for removing sharp
spikes, and again interpolation technique has been added to fill the holes on the
face image.
[8] Authors have compared the performance of landmark localization technique with
array of smoothing methods, namely Max Filter, Min filter, Gaussian filter, Mean
filter, and Weighted median filter.
[9] Here, authors have used the median and Gaussian filters for smoothing. The
median filter removes spikes from the 3D faces, and Gaussian filtering is then
applied to remove surface noise.
[10] To detect the nose-tip, authors have computed Gradient Weighting Filter method
during the smoothing process of their proposed algorithm.
2. MOTIVATION AND APPLICATION
Having studied the recent state of the art regarding the influence of smoothing techniques on 3D
human face processing, the authors propose an approach to real-time processing for several of
these filtering techniques using a MATLAB-Simulink model.
2.1. Range Image creation
The 2.5D range [11] face images are gray-scale-like face images. The difference between gray 2D
and 2.5D images is that 2.5D images are composed of depth values (Z values) taken from 3D images,
whereas 2D images hold intensity values. Thus the background has the minimum depth value, zero (0),
and the nose region (especially the 'pronasal' landmark) has the maximum depth [6] [12] value, 255. In figure
1, the 2D, 2.5D, and 3D face images of a randomly selected subject from the Frav3D database [13]
are shown.
Figure 1. 2D, 2.5D, and 3D face images from the Frav3D database
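The range-image construction described above can be sketched in a few lines of NumPy. This is a hedged illustration, not the authors' MATLAB implementation; the grid size and the keep-the-nearest-point rule are assumptions:

```python
import numpy as np

def make_range_image(x, y, z, width=4, height=4):
    """Project a 3D point cloud onto an X-Y grid of depth values.

    Depths are normalized to 0-255, so the empty background stays at the
    minimum (0) and the closest surface point -- typically the pronasal
    landmark -- reaches the maximum (255).
    """
    img = np.zeros((height, width), dtype=np.uint8)
    col = np.rint((x - x.min()) / max(np.ptp(x), 1e-9) * (width - 1)).astype(int)
    row = np.rint((y - y.min()) / max(np.ptp(y), 1e-9) * (height - 1)).astype(int)
    depth = np.rint((z - z.min()) / max(np.ptp(z), 1e-9) * 255).astype(np.uint8)
    np.maximum.at(img, (row, col), depth)  # keep the nearest point per pixel
    return img

# Three toy points: the largest z lands at 255, the background stays 0.
img = make_range_image(np.array([0.0, 1.0, 0.5]),
                       np.array([0.0, 1.0, 0.5]),
                       np.array([0.0, 10.0, 5.0]))
```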
Besides the Frav3D face database, the GavabDB [14] and Bosphorus [15] face databases have also
been considered for emphasizing the significance of the smoothing techniques using a simulation
model [19-20] of the embedded system. In figure 2, the created range face images of randomly
selected subjects from the GavabDB and Bosphorus databases are illustrated.
(a) From GavabDB database (b) From Bosphorus database
Figure 2. Created range face image
2.2. Smoothing algorithms
During the investigation phase, the authors implemented spatial linear as well as order-statistic
[1] [16] (i.e. non-linear) filters on the depth values of 2.5D range face images. Of the linear
filters [17], the Mean filter and the Gaussian filter are computed, whereas in the order-statistic
category of image filtering, the Median, Max, Min, and Mid-point filters are applied to the range
face images.
2.2.1. Preprocessing technique
Before this series of filters is tested on the depth values, a preprocessing task is carried out.
The range images are padded with zeros before and after each row and column of the image. Thus
every depth value, including those in the outermost rows and columns of the image, can be
processed, which improves the performance analysis; otherwise those values would be left out of
the spatial filtering. This phenomenon is shown in figure 3, where a block of depth values on an
8×8 grid from a section of a 2.5D range face image is shown.
(a) without padding (b) with zero padding
Figure 3. The importance of depth image padding
The highlighted (circled in yellow) depth value is processed first by the smoothing techniques.
If a smoothing technique is applied to the original range image, the outermost two rows and
columns remain unchanged, whereas padding the original image with zeros ensures that these
sections are also processed. The padding is done in real time and is removed after the filtering
techniques have been applied, so the image dimensions are preserved before and after smoothing.
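Assuming a NumPy representation of the range image, the pad-filter-crop cycle might look as follows. The median filter stands in for any of the 3×3 techniques, and `scipy.ndimage` is an assumption, not the Simulink implementation:

```python
import numpy as np
from scipy.ndimage import median_filter

def smooth_with_padding(depth, size=3):
    """Zero-pad the border, smooth, then crop back to the input shape."""
    pad = size // 2
    # Zeros before and after every row and column let the window cover
    # even the outermost depth values.
    padded = np.pad(depth, pad, mode="constant", constant_values=0)
    smoothed = median_filter(padded, size=size)
    # Removing the pad preserves the original dimensions.
    return smoothed[pad:-pad, pad:-pad]

face = np.random.randint(0, 256, (8, 8)).astype(np.uint8)
out = smooth_with_padding(face)  # same 8 x 8 shape as the input
```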
2.2.2. Smoothing by linear filter
Linear filters do not depend on any ordering of the depth (or intensity) values within the
filtering kernel. The filters in this category simply compute linear functions (such as Gaussian
or averaging) to remove noise, irrespective of the ordering of the values encompassed by the
filtering window.
Gaussian filter: This is an important smoothing filter of the linear class. The weights of the
Gaussian filter [1] [16] are chosen from the Gaussian kernel. For the qualitative measurements on
2.5D depth face images in this work, a 2D Gaussian kernel is implemented. The kernel function [16]
is computed with σ = 3. It is observed that a larger value of σ, the standard deviation, yields a
wider filter and a stronger smoothing effect.
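Under the assumption of a `scipy.ndimage` back end (not the Simulink blocks themselves), the σ = 3 smoothing described above reduces to a single call; comparing against a smaller σ illustrates the "wider kernel, stronger smoothing" observation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

depth = np.random.rand(128, 128) * 255.0  # stand-in 2.5D range image

mild = gaussian_filter(depth, sigma=1)
strong = gaussian_filter(depth, sigma=3)  # the sigma used in the text

# A larger standard deviation averages over a wider neighbourhood, so the
# surface variation (standard deviation of the depth values) drops further
# for sigma = 3 than for sigma = 1.
```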
Mean filter: The Mean filter [1] is a simple linear spatial filter that averages the neighboring
depth values under the filter mask. It is also referred to as a low-pass filter [18]. To analyze
the effect of the averaging (mean) filter on depth face images, a 3×3 kernel has been used.
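A hedged sketch of the 3×3 averaging step (again via `scipy.ndimage`, which is an assumption; the constant-zero border mimics the zero-padding scheme of section 2.2.1):

```python
import numpy as np
from scipy.ndimage import uniform_filter

depth = np.arange(25, dtype=float).reshape(5, 5)  # toy 5 x 5 depth block

# 3 x 3 moving average; the constant-zero border mimics zero padding.
mean3 = uniform_filter(depth, size=3, mode="constant", cval=0.0)

# At an interior pixel the result is just the plain 3 x 3 neighbourhood mean:
window = depth[1:4, 1:4]
# mean3[2, 2] matches window.mean(), i.e. 12.0 for this toy block.
```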
2.2.3. Smoothing by nonlinear filter
These filters are also known as order-statistic filters [1]. They are non-linear smoothing filters
whose output depends on the ordering of the values encompassed by the filtering mask; the result
of this ranking is used to replace the center depth value of the mask. For the non-linear
order-statistic filters, the authors have likewise used a 3×3 filter mask.
Max filter: In this noise-filtering mechanism, the highest (100th percentile) depth value in the
neighborhood is chosen. Hence, in a depth-based image filtering scheme, holes (containing the
minimum, '0', depth value) may be removed.
Min filter: This filter selects the minimum (0th percentile) depth value among the data selected
by the filtering window. Hence, spikes (containing maximum depth) on the facial surface caused by
scanning errors can be suppressed.
Mid-point filter: This smoothing technique selects the value halfway between the maximum and the
minimum in the window. Its effect is similar to that of the Mean filter described above.
Median filter: This is one of the best-known order-statistic filtering schemes; it selects the
median (the 50th percentile) of the 9 values covered by the 3×3 filtering window. It has the
additional qualitative property of producing less blurring [1] than the linear filters.
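The four order-statistic variants can be sketched together; `scipy.ndimage` is again an assumption, and the toy image plants one artificial hole and one spike so the effects described above are visible:

```python
import numpy as np
from scipy.ndimage import maximum_filter, median_filter, minimum_filter

depth = np.full((9, 9), 100.0)  # flat toy facial patch
depth[2, 2] = 0.0               # a hole: missing (zero) depth
depth[6, 6] = 255.0             # a spike from a scanning error

mx = maximum_filter(depth, size=3)   # 100th percentile: fills the hole
mn = minimum_filter(depth, size=3)   # 0th percentile: clips the spike
md = median_filter(depth, size=3)    # 50th percentile of the 9 values
mid = (mx + mn) / 2.0                # mid-point of window max and min

# The hole and the spike are both replaced by the surrounding depth:
# mx[2, 2] == 100.0, mn[6, 6] == 100.0, and the median removes both outliers.
```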
The different outputs from these filters have been demonstrated in the discussion section where
the significance of each output is broadly discussed.
2.3. Discussion
In this section, the outputs of the respective filters for randomly selected subjects from the
three databases are shown in figure 4.
Figure 4. Visualization of smoothing effects. Rows: Gaussian smoothing, Mean, Max, Min, Mid-point,
and Median filters; columns: subjects from the Frav3D, GavabDB, and Bosphorus databases, their
appearance after smoothing, and the significance of each filter.
From these observations, it is clear that the smoothing techniques have a great effect on the
depth values of the human face, so their real-time application may significantly improve various
aspects of face processing such as registration and recognition.

From the Gaussian analysis it is noted that the outer portion of the face is blurred more than the
rest: the image effectively shrinks towards its dense center while the edges are blurred. Depth
values near the facial regions such as the eyes, eyebrows, nose, and lips all show this behavior.
This is a property of the Gaussian filter, and it has been successfully executed in real time on
depth values. In the case of the other linear filter, the Mean filter, the edges are preserved; at
these points the depth value is nearly the same as the average computed by the 3×3 window. This
might be significant for landmark localization, face-component extraction, etc. The same behavior
is also found for the Mid-point filtering technique, which selects the value halfway between the
window maximum and minimum and thus behaves much like the averaging filter. For the Max and Min
filters, the authors observed similar behavior after smoothing: because each selects either the
0th- or 100th-percentile depth value under the filtering window, the result resembles a binary
thresholded image, as shown in [11]; hence spikes and holes can be removed in this process. The
well-known order-statistic Median filter preserves the elliptical concave and convex curve details
near the eye, nose, and lip regions.
3. MODEL DESIGN AND IMPLEMENTATION
The model has been designed and implemented in the MATLAB-Simulink environment. Different modules
from the simulation tool have been coupled to finalize the implemented model; the details of the
blocks are illustrated later in this section. It is an approach to real-time human-computer
interaction for visualizing the effect of the different filtering techniques on 3D human faces.
Beyond the model design, successful code generation has also been carried out. In figure 5, the
developed model is described.
Figure 5. Illustration of the developed models (the highlighted subsystem implements the
Max / Min / Mean / Gaussian / Median / Mid-point filter)
In the figure, the zoomed region shows the different user-defined embedded MATLAB function blocks
that have been designed for real-time processing in an embedded system.

In total, three 'User-Defined MATLAB Function' blocks, four 'Sinks' blocks, one 'Constant Source',
and two 'Math Operations' blocks are incorporated into the model. Each of these blocks plays a
crucial role in its successful implementation.
As described earlier in the motivation section, 3D face images [11] differ considerably from 2D
images: instead of intensity values, the depth (the value along the Z axis) over the X-Y plane is
preserved, in formats such as '.wrl', '.abs', and '.bnt'. Hence, before the smoothing technique is
applied, the 2.5D range face image is generated from the 3D face image using an 'Interpreted
MATLAB Function' (shown in the upper-left corner of figure 5). The 'Constant Source' supplies the
depth values as input to the simulation model. The 'MATLAB Function' blocks embed the source code
that displays the range face image, processes it, and highlights the significance of the result.
The 'Sinks' produce an output from each block for better human-computer interaction. Finally, the
two mathematical operations are used for real-time 2D matrix manipulation.
After the successful implementation of the simulation model, code generation for the embedded
system is required. For this purpose, code was successfully generated by choosing 'C' as the
target language, with the compiler optimization level set to 'Optimizations on', the 'Fixed-step'
solver option (a fixed step size of 0.02), and 'Auto generated comments'. In figure 6, the code
generation report for the Mid-point filter is shown.
Figure 6. Description of the code generation report
In this figure, the expression for 'ZI', the constant block of the model, is highlighted. It
contains the depth values of the range image that are used during the execution of the model.
4. EXPERIMENTAL RESULT
Although there are other parameters for validating the model for real-time application, the
authors have used two of them. One is the simulation mode, i.e. whether the model is tested in
Normal, Accelerator, or Rapid Accelerator mode; the other is the compiler optimization level
(either 'Optimizations off' or 'Optimizations on').

In table 2, an array of analyses of these execution parameters with a simulation stop time of 1.0
is summarized. The measurements use a range image selected randomly from the Frav3D database,
tested on a machine with 4 GB of RAM, the Windows 7 (64-bit) Professional operating system, and an
Intel i5-3470 CPU running at 3.20 GHz.
Table 2. Performance analysis of different parameter configurations (simulation stop time 1.0;
Accelerator and Rapid Accelerator times were recorded after the corresponding target had been
successfully built)

Smoothing technique   Simulation mode     Optimizations off    Optimizations on
Gaussian filter       Normal              13.967028 seconds     9.988573 seconds
                      Accelerator          9.907918 seconds     9.813664 seconds
                      Rapid Accelerator   12.664001 seconds    11.924632 seconds
Mean filter           Normal              11.504452 seconds     6.921782 seconds
                      Accelerator          6.954723 seconds     6.503646 seconds
                      Rapid Accelerator    9.511578 seconds     9.285953 seconds
Max filter            Normal               7.275049 seconds     7.225375 seconds
                      Accelerator          6.528666 seconds     6.706566 seconds
                      Rapid Accelerator   10.056241 seconds     9.704065 seconds
Min filter            Normal               7.599231 seconds     7.000915 seconds
                      Accelerator          6.622307 seconds     6.610166 seconds
                      Rapid Accelerator    9.467616 seconds    10.232370 seconds
Mid-point filter      Normal              11.867022 seconds     7.518357 seconds
                      Accelerator          7.109918 seconds     7.186364 seconds
                      Rapid Accelerator   10.006641 seconds     9.296432 seconds
Median filter         Normal              12.546941 seconds    12.620668 seconds
                      Accelerator         12.267989 seconds    12.691126 seconds
                      Rapid Accelerator   10.047549 seconds     9.809040 seconds
From this outline, it is noticed that the time complexity is higher for the techniques that
require more mathematical computation. The Mid-point filter takes more time than the Max and (or)
Min filters, and the Median filter also takes more time for the smoothing operation in real time,
since it requires ordering the depth values encompassed by the filtering window. The Gaussian
filter likewise consumes more time to process a 3D human face image in a real-time application.
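The relative ordering of the filters can also be probed outside Simulink. The sketch below times one NumPy/SciPy pass per filter; it is a hypothetical harness, and its absolute numbers will not match the Simulink figures in table 2:

```python
import time

import numpy as np
from scipy.ndimage import (gaussian_filter, maximum_filter, median_filter,
                           minimum_filter, uniform_filter)

depth = np.random.rand(256, 256)  # stand-in range image

def timed(fn, repeats=20):
    """Average wall-clock seconds for one filtering pass."""
    start = time.perf_counter()
    for _ in range(repeats):
        fn(depth)
    return (time.perf_counter() - start) / repeats

timings = {
    "Gaussian":  timed(lambda d: gaussian_filter(d, sigma=3)),
    "Mean":      timed(lambda d: uniform_filter(d, size=3)),
    "Max":       timed(lambda d: maximum_filter(d, size=3)),
    "Min":       timed(lambda d: minimum_filter(d, size=3)),
    "Median":    timed(lambda d: median_filter(d, size=3)),
    "Mid-point": timed(lambda d: (maximum_filter(d, size=3)
                                  + minimum_filter(d, size=3)) / 2.0),
}
for name, secs in sorted(timings.items(), key=lambda kv: kv[1]):
    print(f"{name:9s} {secs * 1e3:8.3f} ms")
```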
In figure 7, a comparative study of the time complexities of the different smoothing methods under
the array of parameter arrangements is shown.
Figure 7. Comparison of the performance study
The choice of the two variables, simulation mode and compiler optimization level, is explained
here. The 'Normal', 'Accelerator', and 'Rapid Accelerator' modes are compared side by side so that
the minimum time span of each filtering technique can be identified. Along with this, the compiler
optimization parameter contrasts the fastest execution ('Optimizations on') with the fastest
compilation ('Optimizations off').
5. CONCLUSIONS
The application of smoothing techniques in image processing and computer vision (especially in
face recognition) has crucial implications. In this work, the authors have explained the influence
of several smoothing techniques on 2.5D face images. In addition, the authors have created a
real-time application model in MATLAB-Simulink and validated it over a series of parameter
compositions. Code generation has also been conducted and tested. Along with this, the model has
been validated on three modern 3D face databases (namely Frav3D, GavabDB, and Bosphorus) covering
two different 3D image formats, '.wrl' and '.bnt'.

The authors now plan to implement these methods for range face images on a Field Programmable
Gate Array (FPGA) to build a dedicated system with much lower time complexity.
ACKNOWLEDGEMENTS
Authors are thankful to a project supported by DeitY (Letter No.: 12(12)/2012-ESD), MCIT,
Govt. of India, at Department of Computer Science and Engineering, Jadavpur University, India
for providing the necessary infrastructure for this work.
REFERENCES
[1] Gonzalez, R. C., and Woods, R. E., (2007) "Digital Image Processing," 3rd Edition, Prentice
Hall.
[2] "Face Recognition," pp. 1-10, August 2006, URL: http://www.biometrics.gov/Documents/facerec.pdf
[3] Jelsovka, D., Hudec, R., Breznan, M., Kamencay, P., (2012) "2D-3D Face Recognition Using
Shapes of Facial Curves Based on Modified CCA Method", 22nd International Conference
Radioelektronika, pp. 1-4.
[4] Ganguly, S., Bhattacharjee, D., and Nasipuri, M., (2014) "3D Face Recognition from Range
Images Based on Curvature Analysis", Volume 04, Issue 03, pp. 748-753.
[5] Ayyagari, V. R., Boughorbel, F., Koschan, A., Abidi, M. A., (2005) "A New Method for Automatic
3D Face Registration", Proceedings of the IEEE Computer Society Conference on Computer Vision and
Pattern Recognition (CVPR'05), pp. 1-8.
[6] Ganguly, S., Bhattacharjee, D., and Nasipuri, M., (2014) "Range Face Image Registration Using
EFRI from 3D Images", in Advances in Intelligent and Soft Computing, Springer, accepted in
Proceedings of the 3rd Frontiers of Intelligent Computing: Theory and Applications (FICTA 2014).
[7] Soltana, W. B., Ardabilian, M., Lemaire, P., Huang, D., Szeptycki, P., Chen, L., Erdogmus, N.,
Daniel, L., Dugelay, J., Amor, B. B., Drira, H., Daoudi, M., Colineau, J., (2012) "3D Face
Recognition: A Robust Multi-Matcher Approach to Data Degradations", in Proc. of ICB 2012,
pp. 103-110.
[8] Bagchi, P., Bhattacharjee, D., Nasipuri, M., & Basu, D. K. (2012) "A Novel Approach to
Nose-Tip and Eye-Corners Detection Using H-K Curvature Analysis in Case of 3D Images",
International Journal of Computational Intelligence and Informatics, Vol. 2, No. 1.
[9] Hatem, H., Beiji, Z., Majeed, R., Lutf, M., Waleed, J., (2013) "Nose Tip Localization in
Three-Dimensional Facial Mesh Data", International Journal of Advancements in Computing Technology
(IJACT), Volume 5, Number 13, pp. 99-105.
[10] Margret N. Silva, Vipul Dalal, (2013) "Nose Tip Detection Using Gradient Weighting Filter
Smoothing," International Journal of Engineering Research and Development, Volume 9, Issue 5,
pp. 09-11.
[11] Ganguly, S., Bhattacharjee, D., and Nasipuri, M., (2014) "2.5D Face Images: Acquisition,
Processing and Application", Computer Networks and Security, International Conference on
Communication and Computing (ICC-2014), organized by Alpha College of Engineering, India,
publisher: Elsevier Science and Technology, pp. 36-44, ISBN: 978935107244.
[12] Dhane, P., Jain, A., Kutty, K. K., (2011) "A New Algorithm for 3D Object Representation and
Its Application for Human Face Verification", International Conference on Image Information
Processing, pp. 1-6.
[13] Frav3D face database, URL: http://www.frav.es/databases/frav3d/
[14] GavabDB face database, URL: http://gavab.escet.urjc.es/recursos_en.html
[15] Bosphorus face database, URL: http://bosphorus.ee.boun.edu.tr/default.aspx
[16] Jayaraman, S., Esakkirajan, S., and Veerakumar, T., (2010) "Digital Image Processing", 3rd
Edition, TMH.
[17] Linear filters, URL: http://luthuli.cs.uiuc.edu/~daf/courses/cs5432009/week%203/simplefilters.pdf
[18] Spatial filters - Mean filter, URL: http://homepages.inf.ed.ac.uk/rbf/HIPR2/mean.htm
[19] Ganguly, S., Bhattacharjee, D., and Nasipuri, M., (2014) "Analyzing the Performance of
Haar-Wavelet Transform on Thermal Facial Image Using MATLAB-Simulink Model", Proceedings of the
1st International Conference on Microelectronics, Circuit and Systems, Volume 2, pp. 106-111,
ISBN: 81-85824-46-0.
[20] TFRS using Simulink, URL: https://www.youtube.com/watch?v=3l-Qd2zv5xs&feature=youtu.be
AUTHORS
SURANJAN GANGULY received the M.Tech (Computer Technology) degree from Jadavpur
University, India, in 2014, and the B.Tech (Information Technology) degree in 2011.
His research interests include image processing and pattern recognition. He was a
project fellow of a UGC (Govt. of India) sponsored major research project at Jadavpur
University. Currently, he is a project fellow of a DeitY (Govt. of India, MCIT) funded
research project at Jadavpur University.
BHATTACHARJEE received the MCSE and Ph.D. (Eng.) degrees from Jadavpur
University, India, in 1997 and 2004 respectively. He was associated with different
institutes in various capacities until March 2007. After that he joined his Alma Mater,
Jadavpur University. His research interests pertain to the applications of computational
intelligence techniques like Fuzzy logic, Artificial Neural Network, Genetic Algorithm,
Rough Set Theory, etc. in Face Recognition, OCR, and Information Security. He is a life
member of Indian Society for Technical Education (ISTE, New Delhi), Indian Unit for
Pattern Recognition and Artificial Intelligence (IUPRAI), and a senior member of IEEE (USA).
MITA NASIPURI received her B.E.Tel.E., M.E.Tel.E., and Ph.D. (Engg.) degrees
from Jadavpur University, in 1979, 1981 and 1990, respectively. Prof. Nasipuri has been
a faculty member of J.U since 1987. Her current research interest includes image
processing, pattern recognition, and multimedia systems. She is a senior member of the
IEEE, U.S.A., Fellow of I.E. (India) and W.B.A.S.T, Kolkata, India.