Abstract
This paper proposes a novel approach to facial expression identification based on statistical features extracted from
a 4-bit co-occurrence matrix. First, the color image is converted into a grey-level image using a 7-bit quantization
technique. To reduce the complexity of the full Grey Level Co-occurrence Matrix (GLCM), the co-occurrence matrix is then
generated on a 4-bit grey image obtained by further quantization. From this 4-bit co-occurrence matrix, four features are
extracted: contrast, homogeneity, energy and correlation, and these form the feature vector. A k-NN classifier then uses
the same four statistical features, extracted from a test facial image, to identify the type of expression it shows. The
proposed method achieved better comparative results when tested on the Japanese Female Facial Expression (JAFFE) database.
Key Words: Facial Expression Identification, Classification, k-NN Classifier, Quantization, GLCM
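The pipeline above (quantize to 16 grey levels, accumulate a co-occurrence matrix, derive contrast, homogeneity, energy and correlation) can be sketched as follows. This is a minimal NumPy illustration, not the paper's code; the pixel offset, level count and feature formulas are standard Haralick-style choices assumed here:

```python
import numpy as np

def glcm_features(gray, levels=16, dx=1, dy=0):
    """Quantize to `levels` grey levels (16 = 4 bits), build a normalized
    co-occurrence matrix for the pixel offset (dy, dx), and return the four
    statistics named in the abstract."""
    q = (gray.astype(np.uint16) * levels // 256).astype(np.uint8)  # 8-bit -> 4-bit

    cm = np.zeros((levels, levels), dtype=np.float64)
    h, w = q.shape
    for y in range(h - dy):                 # accumulate co-occurring pixel pairs
        for x in range(w - dx):
            cm[q[y, x], q[y + dy, x + dx]] += 1
    cm /= cm.sum()                          # joint probability table

    i, j = np.indices((levels, levels))
    contrast = np.sum(cm * (i - j) ** 2)
    homogeneity = np.sum(cm / (1.0 + (i - j) ** 2))
    energy = np.sum(cm ** 2)
    mu_i, mu_j = np.sum(i * cm), np.sum(j * cm)
    sd_i = np.sqrt(np.sum(cm * (i - mu_i) ** 2))
    sd_j = np.sqrt(np.sum(cm * (j - mu_j) ** 2))
    correlation = np.sum(cm * (i - mu_i) * (j - mu_j)) / (sd_i * sd_j)
    return np.array([contrast, homogeneity, energy, correlation])
```

A k-NN classifier would then compare these 4-element vectors between training and test images, e.g. by Euclidean distance.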
Extraction of texture features by using Gabor filter in wheat crop disease de... (eSAT Journals)
Abstract
In a country like India, a great many people depend on agriculture, yet many farmers do not know about the new
diseases affecting their farms. As a disease changes, the disease-control policy must change with it. Many farmers
observe crop diseases very sharply, but problems arise whenever a new disease strikes the crop. The climate also
changes abruptly at times, and for such reasons farmers are often unable to recognize the various diseases.
If a farmer cannot identify a disease quickly, it affects the life of the crop and, indirectly, the total productivity of
the farm. As is well known, the world is facing serious problems due to rapid population growth, so our goal is to
increase agricultural productivity using image-processing technology, which can help farmers to a great extent [7].
In this research work, we detect crop disease using an Artificial Neural Network (ANN), which works very effectively.
First, a digital image taken with a digital camera is passed through a Gaussian filter and then through an adaptive
median filter to remove the noise present in the image: the Gaussian filter removes Gaussian noise, while the adaptive
median filter removes impulsive noise and also reduces distortions. The image is then passed to the segmentation stage,
where we chose the CIELAB color space to extract the color components properly and used a Gabor filter for segmentation.
Finally, we distinguish crop diseases on the basis of texture features extracted by the Gabor filter [6].
Key Words: Artificial Neural Networks, Image Preprocessing, Image Acquisition, Feature Extraction, Classification
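The texture-feature step above can be illustrated with a small Gabor filter bank. This is a hedged NumPy sketch, not the authors' implementation; the kernel parameters (size, sigma, wavelength, orientations) are illustrative assumptions:

```python
import numpy as np

def gabor_kernel(ksize=21, sigma=4.0, theta=0.0, lam=10.0, gamma=0.5):
    """Real (even) Gabor kernel: a cosine carrier under a Gaussian envelope."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)      # rotate coordinates by theta
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr ** 2 + gamma ** 2 * yr ** 2) / (2 * sigma ** 2))
    return g * np.cos(2 * np.pi * xr / lam)

def gabor_texture_features(img, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Mean and std of the filter response at each orientation (FFT convolution)."""
    feats = []
    for th in thetas:
        k = gabor_kernel(theta=th)
        kp = np.zeros_like(img, dtype=np.float64)   # zero-pad kernel to image size
        kp[:k.shape[0], :k.shape[1]] = k
        resp = np.real(np.fft.ifft2(np.fft.fft2(img.astype(np.float64)) * np.fft.fft2(kp)))
        feats += [resp.mean(), resp.std()]
    return np.array(feats)
```

Feeding the per-orientation response statistics to an ANN classifier would complete the pipeline described in the abstract.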
IJRET : International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Content Based Image Retrieval using Color Boosted Salient Points and Shape fe... (CSCJournals)
Salient points are locations in an image where there is a significant variation with respect to a chosen image feature. Since the set of salient points in an image captures important local characteristics of that image, they can form the basis of a good image representation for content-based image retrieval (CBIR). Salient features are generally determined from the local differential structure of images and focus on the shape saliency of the local neighborhood. Most of these detectors are luminance-based, with the disadvantage that the distinctiveness of the local color information is completely ignored when determining salient image features. To fully exploit the possibilities of salient point detection in color images, color distinctiveness should be taken into account in addition to shape distinctiveness. This paper presents a method for salient point determination based on color saliency. The color and texture information around these points of interest serve as the local descriptors of the image. In addition, the shape information is captured in terms of edge images computed using Gradient Vector Flow fields, and invariant moments are then used to record the shape features. The combination of the local color, texture and global shape features provides a robust feature set for image retrieval. The experimental results demonstrate the efficacy of the method.
Hierarchical Approach for Total Variation Digital Image Inpainting (IJCSEA Journal)
The art of recovering an image from damage in an undetectable form is known as inpainting. Manual inpainting is most often a very time-consuming process; digitalization of the technique makes it automatic and faster. In this paper, after the user selects the regions to be reconstructed, the algorithm automatically reconstructs the lost regions with the help of the information surrounding them. The existing methods perform very well when the region to be reconstructed is very small, but fail to reconstruct properly as the area increases. This paper describes a hierarchical method by which the area to be inpainted is reduced in multiple levels, and the Total Variation (TV) method is used to inpaint at each level. The algorithm gives better performance when compared with other existing algorithms such as nearest-neighbor interpolation, inpainting through blurring and Sobolev inpainting.
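A single level of TV inpainting can be sketched as gradient descent on a smoothed total-variation energy over the masked region. This is a simplified single-scale illustration, not the paper's hierarchical algorithm; the step size, iteration count and smoothing constant are assumptions:

```python
import numpy as np

def inpaint_tv(img, mask, n_iter=500, step=0.2, eps=1e-2):
    """Fill mask==True pixels by descending the smoothed TV energy.
    Known pixels are never modified; only the hole region is updated."""
    u = img.astype(np.float64).copy()
    u[mask] = u[~mask].mean()                        # crude initial fill
    for _ in range(n_iter):
        gx = np.gradient(u, axis=1)
        gy = np.gradient(u, axis=0)
        norm = np.sqrt(gx ** 2 + gy ** 2 + eps ** 2)  # eps avoids division by zero
        # div(grad u / |grad u|) is the curvature-driven TV descent direction
        div = np.gradient(gx / norm, axis=1) + np.gradient(gy / norm, axis=0)
        u[mask] += step * div[mask]
    return u
```

The hierarchical scheme of the paper would apply such a step at several scales so that larger holes shrink before the finest level is inpainted.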
Face Recognition Using Neural Network Based Fourier Gabor Filters & Random Pr... (CSCJournals)
Face detection and recognition have many applications in a variety of fields such as authentication, security, video surveillance and human-interaction systems. In this paper, we present a neural network system for face recognition. A feature vector based on Fourier Gabor filters is used as the input to our classifier, a Back Propagation Neural Network (BPNN). Because the network's input vector has a large dimension, we investigate Random Projection as a dimensionality-reduction method to shrink its feature subspace. Theory and experiment indicate the robustness of our solution.
Segmentation of medical images using metric topology – a region growing approach (Ijrdt Journal)
A metric topological approach to region-growing segmentation is presented in this article. Region-growing techniques have gained significant importance in medical image processing for the fine segregation of the tumor-bearing part of an image. Conventional algorithms concentrated on segmentation at a coarser level and failed to produce enough evidence for their validity. In this article a novel technique is proposed, based on a metric topological neighbourhood, together with the introduction of entropy as a new objective measure alongside the traditional validity measures of accuracy, PSNR and MSE. This measure is introduced to show that the amount of information lost after segmentation is greatly reduced, which elucidates the effectiveness of the algorithm. The algorithm is tested on well-known benchmark ground-truth images against the proposed region-growing segmented images, and the results validate its effectiveness.
Fuzzy Region Merging Using Fuzzy Similarity Measurement on Image Segmentation (IJECEIAES)
Some image regions carry unbalanced information, such as blurred contours, shade, and uneven brightness; such regions are called ambiguous regions. Ambiguous regions cause problems during the region-merging step of interactive image segmentation because they carry double information, as both object and background. We propose a new region-merging strategy using fuzzy similarity measurement for image segmentation. The proposed method has four steps: the first is initial segmentation using the mean-shift algorithm; the second is placing markers manually to indicate the object and background regions; the third is determining the fuzzy (ambiguous) regions in the image; and the last is fuzzy region merging using fuzzy similarity measurement. The experimental results demonstrate that the proposed method segments natural images and dental panoramic images successfully, with average misclassification error (ME) values of 1.96% and 5.47%, respectively.
Image enhancement plays a vital role in improving the quality of an image. Enhancement techniques basically enhance
the foreground information, retain the background, and improve the overall contrast of an image. In some cases the
background of an image hides its structural information. This paper proposes an algorithm that enhances the foreground
and background parts separately, stretches the contrast of the image at the inter-object and intra-object levels, and
then combines them into an enhanced image. The results are compared with various classical methods using image
quality measures.
A comparative study on content based image retrieval methods (IJLT EMAS)
Content-based image retrieval (CBIR) is a method of finding images in a huge image database according to a person's
interests. "Content-based" here means that the search involves analysing the actual content present in the image. As
databases of images grow day by day, researchers and scholars are searching for better image-retrieval techniques that
maintain good efficiency. This paper presents the visual features and the various ways of retrieving images from a huge
image database.
Content Based Image Retrieval Using Full Haar Sectorization (CSCJournals)
Content based image retrieval (CBIR) deals with the retrieval of relevant images from a large image database and works on features extracted from the images. In this paper we use the idea of sectorizing Full Haar Wavelet transformed images to extract features into 4, 8, 12 and 16 sectors. The paper proposes two planes to be sectored, i.e. the forward plane (even plane) and the backward plane (odd plane). The similarity measure is also an essential part of CBIR, as it lets one find the closeness of the query image to the database images; we have used two similarity measures, namely Euclidean distance (ED) and sum of absolute differences (AD). The overall retrieval performance of the algorithm has been measured by the average precision-recall crossover point and by LIRS and LSRR. The paper compares the performance of the methods with respect to the type of plane, the number of sectors, the type of similarity measure and the values of LIRS and LSRR.
Rice seed image classification based on HOG descriptor with missing values im... (TELKOMNIKA JOURNAL)
Rice is a primary source of food consumed by almost half of the world's population, and rice quality mainly depends on the purity of the rice seed. In order to ensure the purity of a rice variety, the recognition process is an essential stage. In this paper, we first propose using the histogram of oriented gradients (HOG) descriptor to characterize rice seed images. Since image sizes vary, the features extracted by HOG have different dimensions and cannot be used directly by a classifier; we therefore apply several imputation methods to fill in the missing values of the HOG descriptor. The approach is evaluated on the VNRICE benchmark dataset.
In this paper, we introduce a system for the automatic recognition of Amazigh characters, based on the Random Forest method, in unconstrained pictures taken with mobile-phone terminals. After some preprocessing of the picture, the text is segmented into lines and then into characters. In the feature-extraction stage, the input data are represented as a vector of primitives of the zoning, diagonal, horizontal, Gabor-filter and Zernike-moment types; these characteristics are linked to pixel densities and are extracted from binary pictures. In the classification stage, we examine four classification methods with two different classifier types, namely Support Vector Machines (SVM) and the Random Forest method. After verification tests, the learning-and-recognition system based on the Random Forest showed good performance on a base of 100 picture models.
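Of the feature types listed, zoning is the simplest to illustrate: the binary glyph is divided into a grid of zones, and the foreground pixel density of each zone becomes one feature. A minimal sketch (the 4×4 grid is an assumed choice, not taken from the paper):

```python
import numpy as np

def zoning_features(binary, grid=(4, 4)):
    """Split a binary glyph into grid zones and return per-zone foreground density."""
    h, w = binary.shape
    gy, gx = grid
    feats = []
    for i in range(gy):
        for j in range(gx):
            # integer slicing keeps zones contiguous even when h, w don't divide evenly
            zone = binary[i * h // gy:(i + 1) * h // gy,
                          j * w // gx:(j + 1) * w // gx]
            feats.append(zone.mean())   # fraction of foreground pixels in the zone
    return np.array(feats)
```

Concatenating such densities with the other primitive types would give the input vector for the SVM or Random Forest classifier.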
The Effects of Segmentation Techniques in Digital Image Based Identification ... (TELKOMNIKA JOURNAL)
This paper presents the effects of segmentation techniques on the identification of Ethiopian coffee varieties. In
Ethiopia, coffee varieties are classified according to their growing region; the most important coffee-growing regions
are Bale, Harar, Jimma, Limu, Sidamo and Welega, and the coffee beans of these regions vary in color, shape and texture.
We investigated various segmentation techniques for an efficient coffee-bean variety identification system. Images of
six different coffee-bean varieties from Oromia and Southern Ethiopia were acquired and analyzed. For this study the
Otsu, Fuzzy C-Means (FCM) and K-means segmentation techniques were considered, and a back-propagation neural network
(BPNN) was used to classify the varieties. In the experiments, 94.54% accuracy was achieved when the BPNN was used with
the FCM segmentation technique.
BAYESIAN CLASSIFICATION OF FABRICS USING BINARY CO-OCCURRENCE MATRIX (ijistjournal)
Classification of fabrics is usually performed manually, which requires considerable human effort. The goal of this paper is to recognize and classify types of fabric, in order to identify a weave pattern automatically using an image-processing system. In this paper, fabric texture features are extracted using Grey Level Co-occurrence Matrices (GLCM) as well as Binary Level Co-occurrence Matrices (BLCM). Co-occurrence matrices characterize the texture of an image by calculating how often pairs of pixels with specific values occur in a specified spatial relationship, and statistical measures are then extracted from the matrix. The features extracted from the GLCM and the BLCM are used to classify the texture with a Bayesian classifier, in order to compare their effectiveness.
International Journal of Computational Engineering Research (IJCER) is an international online journal published monthly in English. The journal publishes original research work that contributes significantly to furthering scientific knowledge in Engineering and Technology.
Contrast enhancement using various statistical operations and neighborhood pr... (sipij)
Histogram Equalization is a simple and effective contrast enhancement technique. In spite of its popularity, Histogram
Equalization still has some limitations: it produces artifacts and unnatural images, and local details are not
considered. Because of these limitations, many other equalization techniques have been derived from it with various
upgrades. In the proposed method, statistics play an important role in the image processing: statistical operations are
applied to the image to obtain the desired result, such as manipulation of brightness and contrast. Thus, a novel
algorithm using statistical operations and neighborhood processing is proposed in this paper, and the algorithm has
proven effective in contrast enhancement both in theory and in experiment.
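The Histogram Equalization baseline that the abstract starts from can be sketched in a few lines of NumPy. This is the textbook global method; the paper's own statistical/neighborhood algorithm is not reproduced here:

```python
import numpy as np

def equalize_hist(img):
    """Global histogram equalization for an 8-bit image: map each grey level
    through the normalized cumulative histogram (CDF)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                  # CDF value of the darkest occupied bin
    scale = 255.0 / max(img.size - cdf_min, 1)
    lut = np.clip(np.round((cdf - cdf_min) * scale), 0, 255).astype(np.uint8)
    return lut[img]                            # apply the lookup table per pixel
```

Its two known weaknesses, artifacts and lost local detail, are exactly what neighborhood-based refinements aim to fix.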
An Intelligent Skin Color Detection Method based on Fuzzy C-Means with Big Da... (CrimsonpublishersTTEFT)
An Intelligent Skin Color Detection Method based on Fuzzy C-Means with Big Data, by Chih Huang Yen, in Trends in Textile Engineering & Fashion Technology.
Hierarchical Approach for Total Variation Digital Image InpaintingIJCSEA Journal
The art of recovering an image from damage in an undetectable form is known as inpainting. The manual work of inpainting is most often a very time consum ing process. Due to digitalization of this technique, it is automatic and faster. In this paper, after the user selects the regions to be reconstructed, the algorithm automatically reconstruct the lost regions with the help of the information surrounding them. The existing methods perform very well when the region to be reconstructed is very small, but fails in proper reconstruction as the area increases. This paper describes a Hierarchical method by which the area to be inpainted is reduced in multiple levels and Total Variation(TV) method is used to inpaint in each level. This algorithm gives better performance when compared with other existing algorithms such as nearest neighbor interpolation, Inpainting through Blurring and Sobolev Inpainting.
Face Recognition Using Neural Network Based Fourier Gabor Filters & Random Pr...CSCJournals
Face detection and recognition has many applications in a variety of fields such as authentication, security, video surveillance and human interaction systems. In this paper, we present a neural network system for face recognition. Feature vector based on Fourier Gabor filters is used as input of our classifier, which is a Back Propagation Neural Network (BPNN). The input vector of the network will have large dimension, to reduce its feature subspace we investigate the use of the Random Projection as method of dimensionality reduction. Theory and experiment indicates the robustness of our solution.
Segmentation of medical images using metric topology – a region growing approachIjrdt Journal
A metric topological approach to the region growing based segmentation is presented in this article. Region based growing techniques has gained a significant importance in the medical image processing field for finest of segregation of tumor detected part in the image. Conventional algorithms were concentrated on segmentation at the coarser level which failed to produce enough evidence for the validity of the algorithm. In this article a novel technique is proposed based on metric topological neighbourhood also with the introduction of new objective measure entropy, apart from the traditional validity measures of Accuracy, PSNR and MSE. This measure is introduced to prove the amount of information lost after segmentation is reduced to greater extent which elucidates the effectiveness of the algorithm. This algorithm is tested on the well known benchmarking of testing in ground truth images in par with the proposed region based growing segmented images. The results validated show the validation of effectiveness of the algorithm.
Fuzzy Region Merging Using Fuzzy Similarity Measurement on Image Segmentation IJECEIAES
Some image’s regions have unbalance information, such as blurred contour, shade, and uneven brightness. Those regions are called as ambiguous regions. Ambiguous region cause problem during region merging process in interactive image segmentation because that region has double information, both as object and background. We proposed a new region merging strategy using fuzzy similarity measurement for image segmentation. The proposed method has four steps; the first step is initial segmentation using mean-shift algorithm. The second step is giving markers manually to indicate the object and background region. The third step is determining the fuzzy region or ambiguous region in the images. The last step is fuzzy region merging using fuzzy similarity measurement. The experimental results demonstrated that the proposed method is able to segment natural images and dental panoramic images successfully with the average value of misclassification error (ME) 1.96% and 5.47%, respectively.
Image enhancement technique plays vital role in improving the quality of the image. Enhancement
technique basically enhances the foreground information and retains the background and improve the
overall contrast of an image. In some case the background of an image hides the structural information of
an image. This paper proposes an algorithm which enhances the foreground image and the background
part separately and stretch the contrast of an image at inter-object level and intra-object level and then
combines it to an enhanced image. The results are compared with various classical methods using image
quality measures
A comparative study on content based image retrieval methodsIJLT EMAS
Content-based image retrieval (CBIR) is a method of
finding images from a huge image database according to persons’
interests. Content-based here means that the search involves
analysis the actual content present in the image. As database of
images is growing daybyday, researchers/scholars are searching
for better techniques for retrieval of images maintaining good
efficiency. This paper presents the visual features and various
ways for image retrieval from the huge image database.
Content Based Image Retrieval Using Full Haar SectorizationCSCJournals
Content based image retrieval (CBIR) deals with retrieval of relevant images from the large image database. It works on the features of images extracted. In this paper we are using very innovative idea of sectorization of Full Haar Wavelet transformed images for extracting the features into 4, 8, 12 and 16 sectors. The paper proposes two planes to be sectored i.e. Forward plane (Even plane) and backward plane (Odd plane). Similarity measure is also very essential part of CBIR which lets one to find the closeness of the query image with the database images. We have used two similarity measures namely Euclidean distance (ED) and sum of absolute difference (AD). The overall performance of retrieval of the algorithm has been measured by average precision and recall cross over point and LIRS, LSRR. The paper compares the performance of the methods with respect to type of planes, number of sectors, types of similarity measures and values of LIRS and LSRR.
Rice seed image classification based on HOG descriptor with missing values im...TELKOMNIKA JOURNAL
Rice is a primary source of food consumed by almost half of world population. Rice quality mainly depends on the purity of the rice seed. In order to ensure the purity of rice variety, the recognition process is an essential stage. In this paper, we firstly propose to use histogram of oriented gradient (HOG) descriptor to characterize rice seed images. Since the size of image is totally random and the features extracted by HOG can not be used directly by classifier due to the different dimensions. We apply several imputation methods to fill the missing data for HOG descriptor. The experiment is applied on the VNRICE benchmark dataset to evaluate the proposed approach.
In this paper; we introduce a system of automatic recognition of Amazigh characters based on the Random Forest Method in non-constrictive pictures that are stemmed from the terminals Mobile phone. After doing some pretreatments on the picture, the text is segmented into lines and then into characters. In the stage of characteristics extraction, we are representing the input data into the vector of primitives of the zoning types, of diagonal, horizontal, Gabor filters and of the Zernike moment. These characteristics are linked to pixels’ densities and they are extracted on binary pictures. In the classification stage, we examine four classification methods with two different classifiers types namely the Support vector machines (SVM) and the Random Forest method. After some checking tests, the system of learning and recognition which is based on the Random Forest has shown a good performance on a basis of 100 models of pictures.
The Effects of Segmentation Techniques in Digital Image Based Identification ...TELKOMNIKA JOURNAL
This paper presents the effects of segmentation techniques in the identification of Ethiopian
coffee variety. In Ethiopia, coffee varieties are classified based on their growing region. The most widely
coffee growing regions in Ethiopia are Bale, Harar, Jimma, Limu, Sidamo and Welega. Coffee beans of
these regions very in color shape and texture. We investigated various segmentation techniques for
efficient coffee beans variety identification system. Images of six different coffee beans varieties in Oromia
and Southern Ethiopia were acquired and analyzed. For this study Otsu, Fuzzy-C-Means (FCM) and Kmeans
segmentation techniques are considered. For classification of the varieties of Ethiopian coffee
beans back propagation neural network (BPNN) is used. From the experiment 94.54% accuracy is
achieved when BPNN is used on FCM segmentation technique.
BAYESIAN CLASSIFICATION OF FABRICS USING BINARY CO-OCCURRENCE MATRIXijistjournal
Classification of fabrics is usually performed manually which requires considerable human efforts. The goal of this paper is to recognize and classify the types of fabrics, in order to identify a weave pattern automatically using image processing system. In this paper, fabric texture feature is extracted using Grey Level Co-occurrence Matrices as well as Binary Level Co-occurrence Matrices. The Co-occurrence matrices functions characterize the texture of an image by calculating how often pairs of pixel with specific values and in a specified spatial relationship occur in an image, and then extracting statistical measures from this matrix. The extracted features from GLCM and BLCM are used to classify the texture by Bayesian classifier to compare their effectiveness.
International Journal of Computational Engineering Research(IJCER) is an intentional online Journal in English monthly publishing journal. This Journal publish original research work that contributes significantly to further the scientific knowledge in engineering and Technology.
Contrast enhancement using various statistical operations and neighborhood pr...sipij
Histogram Equalization is a simple and effective contrast enhancement technique. In spite of its popularity,
Histogram Equalization still has some limitations: it produces artifacts and unnatural images, and local
details are not considered. Due to these limitations, many other equalization techniques have been
derived from it with some upgradation. In the proposed method, statistics play an important role in image
processing: statistical operations are applied to the image to get the desired result, such as
manipulation of brightness and contrast. Thus, a novel algorithm using statistical operations and
neighborhood processing has been proposed in this paper, and the algorithm has proven to be effective in
contrast enhancement in both theory and experiment.
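The classical Histogram Equalization baseline that this abstract builds on can be sketched in a few lines of NumPy; this is the textbook CDF-remapping form, not the paper's proposed statistical/neighborhood method.

```python
import numpy as np

def hist_equalize(img):
    """Classical histogram equalization of an 8-bit grey image via the CDF."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                 # first non-zero CDF entry
    lut = np.round((cdf - cdf_min) / (img.size - cdf_min) * 255.0)
    lut = np.clip(lut, 0, 255).astype(np.uint8)
    return lut[img]                           # remap every pixel through the LUT
```

For a low-contrast image whose grey levels occupy only [100, 149], the remapping stretches the output to the full [0, 255] range, which is exactly the behaviour (and the source of the artifacts) the abstract refers to.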
An Intelligent Skin Color Detection Method based on Fuzzy C-Means with Big Da...CrimsonpublishersTTEFT
An Intelligent Skin Color Detection Method based on Fuzzy C-Means with Big Data by Chih Huang Yen in Trends in Textile Engineering & Fashion Technology
Tracking number plate from vehicle usingijfcstjournal
In traffic surveillance, tracking the number plate of a vehicle is an important task that demands an
intelligent solution. In this document, extraction and recognition of the number plate from vehicle images
has been done using Matlab. It is assumed that images of the vehicle have been captured with a digital
camera. Alphanumeric characters on the plate are extracted and recognized using template images of
alphanumeric characters.
This paper presents a new algorithm in MATLAB which has been used to extract the number plate from the
vehicle under various luminance conditions. The extracted number plate can be written to a text file for
verification purposes. Number plate identification is helpful in finding stolen cars, in car parking management
systems and in identifying vehicles in traffic.
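Template matching of the kind used here for the alphanumeric characters can be illustrated with a small normalized cross-correlation sketch; the exhaustive sliding-window search and the NCC score are generic illustrative choices, not the paper's exact Matlab pipeline.

```python
import numpy as np

def ncc(patch, template):
    """Zero-mean normalized cross-correlation of two equally sized arrays."""
    a = patch - patch.mean()
    b = template - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def best_match(image, template):
    """Slide the template over the image; return (row, col) of the best score."""
    th, tw = template.shape
    H, W = image.shape
    scores = np.array([[ncc(image[r:r + th, c:c + tw], template)
                        for c in range(W - tw + 1)]
                       for r in range(H - th + 1)])
    return np.unravel_index(np.argmax(scores), scores.shape)
```

Because the score is normalized, an exact occurrence of the template scores 1.0 regardless of overall brightness, which is why this measure is popular for character templates under varying illumination.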
Face detection using the 3 x3 block rank patterns of gradient magnitude imagessipij
Face detection locates faces prior to various face-related applications. The objective of face detection is to determine whether or
not there are any faces in an image and, if any, to detect the location of each face. Face detection in real images is challenging
due to the large variability of illumination and face appearances. This paper proposes a face detection algorithm using the 3×3
block rank patterns of gradient magnitude images and a geometrical face model. First, the illumination-corrected image of the face
region is obtained using the brightness plane that is produced using the locally minimum brightness of each block. Next, the
illumination-corrected image is histogram equalized, the face region is divided into nine (3×3) blocks, and two directional
(horizontal and vertical) gradient magnitude images are computed, from which the 3×3 block rank patterns are obtained. For face
detection, using the FERET and GT databases, three types of 3×3 block rank patterns are determined a priori as templates based on
the distribution of the sum of the gradient magnitudes of each block in the face candidate region, which is also composed of nine
(3×3) blocks. The 3×3 block rank patterns roughly classify whether the detected face candidate region contains a face or not.
Finally, facial features are detected and used to validate the face model. The face candidate is validated as a face if it matches
the geometrical face model. The proposed algorithm is tested on the Caltech database images and real images. Experimental results
with a number of test images show the effectiveness of the proposed algorithm.
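The idea of ranking the nine blocks of a candidate region by their summed gradient magnitude can be sketched as follows; the central-difference gradients and the descending rank convention are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def block_rank_pattern(region):
    """Rank the nine (3x3) blocks of a face-candidate region by the sum of
    horizontal + vertical gradient magnitude (rank 0 = strongest block)."""
    gy, gx = np.gradient(region.astype(np.float64))   # directional gradients
    mag = np.abs(gx) + np.abs(gy)
    h, w = mag.shape
    bh, bw = h // 3, w // 3
    sums = np.array([[mag[r*bh:(r+1)*bh, c*bw:(c+1)*bw].sum() for c in range(3)]
                     for r in range(3)])
    order = np.argsort(-sums.ravel())                 # strongest block first
    ranks = np.empty(9, dtype=int)
    ranks[order] = np.arange(9)
    return ranks.reshape(3, 3)
```

Comparing such a rank pattern against a few a priori templates is cheap, which is why it works as a rough first-stage classifier before the more expensive facial-feature validation.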
National Flags Recognition Based on Principal Component Analysisijtsrd
Recognizing an unknown flag in a scene is challenging due to the diversity of the data and the complexity of the identification process. Flags are associated with geographical regions, countries and nations, and identifying the flags of different countries is a difficult task. The aim of the study is to propose a feature-extraction-based recognition system for Myanmar's national flag. Image features are acquired from the region and state flags, which are identified by using principal component analysis (PCA). PCA is a statistical approach used for reducing the number of features in the national flags recognition system. Soe Moe Myint | Moe Moe Myint | Aye Aye Cho "National Flags Recognition Based on Principal Component Analysis" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3 | Issue-5, August 2019, URL: https://www.ijtsrd.com/papers/ijtsrd26775.pdf Paper URL: https://www.ijtsrd.com/other-scientific-research-area/other/26775/national-flags-recognition-based-on-principal-component-analysis/soe-moe-myint
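PCA-based feature reduction, as named in this abstract, reduces to mean-centering followed by an SVD; a minimal NumPy sketch (the function names and the k-component truncation are illustrative):

```python
import numpy as np

def pca_fit(X, k):
    """PCA via SVD: return (mean, components) for projecting rows of X to k dims."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:k]                     # top-k principal directions

def pca_project(X, mean, components):
    """Project feature rows onto the retained principal directions."""
    return (X - mean) @ components.T
```

Projecting onto the top-k directions and reconstructing recovers the data exactly when it truly lies in a k-dimensional subspace, which is the sense in which PCA "reduces the number of features" with minimal loss.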
Optimization of deep learning features for age-invariant face recognition IJECEIAES
This paper presents a methodology for Age-Invariant Face Recognition (AIFR) based on the optimization of deep learning features. The proposed method extracts deep learning features using transfer learning from unprocessed face images. To optimize the extracted features, a Genetic Algorithm (GA) procedure is designed to select the features most relevant to identifying a person from his/her facial images across different ages. For classification, K-Nearest Neighbor (KNN) classifiers with different distance metrics are investigated, i.e., Correlation, Euclidean, Cosine, and Manhattan distance metrics. Experimental results using a Manhattan-distance KNN classifier achieve the best Rank-1 recognition rates of 86.2% and 96% on the standard FGNET and MORPH datasets, respectively. Compared to the state-of-the-art methods, the proposed method needs no preprocessing stages, and the experiments show its advantage over other related methods.
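A KNN classifier with a pluggable distance metric, as investigated in this abstract, can be sketched as follows; the Manhattan distance is shown as the default, and the majority-vote and fallback details are illustrative assumptions.

```python
import numpy as np

def knn_predict(train_X, train_y, x, k=3, metric="manhattan"):
    """k-NN majority vote over the k nearest training rows to x."""
    if metric == "manhattan":
        d = np.abs(train_X - x).sum(axis=1)        # cityblock distance
    else:
        d = np.sqrt(((train_X - x) ** 2).sum(axis=1))  # Euclidean fallback
    nearest = np.argsort(d)[:k]
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]
```

Swapping the metric changes only the distance line, which is what makes comparing Correlation, Euclidean, Cosine and Manhattan variants (as the paper does) straightforward.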
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
This paper describes for a robust face recognition system using skin segmentation technique. This paper addresses the problem of detecting faces in color images in the presence of various lighting conditions. In this paper the face is preprocessed using histogram equalization to avoid illumination problems and then is detected using skin segmentation method. The principal component analysis using neural network is used to recognize the extracted facial features.
A novel approach for performance parameter estimation of face recognition bas...IJMER
International Journal of Modern Engineering Research (IJMER) is Peer reviewed, online Journal. It serves as an international archival forum of scholarly research related to engineering and science education.
CLUSTERING AN AFRICAN HAIRSTYLE DATASET USING PCA AND K-MEANSIJCI JOURNAL
The adoption of digital transformation has not yet extended to building an African face shape classifier. African women
rely on beauty standards, recommendations, personal preference, or the newest trends in hairstyles to
decide on the appropriate hairstyle for them. In this paper, an approach is presented that uses K-means
clustering to classify images of African women. In order to identify potential facial clusters, a Haar cascade is
used for feature-based training, and K-means clustering is applied for image classification.
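Plain K-means clustering, as used in this abstract, can be sketched in NumPy; the deterministic farthest-point initialization here is an illustrative choice for reproducibility, not necessarily the paper's.

```python
import numpy as np

def kmeans(X, k, iters=100):
    """Lloyd's k-means on feature rows of X; returns (centroids, labels)."""
    # farthest-point initialization keeps the sketch deterministic
    centroids = [X[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centroids], axis=0)
        centroids.append(X[d.argmax()])
    centroids = np.array(centroids, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)                  # assign to nearest centroid
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):            # converged
            break
        centroids = new
    return centroids, labels
```

In the hairstyle setting, each row of X would be a feature vector extracted from a face image; the returned labels are the discovered face-shape clusters.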
With India growing, vehicle ownership has become a necessity, and this has resulted in a civic problem: parking areas have become congested due to the growing number of vehicles. The Number Plate Recognition (NPR) system plays an important role in applications ranging from parking to monitoring urban traffic and tracking automobile thefts. Various NPR systems are available today that work with different methodologies. In this paper, we attempt to review the various techniques and their usage. The NPR system has been implemented using template matching, and its accuracy was found to be 90.2% for number plates.
ZERNIKE MOMENT-BASED FEATURE EXTRACTION FOR FACIAL RECOGNITION OF IDENTICAL T...ijcseit
Face recognition is one of the most challenging problems in the domain of image processing and machine
vision. A face recognition system is critical when individuals have very similar biometric signatures, such
as identical twins. In this paper, a new, efficient facial-based identical twins recognition method is proposed
based on geometric moments. The geometric moment utilized is the Zernike Moment (ZM), used as a feature
extractor inside the facial area of identical twin images. The facial area in an image is detected using the
AdaBoost approach. The proposed method is evaluated on two datasets, Twins Days Festival and Iranian
Twin Society, which contain scaled and rotated facial images of identical twins under different illuminations.
The results prove the ability of the proposed method to recognize a pair of identical twins, and show
that the proposed method is robust to rotation, scaling and changing illumination.
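The Zernike Moment magnitude used as a feature here is rotation-invariant, which is the property that makes it attractive for twin recognition under rotated captures. A minimal sketch follows; the unit-disk mapping and the discrete normalization are illustrative assumptions.

```python
import numpy as np
from math import factorial

def zernike_moment(img, n, m):
    """|Z_nm| of a square grey image mapped onto the unit disk; the magnitude
    is invariant to image rotation."""
    N = img.shape[0]
    y, x = np.indices((N, N))
    xs = (2 * x - N + 1) / (N - 1)          # map pixel centres to [-1, 1]
    ys = (2 * y - N + 1) / (N - 1)
    rho = np.sqrt(xs ** 2 + ys ** 2)
    theta = np.arctan2(ys, xs)
    mask = rho <= 1.0
    # Zernike radial polynomial R_nm(rho)
    R = np.zeros_like(rho)
    for k in range((n - abs(m)) // 2 + 1):
        c = ((-1) ** k * factorial(n - k)
             / (factorial(k)
                * factorial((n + abs(m)) // 2 - k)
                * factorial((n - abs(m)) // 2 - k)))
        R = R + c * rho ** (n - 2 * k)
    V = R * np.exp(-1j * m * theta)          # conjugate basis function
    dA = (2.0 / (N - 1)) ** 2                # pixel area in disk coordinates
    Z = (n + 1) / np.pi * np.sum(img[mask] * V[mask]) * dA
    return abs(Z)
```

Rotating the input image only multiplies Z_nm by a unit phase factor, so the magnitude is unchanged; that invariance can be checked directly with a 90° rotation of the pixel grid.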
A Novel Mathematical Based Method for Generating Virtual Samples from a Front...CSCJournals
This paper deals with one-sample face recognition, which is a new challenging problem in pattern recognition. In the proposed method, the frontal 2D face image of each person is divided into sub-regions. After computing the 3D shape of each sub-region, a fusion scheme is applied to the sub-regions to create a total 3D shape for the whole face image. Then, the 2D face image is added to the corresponding 3D shape to construct a 3D face image. Finally, by rotating the 3D face image, virtual samples with different views are generated. Experimental results on the ORL dataset using nearest neighbor as the classifier reveal an improvement of about 5% in recognition rate for one sample per person by enlarging the training set with generated virtual samples. Compared with other related works, the proposed method has the following advantages: 1) only one single frontal face is required for face recognition, and the outputs are virtual images with variant views for each individual; 2) it needs only 3 key points of the face (eyes and nose); 3) the 3D shape estimation for generating virtual samples is fully automatic and faster than other 3D reconstruction approaches; 4) it is fully mathematical with no training phase, and the estimated 3D model is unique for each individual.
Human’s facial parts extraction to recognize facial expressionijitjournal
Real-time facial expression analysis is an important yet challenging task in human computer interaction.
This paper proposes a real-time person independent facial expression recognition system using a
geometrical feature-based approach. The face geometry is extracted using the modified active shape
model. Each part of the face geometry is effectively represented by the Census Transformation (CT) based
feature histogram. The facial expression is classified by the SVM classifier with exponential chi-square
weighted merging kernel. The proposed method was evaluated on the JAFFE database and in real-world
environment. The experimental results show that the approach yields a high recognition rate and is
applicable in real-time facial expression analysis.
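The Census Transformation (CT) based feature histogram mentioned in this abstract can be sketched as follows; the neighbour ordering and the greater-or-equal comparison convention are illustrative assumptions.

```python
import numpy as np

def census_transform(img):
    """8-bit census code per interior pixel: bit = 1 where the neighbour is
    greater than or equal to the centre pixel."""
    out = np.zeros((img.shape[0] - 2, img.shape[1] - 2), dtype=np.uint8)
    centre = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
              (0, 1), (1, -1), (1, 0), (1, 1)]
    for bit, (dy, dx) in enumerate(shifts):
        neigh = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        out |= (neigh >= centre).astype(np.uint8) << bit
    return out

def ct_histogram(img, bins=256):
    """CT feature histogram of an image region, one such histogram per part."""
    codes = census_transform(img)
    return np.bincount(codes.ravel(), minlength=bins)
```

Because the code depends only on local intensity ordering, the histogram is largely insensitive to illumination changes, which suits real-time expression analysis.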
Face Alignment Using Active Shape Model And Support Vector MachineCSCJournals
The Active Shape Model (ASM) is one of the most popular local texture models for face alignment. It is applied in many fields, such as locating facial features in images and face synthesis. However, experimental results show that the accuracy of the classical ASM is not high for some applications. This paper suggests some improvements on the classical ASM to increase the performance of the model in face alignment. Our four major improvements are: i) building a model combining a Sobel filter and the 2-D profile for searching faces in an image; ii) applying the Canny algorithm for edge enhancement on the image; iii) using a Support Vector Machine (SVM) to classify landmarks on the face, in order to determine the exact locations of these landmarks in support of ASM; iv) automatically adjusting the 2-D profile in the multi-level model based on the size of the input image. The experimental results on the CalTech face database and the Technical University of Denmark database (imm_face) show that our proposed improvements lead to far better performance.
Similar to Facial expression identification using four-bit co-occurrence matrix features and k-NN classifier (20)
Mechanical properties of hybrid fiber reinforced concrete for pavementseSAT Journals
Abstract
The effect of addition of mono fibers and hybrid fibers on the mechanical properties of concrete mixture is studied in the present
investigation. Steel fibers of 1% and polypropylene fibers 0.036% were added individually to the concrete mixture as mono fibers and
then they were added together to form a hybrid fiber reinforced concrete. Mechanical properties such as compressive, split tensile and
flexural strength were determined. The results show that hybrid fibers improve the compressive strength marginally as compared to
mono fibers, whereas hybridization improves split tensile strength and flexural strength noticeably.
Keywords: Hybridization, mono fibers, steel fiber, polypropylene fiber, improvement in mechanical properties.
Material management in construction – a case studyeSAT Journals
Abstract
The objective of the present study is to understand the problems occurring in the company because of improper material
management. In construction project operations, there is often a project cost variance in terms of material, equipment,
manpower, subcontractor, overhead cost, and general condition. Material is the main component in construction projects. Therefore,
if the material management is not properly managed it will create a project cost variance. Project cost can be controlled by taking
corrective actions towards the cost variance. Therefore, a methodology to diagnose and evaluate the procurement process
involved in material management and to launch continuous improvement was developed and applied. A thorough study was carried
out along with study of cases, surveys and interviews to professionals involved in this area. As a result, a methodology for diagnosis
and improvement was proposed and tested in selected projects. The results obtained show that the main problem of procurement is
related to schedule delays and lack of specified quality for the project. To prevent this situation it is often necessary to dedicate
important resources like money, personnel, time, etc. to monitor and control the process. A great potential for improvement was
detected if state of the art technologies such as, electronic mail, electronic data interchange (EDI), and analysis were applied to the
procurement process. These helped to eliminate the root causes for many types of problems that were detected.
Managing drought short term strategies in semi arid regions a case studyeSAT Journals
Abstract
Drought management needs multidisciplinary action. Interdisciplinary efforts among the experts in various fields of the droughts
prone areas are helpful to achieve a tangible and permanent solution to this recurring problem. The Gulbarga district has a total
area of around 16,240 sq. km and accounts for 8.45 per cent of the area of Karnataka state. The district is situated at latitude 17° 19'
60" North and longitude 76° 49' 60" East, entirely on the Deccan plateau, at a height of 300 to
750 m above MSL. With a sub-tropical, semi-arid climate, it is one among the drought-prone districts of Karnataka State. Drought
management is very important for a district like Gulbarga. In this paper various short term strategies are discussed to mitigate the
drought condition in the district.
Keywords: Drought, South-West monsoon, Semi-Arid, Rainfall, Strategies etc.
Life cycle cost analysis of overlay for an urban road in bangaloreeSAT Journals
Abstract
Pavements are subjected to severe condition of stresses and weathering effects from the day they are constructed and opened to traffic
mainly due to its fatigue behavior and environmental effects. Therefore, pavement rehabilitation is one of the most important
components of entire road systems. This paper highlights the design of concrete pavement with added mono fibers like polypropylene,
steel and hybrid fibres for a widened portion of existing concrete pavement and various overlay alternatives for an existing
bituminous pavement in an urban road in Bangalore. Along with this, Life cycle cost analyses at these sections are done by Net
Present Value (NPV) method to identify the most feasible option. The results show that though the initial cost of construction of
concrete overlay is high, over a period of time it proves to be better than the bituminous overlay considering the whole life cycle cost.
The economic analysis also indicates that, out of the three fibre options, hybrid reinforced concrete would be economical without
compromising the performance of the pavement.
Keywords: - Fatigue, Life cycle cost analysis, Net Present Value method, Overlay, Rehabilitation
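The Net Present Value comparison used in the life-cycle cost analysis above reduces to discounting each year's cashflow back to year 0; a minimal sketch (the rate and cashflow figures in the usage are hypothetical, not from the study):

```python
def npv(rate, cashflows):
    """Net Present Value of yearly cashflows; cashflows[0] falls at year 0
    (negative for an initial construction cost)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

# hypothetical comparison: high initial cost but low maintenance vs the reverse
concrete = npv(0.08, [-500.0, -5.0, -5.0, -5.0])
bituminous = npv(0.08, [-300.0, -90.0, -90.0, -90.0])
```

The alternative with the less negative NPV over the analysis period is the more economical one, which is the basis on which the paper ranks the overlay options.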
Laboratory studies of dense bituminous mixes ii with reclaimed asphalt materialseSAT Journals
Abstract
The growing demand on our nation's roadways over the past couple of decades, decreasing budgetary funds, and the need to
provide a safe, efficient, and cost-effective roadway system have led to a dramatic increase in the need to rehabilitate our existing
pavements and to build sustainable road infrastructure in India. The emergence of these needs, which are today's burning issues,
has become the purpose of the study.
In the present study, the samples of existing bituminous layer materials were collected from the NH-48 (Devahalli to Hassan) site. The
mixtures were designed by Marshall Method as per Asphalt institute (MS-II) at 20% and 30% Reclaimed Asphalt Pavement (RAP).
RAP material was blended with virgin aggregate such that all specimens tested for the, Dense Bituminous Macadam-II (DBM-II)
gradation as per Ministry of Roads, Transport, and Highways (MoRT&H) and cost analysis were carried out to know the economics.
Laboratory results and analysis showed that the use of recycled materials introduces significant variability in Marshall stability, and the
variability increased with the increase in RAP content. Savings can be realized from the utilization of recycled materials; as per the
methodology, the reduction in the total cost is 19% and 30% respectively, compared with the virgin mixes.
Keywords: Reclaimed Asphalt Pavement, Marshall Stability, MS-II, Dense Bituminous Macadam-II
Laboratory investigation of expansive soil stabilized with natural inorganic ...eSAT Journals
Abstract
Soil stabilization is one of the oldest techniques to improve soil properties. The literature review conducted revealed
that the use of natural inorganic stabilizers is one of the best options for soil stabilization. In this regard, an attempt has
been made to evaluate the influence of RBI-81 stabilizer on properties of black cotton soil through laboratory investigations. Black
cotton soil with varying percentages of RBI-81 viz., 0, 0.5, 1, 1.5, 2, and 2.5 percent were studied for moisture density relationships
and strength behaviour of soils. Also the effect of curing period was evaluated as literature review clearly emphasized the strength
gain of soils stabilized with RBI-81 over a period of time. The results obtained show that the unconfined compressive strength of
specimens treated with RBI-81 increased approximately by 250% for a curing period of 28 days as compared to virgin soil. Further
the CBR value improved approximately by 400%. The studies indicated an increasing trend for soil strength behaviour with
increasing percentage of RBI-81 suggesting its potential applications in soil stabilization.
Influence of reinforcement on the behavior of hollow concrete block masonry p...eSAT Journals
Abstract
Reinforced masonry was developed to exploit the strength potential of masonry and to solve its lack of tensile strength. Experimental
and analytical studies have been carried out to investigate the effect of reinforcement on the behavior of hollow concrete block
masonry prisms under compression and to predict ultimate failure compressive strength. In the numerical program, a three-dimensional
non-linear finite element (FE) model based on the micro-modeling approach is developed for both unreinforced and reinforced
masonry prisms using ANSYS (14.5). The proposed FE model uses multi-linear stress-strain relationships to model the non-linear
behavior of hollow concrete block, mortar, and grout. Willam-Warnke’s five parameter failure theory has been adopted to model the
failure of masonry materials. The comparison of the numerical and experimental results indicates that the FE models can successfully
capture the highly nonlinear behavior of the physical specimens and accurately predict their strength and failure mechanisms.
Keywords: Structural masonry, Hollow concrete block prism, grout, Compression failure, Finite element method,
Numerical modeling.
Influence of compaction energy on soil stabilized with chemical stabilizereSAT Journals
Abstract
Increase in traffic along with heavier magnitude of wheel loads cause rapid deterioration in pavements. There is a need to improve
density, strength of soil subgrade and other pavement layers. In this study an attempt is made to improve the properties of locally
available loamy soil using twin approaches viz., i) increasing the compaction of soil and ii) treating the soil with chemical stabilizer.
Laboratory studies are carried out on both untreated and treated soil samples compacted by different compaction efforts. Studies
show that an increase in compaction effort results in an increase in soil density. However, in soil treated with the chemical stabilizer, the rate of
increase in density is not significant. The soil treated with the chemical stabilizer exhibits improvement in both strength and performance
properties.
Keywords: compaction, density, subgrade stabilization, resilient modulus
Geographical information system (gis) for water resources managementeSAT Journals
Abstract
Water resources projects inherently have overlapping and at times conflicting objectives. These projects are often of varied sizes,
ranging from major projects with command areas of millions of hectares to very small projects implemented at the local level. Thus,
in all these projects there is seldom proper coordination which is essential for ensuring collective sustainability.
Integrated watershed development and management is the accepted answer, but it in turn requires a comprehensive framework that can
enable a planning process involving all the stakeholders at different levels and scales. Such a unified hydrological
framework is essential to evaluate the cause and effect of all the proposed actions within the drainage basins.
The present paper describes a hydrological framework developed in the form of a Hydrologic Information System (HIS) which is
intended to meet the specific information needs of the various line departments of a typical State connected with water related aspects.
The HIS consists of a hydrologic information database coupled with tools for collating primary and secondary data and tools for
analyzing and visualizing the data and information. The HIS also incorporates hydrological model base for indirect assessment of
various entities of water balance in space and time. The framework would be maintained and updated to reflect fully the most
accurate ground truth data and the infrastructure requirements for planning and management.
Keywords: Hydrological Information System (HIS); WebGIS; Data Model; Web Mapping Services
Forest type mapping of bidar forest division, karnataka using geoinformatics ...eSAT Journals
Abstract
The study demonstrates the potential of satellite remote sensing techniques for the generation of baseline information on forest types,
including tree plantation details, in the Bidar forest division, Karnataka, covering an area of 5814.60 sq. km. Analysis of the
satellite data in the study area reveals that about 84% of the total area is covered by crop land, 1.778% by dry deciduous forest
and 1.38% by mixed plantation, which is very threatening to the environmental stability of the forest; future plantation sites
have also been mapped. With the use of the latest geo-informatics technology, the proper and exact condition of the trees can be
observed and necessary precautions can be taken for future plantation works in an appropriate manner.
Keywords:-RS, GIS, GPS, Forest Type, Tree Plantation
Factors influencing compressive strength of geopolymer concreteeSAT Journals
Abstract
The effects of several factors on the compressive strength of fly ash based geopolymer concrete were studied, along with a
cost comparison with normal concrete. The test variables were molarity of sodium hydroxide (NaOH) of 8M, 14M and 16M, ratio of
NaOH to sodium silicate (Na2SiO3) of 1, 1.5, 2 and 2.5, alkaline liquid to fly ash ratio of 0.35 and 0.40, and replacement of water in
Na2SiO3 solution by 10%, 20% and 30% were used in the present study. The test results indicated that the highest compressive
strength 54 MPa was observed for 16M of NaOH, ratio of NaOH to Na2SiO3 2.5 and alkaline liquid to fly ash ratio of 0.35. Lowest
compressive strength of 27 MPa was observed for 8M of NaOH, ratio of NaOH to Na2SiO3 is 1 and alkaline liquid to fly ash ratio of
0.40. An alkaline liquid to fly ash ratio of 0.35 with water replacement of 10% and 30% for 8M and 16M NaOH resulted in
compressive strength of 36 MPa and 20 MPa respectively. Superplasticiser dosage of 2 % by weight of fly ash has given higher
strength in all cases.
Keywords: compressive strength, alkaline liquid, fly ash
Experimental investigation on circular hollow steel columns in filled with li...eSAT Journals
Abstract
Composite Circular hollow Steel tubes with and without GFRP infill for three different grades of Light weight concrete are tested for
ultimate load capacity and axial shortening under cyclic loading. Steel tubes are compared for different lengths, cross sections and
thickness. Specimens were tested separately after adopting Taguchi’s L9 (Latin Squares) Orthogonal array in order to save the initial
experimental cost on number of specimens and experimental duration. Analysis was carried out using ANN (Artificial Neural
Network) technique with the assistance of Mini Tab- a statistical soft tool. Comparison for predicted, experimental & ANN output is
obtained from linear regression plots. From this research study, it can be concluded that the cross-sectional area of the steel tube has the most
significant effect on ultimate load carrying capacity; as the length of the steel tube increased, the load carrying capacity decreased; and ANN
modeling predicted acceptable results. Thus the ANN tool can be utilized for predicting the ultimate load carrying capacity of composite
columns.
Keywords: Light weight concrete, GFRP, Artificial Neural Network, Linear Regression, Back propagation, orthogonal
Array, Latin Squares
Experimental behavior of circular hsscfrc filled steel tubular columns under ...eSAT Journals
Abstract
This paper presents an outlook on the experimental behavior of circular, concentrically loaded, self-consolidating fibre-reinforced
concrete filled steel tube (HSSCFRC) columns and a comparison with predictive formulas. Forty-five specimens were
tested. The main parameters varied in the tests are: (1) percentage of fiber (2) tube diameter or width to wall thickness ratio (D/t
from 15 to 25); (3) L/d ratio from 2.97 to 7.04. The results from these predictions were compared with the
experimental data and were also validated in this study.
Keywords: Self-compacting concrete; Concrete-filled steel tube; axial load behavior; Ultimate capacity.
Evaluation of punching shear in flat slabseSAT Journals
Abstract
Flat-slab construction has been widely used in construction today because of many advantages that it offers. The basic philosophy in
the design of flat slab is to consider only gravity forces; this method ignores the effect of punching shear due to unbalanced moments
at the slab column junction, which is critical. An attempt has been made to generate generalized design sheets which account for both
punching shear due to gravity loads and unbalanced moments for cases (a) interior column; (b) edge column (bending perpendicular
to shorter edge); (c) edge column (bending parallel to shorter edge); (d) corner column. These design sheets are prepared as per
codal provisions of IS 456-2000. These design sheets will be helpful in calculating the shear reinforcement to be provided at the
critical section which is ignored in many design offices. Apart from its usefulness in evaluating punching shear and the necessary
shear reinforcement, the design sheets developed will enable the designer to fix the depth of flat slab during the initial phase of the
design.
Keywords: Flat slabs, punching shear, unbalanced moment.
Evaluation of performance of intake tower dam for recent earthquake in indiaeSAT Journals
Abstract
Intake towers are typically tall, hollow, reinforced concrete structures that form the entrance to reservoir outlet works. A parametric
study on the dynamic behavior of circular cylindrical towers can be carried out to examine the effect of depth of submergence, wall thickness
and slenderness ratio, and also the effect on the tower of dynamic analysis with time history functions for different soil conditions, and,
following Goyal and Chopra, to account for the interaction effects of the added hydrodynamic mass of the surrounding and inside water in
the intake tower of the dam.
Keywords: Hydrodynamic mass, Depth of submergence, Reservoir, Time history analysis
Evaluation of operational efficiency of urban road network using travel time ...eSAT Journals
Abstract
Efficiency of the road network system is analyzed by travel time reliability measures. The study focuses on important measures of
travel time reliability and prioritizes the Tiruchirappalli road network. Traffic volume and travel time were collected using the license plate
matching method. Travel time measures were estimated from the average travel time and the 95th percentile travel time. The effect of
non-motorized vehicles on the efficiency of the road system was evaluated. A relation between buffer time index and traffic volume was
established. A travel time model has been developed and the travel time measures were validated. Then the service quality of road
sections in the network was graded based on travel time reliability measures.
Keywords: Buffer Time Index (BTI); Average Travel Time (ATT); Travel Time Reliability (TTR); Buffer Time (BT).
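The reliability measures named in the abstract and keywords (ATT, 95th percentile travel time, Buffer Time Index) can be computed directly from observed travel times; a minimal sketch using the common definition BTI = (TT95 - ATT) / ATT, which the study may refine:

```python
import numpy as np

def travel_time_measures(tt):
    """Average travel time (ATT), 95th-percentile travel time (TT95) and
    Buffer Time Index from a sample of observed travel times."""
    tt = np.asarray(tt, dtype=float)
    att = tt.mean()
    tt95 = np.percentile(tt, 95)          # "95th travel time" of the abstract
    return att, tt95, (tt95 - att) / att
```

A higher BTI means travellers must budget proportionally more extra time to arrive on time 95% of the time, which is why it serves as a grading measure for road sections.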
Estimation of surface runoff in nallur amanikere watershed using scs cn methodeSAT Journals
Abstract
The development of watershed aims at productive utilization of all the available natural resources in the entire area extending from
ridge line to stream outlet. The per capita availability of land for cultivation has been decreasing over the years. Therefore, water and
the related land resources must be developed, utilized and managed in an integrated and comprehensive manner. Remote sensing and
GIS techniques are being increasingly used for planning, management and development of natural resources. The study area, Nallur Amanikere watershed, geographically lies between 11°38′ and 11°52′ N latitude and 76°30′ and 76°50′ E longitude, with an area of 415.68 sq. km. Thematic layers such as land use/land cover and soil maps were derived from remotely sensed data and overlaid in ArcGIS software to assign a curve number to each polygon. The daily rainfall data of six rain gauge stations in and around
the watershed (2001-2011) was used to estimate the daily runoff from the watershed using Soil Conservation Service - Curve Number
(SCS-CN) method. The runoff estimated from the SCS-CN model was then used to know the variation of runoff potential with different
land use/land cover and with different soil conditions.
Keywords: Watershed, Nallur watershed, Surface runoff, Rainfall-Runoff, SCS-CN, Remote Sensing, GIS.
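The SCS-CN method referenced in the abstract uses the standard curve-number relations. A minimal sketch in Python; the CN value and rainfall depths below are illustrative assumptions, not the study's data:

```python
# SCS Curve Number runoff (SI units, depths in mm):
#   S  = 25400/CN - 254          (potential maximum retention)
#   Ia = 0.2 * S                 (initial abstraction)
#   Q  = (P - Ia)^2 / (P - Ia + S)   if P > Ia, else Q = 0

def scs_cn_runoff(p_mm, cn):
    """Direct runoff depth Q (mm) for daily rainfall P (mm) and curve number CN."""
    s = 25400.0 / cn - 254.0
    ia = 0.2 * s
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# illustrative daily rainfall depths for a polygon with an assumed CN = 80
for p in [5.0, 25.0, 60.0]:
    print(f"P={p:5.1f} mm -> Q={scs_cn_runoff(p, 80):6.2f} mm")
```

The composite CN for each polygon comes from the overlaid land use/land cover and soil layers; daily runoff is then summed over the watershed.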
Estimation of morphometric parameters and runoff using rs & gis techniqueseSAT Journals
Abstract
Land and water are the two vital natural resources, the optimal management of these resources with minimum adverse environmental
impact are essential not only for sustainable development but also for human survival. Satellite remote sensing with geographic
information system has a pragmatic approach to map and generate spatial input layers of predicting response behavior and yield of
watershed. Hence, in the present study an attempt has been made to understand the hydrological process of the catchment at the
watershed level by drawing the inferences from moprhometric analysis and runoff. The study area chosen for the present study is
Yagachi catchment situated in Chickamaglur and Hassan district lies geographically at a longitude 75⁰52’08.77”E and
13⁰10’50.77”N latitude. It covers an area of 559.493 Sq.km. Morphometric analysis is carried out to estimate morphometric
parameters at Micro-watershed to understand the hydrological response of the catchment at the Micro-watershed level. Daily runoff
is estimated using USDA SCS curve number model for a period of 10 years from 2001 to 2010. The rainfall runoff relationship of the
study shows there is a positive correlation.
Keywords: morphometric analysis, runoff, remote sensing and GIS, SCS - method
Effect of variation of plastic hinge length on the results of non linear anal...eSAT Journals
Abstract
The nonlinear static procedure, also well known as pushover analysis, is a method wherein monotonically increasing loads are applied to the structure until it is unable to resist any further load. It is a popular tool for seismic performance evaluation of existing and new structures. In the literature, a lot of research has been carried out on conventional pushover analysis, and after identifying its deficiencies, efforts have been made to improve it. However, actual test results to verify analytically obtained pushover results are rarely available. It has been found that some amount of variation is always expected in the seismic demand prediction of pushover analysis. An initial study is carried out considering user-defined hinge properties and the default hinge length. An attempt is then made to assess the variation of pushover analysis results by considering user-defined hinge properties and various hinge length formulations available in the literature, with the results compared against experimental results from a test carried out on a G+2 storied RCC framed structure. For the present study, two geometric models, viz. a bare frame and a rigid frame model, are considered, and it is found that the results of pushover analysis are very sensitive to the geometric model and hinge length adopted.
Keywords: Pushover analysis, Base shear, Displacement, hinge length, moment curvature analysis
Effect of use of recycled materials on indirect tensile strength of asphalt c...eSAT Journals
Abstract
Depletion of natural resources and aggregate quarries for road construction makes procuring materials a serious problem. Hence, recycling or reuse of material is beneficial. With the present emphasis on sustainable construction, recycling of asphalt pavements is one of the effective and proven rehabilitation processes. For the laboratory investigations, reclaimed asphalt pavement (RAP) from NH-4 and crumb rubber modified binder (CRMB-55) were used. Foundry waste was used as a replacement for conventional filler. Laboratory tests were conducted on asphalt concrete mixes with 30, 40, 50, and 60 percent replacement with RAP. These test results were compared with conventional mixes and with asphalt concrete mixes using fully binder-extracted RAP aggregates. Mix design was carried out by the Marshall method. The Marshall tests indicated the highest stability values for asphalt concrete (AC) mixes with 60% RAP. The optimum binder content (OBC) decreased with an increase in RAP in AC mixes. The indirect tensile strength (ITS) of AC mixes with RAP was also found to be higher than that of conventional AC mixes at 30°C.
Keywords: Reclaimed asphalt pavement, Foundry waste, Recycling, Marshall Stability, Indirect tensile strength.
Hierarchical Digital Twin of a Naval Power SystemKerry Sado
A hierarchical digital twin of a Naval DC power system has been developed and experimentally verified. Similar to other state-of-the-art digital twins, this technology creates a digital replica of the physical system executed in real-time or faster, which can modify hardware controls. However, its advantage stems from distributing computational efforts by utilizing a hierarchical structure composed of lower-level digital twin blocks and a higher-level system digital twin. Each digital twin block is associated with a physical subsystem of the hardware and communicates with a singular system digital twin, which creates a system-level response. By extracting information from each level of the hierarchy, power system controls of the hardware were reconfigured autonomously. This hierarchical digital twin development offers several advantages over other digital twins, particularly in the field of naval power systems. The hierarchical structure allows for greater computational efficiency and scalability while the ability to autonomously reconfigure hardware controls offers increased flexibility and responsiveness. The hierarchical decomposition and models utilized were well aligned with the physical twin, as indicated by the maximum deviations between the developed digital twin hierarchy and the hardware.
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressionsVictor Morales
K8sGPT is a tool that analyzes and diagnoses Kubernetes clusters. This presentation was used to share the requirements and dependencies to deploy K8sGPT in a local environment.
Welcome to WIPAC Monthly the magazine brought to you by the LinkedIn Group Water Industry Process Automation & Control.
In this month's edition, along with this month's industry news, and to celebrate the 13 years since the group was created, we have articles including:
A case study of the use of Advanced Process Control at the wastewater treatment works at Lleida in Spain
A look back at an article on smart wastewater networks, to see how the industry has measured up in the interim around the adoption of Digital Transformation in the Water Industry.
An Approach to Detecting Writing Styles Based on Clustering Techniquesambekarshweta25
An Approach to Detecting Writing Styles Based on Clustering Techniques
Authors:
-Devkinandan Jagtap
-Shweta Ambekar
-Harshit Singh
-Nakul Sharma (Assistant Professor)
Institution:
VIIT Pune, India
Abstract:
This paper proposes a system to differentiate between human-generated and AI-generated texts using stylometric analysis. The system analyzes text files and classifies writing styles by employing various clustering algorithms, such as k-means, k-means++, hierarchical, and DBSCAN. The effectiveness of these algorithms is measured using silhouette scores. The system successfully identifies distinct writing styles within documents, demonstrating its potential for plagiarism detection.
Introduction:
Stylometry, the study of linguistic and structural features in texts, is used for tasks like plagiarism detection, genre separation, and author verification. This paper leverages stylometric analysis to identify different writing styles and improve plagiarism detection methods.
Methodology:
The system includes data collection, preprocessing, feature extraction, dimensional reduction, machine learning models for clustering, and performance comparison using silhouette scores. Feature extraction focuses on lexical features, vocabulary richness, and readability scores. The study uses a small dataset of texts from various authors and employs algorithms like k-means, k-means++, hierarchical clustering, and DBSCAN for clustering.
Results:
Experiments show that the system effectively identifies writing styles, with silhouette scores indicating reasonable to strong clustering when k=2. As the number of clusters increases, the silhouette scores decrease, indicating a drop in accuracy. K-means and k-means++ perform similarly, while hierarchical clustering is less optimized.
Conclusion and Future Work:
The system works well for distinguishing writing styles with two clusters but becomes less accurate as the number of clusters increases. Future research could focus on adding more parameters and optimizing the methodology to improve accuracy with higher cluster values. This system can enhance existing plagiarism detection tools, especially in academic settings.
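The silhouette scores used above to compare clusterings have a simple closed form: for each point, s = (b − a)/max(a, b), where a is the mean distance to points in its own cluster and b the mean distance to the nearest other cluster. A self-contained sketch; the feature values and labels are hypothetical, and real stylometric vectors are multi-dimensional:

```python
# Silhouette score from scratch for a tiny 1-D "stylometric feature" example.
# Scores near 1 indicate well-separated clusters; near 0, overlapping ones.

def silhouette_score(points, labels):
    clusters = set(labels)
    scores = []
    for i, (p, lab) in enumerate(zip(points, labels)):
        def mean_dist(cluster):
            ds = [abs(p - q) for j, q in enumerate(points)
                  if labels[j] == cluster and j != i]
            return sum(ds) / len(ds)
        a = mean_dist(lab)                                  # own-cluster cohesion
        b = min(mean_dist(c) for c in clusters if c != lab)  # nearest other cluster
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

# hypothetical per-document feature values (e.g. mean sentence length), k = 2
feats  = [10.0, 11.0, 12.0, 30.0, 31.0, 32.0]
labels = [0, 0, 0, 1, 1, 1]
print(f"silhouette = {silhouette_score(feats, labels):.3f}")  # close to 1: well separated
```

As the paper observes, forcing more clusters onto the same data typically lowers this score, since the natural separation is lost.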
NUMERICAL SIMULATIONS OF HEAT AND MASS TRANSFER IN CONDENSING HEAT EXCHANGERS...ssuser7dcef0
Power plants release a large amount of water vapor into the atmosphere through the stack. The flue gas can be a potential source for obtaining much-needed cooling water for a power plant. If a power plant could recover and reuse a portion of this moisture, it could reduce its total cooling water intake requirement. One of the most practical ways to recover water from flue gas is to use a condensing heat exchanger. The power plant could also recover latent heat due to condensation, as well as sensible heat due to lowering the flue gas exit temperature. Additionally, harmful acids released from the stack can be reduced in a condensing heat exchanger by acid condensation. Condensation of vapors in flue gas is a complicated phenomenon, since heat and mass transfer of water vapor and various acids occur simultaneously in the presence of non-condensable gases such as nitrogen and oxygen. Design of a condenser depends on knowledge and understanding of the heat and mass transfer processes. A computer program for numerical simulations of water (H2O) and sulfuric acid (H2SO4) condensation in a flue gas condensing heat exchanger was developed using MATLAB. Governing equations based on mass and energy balances for the system were derived to predict variables such as flue gas exit temperature, cooling water outlet temperature, mole fractions, and condensation rates of water and sulfuric acid vapors. The equations were solved using an iterative solution technique with calculations of heat and mass transfer coefficients and physical properties.
Understanding Inductive Bias in Machine LearningSUTEJAS
This presentation explores the concept of inductive bias in machine learning. It explains how algorithms come with built-in assumptions and preferences that guide the learning process. You'll learn about the different types of inductive bias and how they can impact the performance and generalizability of machine learning models.
The presentation also covers the positive and negative aspects of inductive bias, along with strategies for mitigating potential drawbacks. We'll explore examples of how bias manifests in algorithms like neural networks and decision trees.
By understanding inductive bias, you can gain valuable insights into how machine learning models work and make informed decisions when building and deploying them.
Literature Review Basics and Understanding Reference Management.pptxDr Ramhari Poudyal
Three-day training on academic research focuses on analytical tools at United Technical College, supported by the University Grant Commission, Nepal. 24-26 May 2024
6th International Conference on Machine Learning & Applications (CMLA 2024)ClaraZara1
6th International Conference on Machine Learning & Applications (CMLA 2024) will provide an excellent international forum for sharing knowledge and results in the theory, methodology and applications of Machine Learning.
Saudi Arabia stands as a titan in the global energy landscape, renowned for its abundant oil and gas resources. It's the largest exporter of petroleum and holds some of the world's most significant reserves. Let's delve into the top 10 oil and gas projects shaping Saudi Arabia's energy future in 2024.
Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)MdTanvirMahtab2
This presentation is about the working procedure of Shahjalal Fertilizer Company Limited (SFCL), a government-owned company of Bangladesh Chemical Industries Corporation under the Ministry of Industries.
Sachpazis:Terzaghi Bearing Capacity Estimation in simple terms with Calculati...Dr.Costas Sachpazis
Terzaghi's soil bearing capacity theory, developed by Karl Terzaghi, is a fundamental principle in geotechnical engineering used to determine the bearing capacity of shallow foundations. This theory provides a method to calculate the ultimate bearing capacity of soil, which is the maximum load per unit area that the soil can support without undergoing shear failure. The calculation HTML code is included.
DESIGN AND ANALYSIS OF A CAR SHOWROOM USING E TABS
Facial expression identification using four-bit co-occurrence matrix features and k-NN classifier
1. IJRET: International Journal of Research in Engineering and Technology eISSN: 2319-1163 | pISSN: 2321-7308
_______________________________________________________________________________________
Volume: 04 Issue: 12 | Dec-2015, Available @ http://www.ijret.org 81
FACIAL EXPRESSION IDENTIFICATION USING FOUR-BIT CO-OCCURRENCE MATRIX FEATURES AND K-NN CLASSIFIER
Bonagiri C S K Sunil Kumar1, V Bala Shankar2, Pullela S V V S R Kumar3
1,2,3 Department of Computer Science & Engineering, Aditya College of Engineering, Surampalem, East Godavari District, Andhra Pradesh, India
Abstract
The present paper proposes a novel approach for facial expression identification based on statistical features extracted from a 4-bit co-occurrence matrix. First, the color image is converted into a grey-level image using a 4-bit quantization technique. To reduce the complexity of the grey-level co-occurrence matrix, the present paper generates the co-occurrence matrix on the 4-bit grey image produced by this quantization technique. From the 4-bit co-occurrence matrix, four features are extracted (contrast, homogeneity, energy and correlation) to generate the feature vector. A k-NN classifier is then used to classify the test image: the four statistical features are extracted from the test facial image and the type of expression is identified. The present method obtained better comparative results when tested on the Japanese Female Facial Expression (JAFFE) database.
Key Words: Facial Expression identification, Classification, k-NN classifier, Quantization, and GLCM
--------------------------------------------------------------------***----------------------------------------------------------------------
1. INTRODUCTION
One of the rapidly growing fields of research is facial expression recognition, due to continually growing interest in applications for automatic human behavior analysis and new technologies for human-machine communication and multimedia retrieval. Most systems for facial expression recognition and analysis proposed in the literature focus on detecting the occurrence of expressions, often using basic emotions or the Facial Action Coding System (FACS) Action Units (AUs) [1]. In reality, expressions vary greatly in intensity, and intensity is often a strong cue for interpreting the meaning of an expression.
Many approaches are available in the literature for the recognition of facial expressions [2, 3]. Holistic, model-based techniques analyze the face as one global pattern, without decomposing it into different components. A feature vector representing the expression information is obtained by preprocessing the face image and applying different transformations for facial expression feature extraction, such as Gabor wavelets [4], curvelets [5, 6], and local binary patterns [7, 8, 18, 19]. Principal component analysis (PCA) is mainly used for dimensionality reduction, and multilayer neural networks or support vector machines are widely used for image classification [9, 10]. Analytic or geometric feature-based methods, in contrast, divide the face into smaller components or subsections from which the expression is identified. Characteristic points are detected, and the distances between them are calculated to form feature vectors or to build a model of appearance [11, 12, 13], often combined with support vector machines (SVMs) [14].
The above methods have their own disadvantages and demerits, especially in terms of computational time and recognition percentage. To overcome these disadvantages, the present paper derives a scheme for facial expression identification based on statistical features derived from a 16-grey-value co-occurrence matrix. The paper is organized into 4 sections. Section 2 describes the proposed method, section 3 describes the results and discussions, and conclusions are given in section 4.
2. METHODOLOGY: FACIAL EXPRESSION
RECOGNITION SYSTEM
The proposed method mainly consists of 5 steps. In the first step, the given input image is cropped so that it covers all the facial skin of the image. In the second step, the cropped color image is converted into a 4-bit grey image using the quantization technique. In the third step, the co-occurrence matrix, which consists of 16 shades, is generated. The 4 statistical features are extracted and the feature vector (FV) is generated in the fourth step. Based on the FV values, a k-NN classifier recognizes the type of expression in the last step. The block diagram of the facial expression recognition system is shown in figure 1.
Color Image → Cropped Image → 4-bit quantization → 16-shade grey-level image → Generate the co-occurrence matrix → Extract the 4 statistical features → k-NN classifier → Identify the facial expression
Fig -1: Block Diagram of the Facial expression
Identification
Step 1: Crop the given input image so that it covers all the facial skin of the image.
Step 2: Crop the grey-scale image: The grey facial image is cropped to cover the whole skin area of the facial image. Cropping is necessary for efficient identification of the expression. The cropped image consists of the forehead, two eyes, nose, mouth, and chin area. The appearance of all of these changes according to the type of expression. Figure 2 shows an example of the original facial image and the cropped image.
Fig -2: (a) original image (b) cropped image
Step 3: Convert the color image into a 4-bit grey-scale image using the 4-bit quantization technique.
The size of the grey-level co-occurrence matrix (GLCM) depends on the number of shades in the image. Generally, a grey-scale image has 256 shades, and the size of the GLCM is then 256×256. So, in order to reduce the complexity of the GLCM, the present paper uses 16 shades to generate the co-occurrence matrix. 16 shades can be represented using 4 bits, which is accomplished by a quantization technique; the present paper uses a 4-bit binary code quantization technique.
In the 4-bit binary code quantization technique, a 16-shade grey image is computed from the original 256-grey-shade image, and the statistical information of the facial image is then calculated to describe the features of the image.
To extract grey-level features from the color information, the present paper quantizes each color channel into bins to obtain 16 grey levels. The index matrix of the 16-value grey image is denoted as GS(x, y). The grey quantization is done using a 4-bit binary code of 16 colors as follows:

GS(x, y) = 4·I(Red) + 2·I(Green) + I(Blue)

where, for each channel C in {Red, Green, Blue}:

I(C) = 0, for 0 ≤ C ≤ 16
I(C) = k, for (4k + 1) ≤ C ≤ 4(k + 1), with k = 1, 2, 3, 4

Therefore, each value of GS(x, y) is a 4-bit binary code ranging from 0 to 15. The process of 4-bit grey-scale image generation is depicted in figure 3.
(a) Sample grey-level image:
139 128 131 134 132
142 140 127 133 138
142 124 129 145 142
141 129 128 131 133
137 134 134 137 137

(b) Generated 4-bit, 16-shade grey image:
9 8 8 8 8
9 9 8 8 9
9 8 8 9 9
9 8 8 8 8
9 8 8 9 9

Fig -3: 4-bit quantization procedure: (a) sample grey-level image, (b) generated 4-bit 16-shade grey image
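The reduction from 256 shades to 16 can be sketched with plain uniform binning (shade // 16). Note this is a simplification: the paper's own scheme builds the 4-bit code from quantized R, G and B channels, so the indices produced here need not match Fig. 3(b) exactly.

```python
# Reducing a 256-shade grey image to 16 shades (4 bits) by uniform binning.
# Whatever the binning scheme, the payoff is the same: the co-occurrence
# matrix shrinks from 256x256 to 16x16.

def quantize_16(grey_rows):
    """Map each 0-255 grey value to a 0-15 shade index."""
    return [[v // 16 for v in row] for row in grey_rows]

# sample 5x5 grey-level patch from Fig. 3(a)
patch = [
    [139, 128, 131, 134, 132],
    [142, 140, 127, 133, 138],
    [142, 124, 129, 145, 142],
    [141, 129, 128, 131, 133],
    [137, 134, 134, 137, 137],
]
for row in quantize_16(patch):
    print(row)
```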
Step 4: Generate the 4-bit grey-scale co-occurrence matrix.
The grey-level co-occurrence matrix (GLCM) is one of the simplest techniques for describing the texture of an image [15]. The GLCM gives complete and detailed information about the image. Generally, the size of the co-occurrence matrix depends on the number of grey levels in the image: for a 256-level grey image, the size of the GLCM is 256×256. This is one of the drawbacks of the GLCM, as it takes more computational time to generate the co-occurrence matrix. To avoid this problem, the present paper generates the co-occurrence matrix on the 16-shade image, so that the size of the GLCM is reduced to 16×16, which reduces the computational cost drastically. The generated matrix is called the Four-bit Co-occurrence Matrix (FCoM).
A set of statistical features defined by Haralick [15] is extracted from the FCoM of the facial image. The features used in this method are energy, contrast, homogeneity, and correlation, shown in equations 1 to 4, where p(i, j) is the normalized FCoM entry and μ and σ denote its row/column means and standard deviations:

Energy = Σi Σj p(i, j)²                                  (1)
Contrast = Σi Σj (i − j)² p(i, j)                        (2)
Homogeneity = Σi Σj p(i, j) / (1 + |i − j|)              (3)
Correlation = Σi Σj (i − μi)(j − μj) p(i, j) / (σi σj)   (4)

The proposed FCoM thus combines the GLCM's complete description of the facial expression image with a reduced statistical computation cost for processing the facial images.
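A minimal sketch of building the 16×16 FCoM and extracting the four Haralick features, applied to the Fig. 3(b) patch. The offset choice (horizontal neighbors at distance 1, symmetric counting) is an assumption, since the paper does not state it:

```python
# Build a 16x16 co-occurrence matrix (FCoM) for horizontal neighbor pairs,
# normalize it, and extract the four Haralick features used in the paper.

def fcom_features(img, levels=16):
    # count co-occurring pixel pairs (a, b) that are horizontal neighbors
    counts = [[0] * levels for _ in range(levels)]
    for row in img:
        for a, b in zip(row, row[1:]):
            counts[a][b] += 1
            counts[b][a] += 1            # make the matrix symmetric
    total = sum(sum(r) for r in counts)
    p = [[c / total for c in r] for r in counts]

    rng = range(levels)
    energy   = sum(p[i][j] ** 2 for i in rng for j in rng)
    contrast = sum((i - j) ** 2 * p[i][j] for i in rng for j in rng)
    homog    = sum(p[i][j] / (1 + abs(i - j)) for i in rng for j in rng)
    mu_i = sum(i * p[i][j] for i in rng for j in rng)
    mu_j = sum(j * p[i][j] for i in rng for j in rng)
    sd_i = sum((i - mu_i) ** 2 * p[i][j] for i in rng for j in rng) ** 0.5
    sd_j = sum((j - mu_j) ** 2 * p[i][j] for i in rng for j in rng) ** 0.5
    corr = sum((i - mu_i) * (j - mu_j) * p[i][j]
               for i in rng for j in rng) / (sd_i * sd_j)
    return [corr, contrast, energy, homog]   # feature vector (FV)

# the 4-bit image from Fig. 3(b)
img4 = [[9, 8, 8, 8, 8],
        [9, 9, 8, 8, 9],
        [9, 8, 8, 9, 9],
        [9, 8, 8, 8, 8],
        [9, 8, 8, 9, 9]]
fv = fcom_features(img4)
print("FV [correlation, contrast, energy, homogeneity]:", fv)
```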
3. RESULTS AND DISCUSSIONS
To demonstrate the functionality of the proposed scheme, a database containing 213 facial expression images of females from the Japanese Female Facial Expression (JAFFE) database was established. The database was collected by Kamachi and Gyoba at Kyushu University, Japan [16]. The original images have been rescaled and cropped such that the eyes are roughly at the same position, within a distance of 60 pixels, in the final images (resolution: 256 × 256 pixels). The number of images in each of the seven categories of expressions (neutral, sadness, surprise, happiness, anger, disgust and fear) is roughly the same. A few of them are shown in figure 4.
Fig -4: Facial expression database: Examples
In the proposed method, the test images are grouped into seven categories based on expression (neutral, sadness, surprise, happiness, disgust, anger, and fear). The four statistical features are extracted from the JAFFE database images and the feature vectors are generated; the FCoM results of the seven categories are stored in the feature vectors. Each feature vector is a representation of the features of the JAFFE images. The statistical features of the seven groups of facial expressions are shown in tables 1, 2, 3, 4, 5, 6, and 7 respectively. To demonstrate the efficiency of the proposed method, 100 different facial expression images were taken from the JAFFE database, a Google database, and scanned images; the statistical features were extracted from these images and the results stored in test vectors. The minimum distance between each test vector and the stored feature vectors is estimated and the result is stored in the library. The test images are then classified into the appropriate expression category using a minimum-distance k-nearest neighbor classifier. Some of the successful classification results of the test data based on the proposed method are shown in table 8. The classification results of the proposed method are shown in table 9, and the corresponding classification graph is shown in figure 5.
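The minimum-distance k-NN step can be sketched as follows. Only the first two library vectors below come from Table 1; the remaining library entries and the test vector are hypothetical stand-ins for the stored feature library:

```python
# Minimum-distance k-NN over the 4-element feature vectors
# [correlation, contrast, energy, homogeneity].

def knn_classify(test_fv, library, k=1):
    """library: list of (feature_vector, expression_label) pairs."""
    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5  # Euclidean distance
    ranked = sorted(library, key=lambda item: dist(test_fv, item[0]))
    votes = [label for _, label in ranked[:k]]
    return max(set(votes), key=votes.count)   # majority vote among k nearest

library = [
    ([-0.0358, 32.33, 0.028, 0.2128], "anger"),     # Img.104, Table 1
    ([0.0499, 30.75, 0.032, 0.2439], "anger"),      # Img.127, Table 1
    ([0.1200, 25.10, 0.045, 0.3100], "happiness"),  # hypothetical
    ([0.1100, 24.60, 0.047, 0.3050], "happiness"),  # hypothetical
]
test_fv = [0.0400, 30.20, 0.030, 0.2400]  # hypothetical test image features
print(knn_classify(test_fv, library, k=3))
```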
Table 1: Feature set values of Anger expression images
S.No  Image Name  Correlation  Contrast  Energy  Homogeneity
1 Img.104 -0.0358 32.33 0.028 0.2128
2 Img.105 -0.0317 31.39 0.028 0.2117
3 Img.106 -0.0683 31.67 0.033 0.2097
4 Img.125 0.0313 30.27 0.035 0.2404
5 Img.127 0.0499 30.75 0.032 0.2439
6 Img.146 0.0306 30.14 0.032 0.2407
7 Img.147 0.0387 29.93 0.027 0.2315
8 Img.148 0.0496 29.47 0.027 0.2280
9 Img.167 0.0013 33.94 0.037 0.2421
10 Img.168 0.0120 32.72 0.035 0.2350
11 Img.169 0.0117 32.61 0.032 0.2375
12 Img.17 0.0518 29.65 0.038 0.2616
13 Img.18 0.0568 30.45 0.036 0.2459
14 Img.19 0.0621 30.19 0.043 0.2627
15 Img.190 0.0595 29.75 0.033 0.2477
16 Img.191 0.0577 30.92 0.035 0.2479
17 Img.192 0.0484 30.58 0.034 0.2439
18 Img.211 0.0175 32.86 0.037 0.2344
19 Img.212 0.0560 31.45 0.037 0.2503
20 Img.213 0.0582 30.83 0.032 0.2492
Table 9: Classification results of the test database
Database  No. of Images  Correctly Classified  Not Correctly Classified  % Recognition
JAFFE     60             59                    1                         98.33
Google    20             19                    1                         95.00
Scanned   20             19                    1                         95.00
4. CONCLUSIONS
The present paper proposed a new scheme for generating a grey-scale image with 16 shades. The novelty of the present approach is that it takes less time to generate the co-occurrence matrix, because the image has only 16 grey levels. The proposed method achieved a recognition rate of about 96.11% when applied to 3 different databases.
Fig -5: Classification graph of the proposed method
REFERENCES
[1] M. Bartlett, G. Littlewort, M. Frank, C. Lainscsek, I. Fasel, and J. Movellan, "Automatic recognition of facial actions in spontaneous expressions," Journal of Multimedia, vol. 1, no. 6, pp. 22-35, 2006.
[2] B. Fasel and J. Luettin, "Automatic facial expression analysis: a survey," Pattern Recognition, vol. 36, no. 1, pp. 259-275, 2003.
[3] Z. Zeng, M. Pantic, G. I. Roisman, and T. S. Huang, "A survey of affect recognition methods: audio, visual, and spontaneous expressions," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 31, no. 1, pp. 39-58, 2009.
[4] Juliano J. Bazzo and Marcus V. Lamar, "Recognizing facial actions using Gabor wavelets with neutral face average difference," Sixth IEEE International Conference on Automatic Face and Gesture Recognition (FGR'04), 2004.
[5] Xianxing Wu and Jieyu Zhao, "Curvelet feature extraction for face recognition and facial expression recognition," Sixth International Conference on Natural Computation, pp. 1212-1216, 2010.
[6] E. Candés, L. Demanet, D. Donoho, and L. Ying, "Fast discrete curvelet transforms," Applied and Computational Mathematics, Caltech, Pasadena, 2006.
[7] S. Singh, R. Maurya, and A. Mittal, "Application of complete local binary pattern method for facial expression recognition," IEEE Proceedings of the 4th International Conference on Intelligent Human Computer Interaction, Kharagpur, India, December 27-29, 2012.
[8] R. Hablani, N. Chaudhari, and S. Tanwani, "Recognition of facial expressions using local binary patterns of important facial parts," International Journal of Image Processing (IJIP), vol. 7, no. 2, 2013.
[9] M. Pantic and L. J. M. Rothkrantz, "Expert system for automatic analysis of facial expressions," Image and Vision Computing, vol. 18, pp. 881-905, 2000.
[10] D. Arumugan and S. Purushothaman, "Emotion classification using facial expression," International Journal of Advanced Computer Science and Applications, vol. 2, no. 7, 2011.
[11] T. F. Cootes, G. J. Edwards, and C. J. Taylor, "Active appearance models," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 6, 2001.
[12] M. S. Bartlett, J. C. Hager, P. Ekman, and T. J. Sejnowski, "Measuring facial expressions by computer image analysis," Psychophysiology, vol. 36, pp. 253-263, 1999.
[13] Zheng You, "Feature-based facial expression recognition: sensitivity analysis and experiments with a multi-layer perceptron," International Journal of Pattern Recognition and Artificial Intelligence, vol. 13, no. 6, pp. 893-911, 1999.
[14] M. S. Bartlett, J. C. Hager, P. Ekman, and T. J. Sejnowski, "Measuring facial expressions by computer image analysis," Psychophysiology, vol. 36, pp. 253-263, 1999.
[15] Robert M. Haralick, K. Shanmugam, and Its'Hak Dinstein, "Textural features for image classification," IEEE Transactions on Systems, Man, and Cybernetics, pp. 610-621, 1973.
[16] M. Lyons, S. Akamatsu, M. Kamachi, and J. Gyoba, "Coding facial expressions with Gabor wavelets," Proceedings of the Third IEEE International Conference on Automatic Face and Gesture Recognition, Nara, Japan, Apr. 1998.
[17] Pullela SVVSR Kumar, V. Vijaya Kumar, and Rampay Venkatarao, "Age classification based on integrated approach," International Journal of Image, Graphics and Signal Processing, 2014.
[18] Gorti Satyanarayana Murty, J. Sasi Kiran, and V. Vijaya Kumar, "Facial expression recognition based on features derived from the distinct LBP and GLCM," International Journal of Image, Graphics and Signal Processing, 2014.
[19] Pullela S V V S R Kumar et al., "Classification of facial expressions based on transitions derived from third order neighborhood LBP," Global Journal of Computer Science and Technology, Graphics & Vision, vol. 14, no. 1, pp. 1-12, 2014.
BIOGRAPHIES
Bonagiri C S K Sunil Kumar is pursuing his master's degree at Aditya College of Engineering, Surampalem. His research interests include Data Mining and Image Processing.
V Bala Shankar is working as Senior
Assistant Professor in the Department of
Computer Science & Engineering, Aditya
College of Engineering, Surampalem. His
research interests include Image Processing,
Cloud Computing.
Dr. Pullela S V V S R Kumar is working as a Professor in CSE at Aditya College of Engineering, Surampalem. He received his Ph.D. degree from Acharya Nagarjuna University. He has more than 15 years of experience and has published 13 research papers in various international journals and conferences. He has acted as a reviewer and PC member for various international conferences. His research interests include Data Mining, Pattern Recognition and Image Processing.