This paper presents shape and level analysis using local standard deviation and the Hough transform to detect the shape and fill level of bottles. A set of 155 sample images is used to test shape and level detection. Local standard deviation is used as a contrast-gain technique to segment the bottle shape by enhancing image contrast, and the area ratio is calculated from the extent parameter. The maximum and minimum water levels are located using the Hough transform to identify the water level. A decision tree then classifies the bottle's shape and level as either good or defective. Experimental results show accuracies of 97% for shape and 93% for level, indicating that the proposed technique has potential for beverage product inspection systems.
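The local standard deviation map and the extent parameter (region area divided by bounding-box area) described above can be sketched in a few lines of NumPy. This is an illustrative sketch, not the paper's implementation; the function names and the 3 × 3 neighbourhood size are assumptions.

```python
import numpy as np

def local_std(img, k=3):
    """Local standard deviation over a k x k neighbourhood (reflect padding)."""
    p = k // 2
    padded = np.pad(img.astype(float), p, mode="reflect")
    windows = np.lib.stride_tricks.sliding_window_view(padded, (k, k))
    return windows.std(axis=(-1, -2))

def extent(mask):
    """Extent = area of the binary region / area of its bounding box."""
    ys, xs = np.nonzero(mask)
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    return mask.sum() / float(h * w)
```

A filled rectangle has extent 1.0; an irregular (defective) bottle silhouette yields a lower ratio, which is what the decision tree can threshold on.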
Statistical Models for Face Recognition System With Different Distance MeasuresCSCJournals
Face recognition is one of the challenging applications of image processing. A robust face recognition algorithm should possess the ability to recognize identity despite many variations in pose, lighting and appearance. Principal Component Analysis (PCA) is widely applied in image processing for dimensionality reduction, but it has limitations such as poor discriminatory power and a large computational load. This paper proposes a face recognition technique based on PCA with Gabor wavelets in the preprocessing stage and statistical modeling methods such as LDA and ICA for feature extraction. Classification is performed using several distance measures, namely Euclidean Distance (ED), Cosine Distance (CD) and Mahalanobis Distance (MHD), and the recognition rates are compared across them. The proposed method has been tested on the ORL face database of 400 frontal images from 40 subjects, acquired under variable illumination and facial expressions. The results show that PCA with Gabor filters and ICA-extracted features gives a recognition rate of about 98% when classified with the Mahalanobis distance, outperforming conventional PCA and PCA + LDA methods with other classifiers.
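The three distance measures compared in this abstract are straightforward to state in code. A minimal NumPy sketch (function names are illustrative; in practice the covariance matrix for the Mahalanobis distance would be estimated from the training features):

```python
import numpy as np

def euclidean(a, b):
    """Straight-line distance between two feature vectors."""
    return float(np.linalg.norm(a - b))

def cosine_dist(a, b):
    """1 - cosine similarity: small when the vectors point the same way."""
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def mahalanobis(a, b, cov):
    """Distance that accounts for feature correlations via the covariance."""
    d = a - b
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))
```

With an identity covariance, Mahalanobis reduces to Euclidean, which is a useful sanity check.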
AN ILLUMINATION INVARIANT FACE RECOGNITION USING 2D DISCRETE COSINE TRANSFORM...ijcsit
Automatic face recognition performance is affected by head rotation and tilt, lighting intensity and angle, facial expressions, aging, and partial occlusion of the face by hats, scarves, glasses, etc. In this paper, illumination normalization of face images is performed by combining the 2D Discrete Cosine Transform (DCT) and Contrast Limited Adaptive Histogram Equalization (CLAHE). The proposed method keeps a certain percentage of DCT coefficients and sets the rest to 0. The inverse DCT is then applied, followed by a logarithm transform and CLAHE. These steps create an illumination-invariant face image, termed the 'DCT CLAHE' image. The Fisherface subspace method extracts features from the 'DCT CLAHE' image, and the features are matched with cosine similarity. The proposed method is tested on the AR database, and performance measures such as recognition rate, verification rate at 1% FAR, and equal error rate are computed. The experimental results show a high recognition rate on the AR database.
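The coefficient-truncation step can be illustrated as follows. This is a hedged sketch, not the paper's code: it builds an orthonormal DCT-II matrix by hand, keeps only a low-frequency block of coefficients, inverts, and log-compresses; the CLAHE stage is omitted here (in practice one might use skimage.exposure.equalize_adapthist).

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows are cosine basis vectors)."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0, :] = 1.0 / np.sqrt(n)
    return c

def dct_normalize(img, keep=0.1):
    """Keep a fraction of low-frequency 2D DCT coefficients, zero the rest,
    invert, then log-compress (CLAHE step omitted in this sketch)."""
    h, w = img.shape
    Ch, Cw = dct_matrix(h), dct_matrix(w)
    coef = Ch @ img.astype(float) @ Cw.T        # forward 2D DCT
    kh, kw = max(1, int(h * keep)), max(1, int(w * keep))
    mask = np.zeros_like(coef)
    mask[:kh, :kw] = 1.0                        # low-frequency block survives
    rec = Ch.T @ (coef * mask) @ Cw             # inverse 2D DCT
    return np.log1p(np.abs(rec))
```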
There are three major complications of diabetes that can lead to blindness: retinopathy, cataracts, and glaucoma, among which diabetic retinopathy is considered the most serious, affecting the blood vessels in the retina. Diabetic retinopathy (DR) occurs when tiny vessels swell and leak fluid, or abnormal new blood vessels grow, hampering normal vision.
Diabetic retinopathy is a widespread cause of visual impairment. Abnormalities such as microaneurysms, hemorrhages and exudates are key symptoms that play an important role in the diagnosis of diabetic retinopathy, and early detection of these abnormalities may prevent the blurred vision or vision loss the disease causes. Exudates are lipid lesions visible in optical images. They are categorized into hard and soft exudates based on their appearance: hard exudates appear as intense yellow regions, while soft exudates have fuzzy boundaries. Automatic detection of exudates may aid ophthalmologists in the diagnosis of diabetic retinopathy and its early treatment. Fig. 1 shows the key symptoms of diabetic retinopathy.
Abstract:
A technique for exudate detection in fundus images is presented in this paper. Diabetic retinopathy causes an abnormality known as exudates, and loss of vision can be prevented by detecting them as early as possible. The work mainly aims at detecting exudates in the green channel of the RGB image by applying a few preprocessing steps, DWT, and feature extraction. The extracted features are fed to three different classifiers: KNN, SVM, and NN. If a classifier indicates that an exudate is present, the exudate ROI is extracted using Canny edge detection followed by morphological operations. The severity of the exudates is determined from the area of the detected exudate.
Keywords: Exudates, Fundus image, Diabetic retinopathy, DWT, KNN, SVM, NN, Canny edge detection, Morphological operations.
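A one-level Haar DWT of the kind the abstract relies on can be written directly in NumPy. This is an illustrative sketch under the assumption of even image dimensions; for a fundus image the input would be the green channel, e.g. `rgb[..., 1]`, and the feature names are not taken from the paper.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2D Haar DWT: returns (LL, LH, HL, HH) subbands."""
    x = img.astype(float)
    lo = (x[:, 0::2] + x[:, 1::2]) / 2.0   # row-wise averages
    hi = (x[:, 0::2] - x[:, 1::2]) / 2.0   # row-wise differences
    LL = (lo[0::2] + lo[1::2]) / 2.0
    LH = (lo[0::2] - lo[1::2]) / 2.0
    HL = (hi[0::2] + hi[1::2]) / 2.0
    HH = (hi[0::2] - hi[1::2]) / 2.0
    return LL, LH, HL, HH

def subband_features(img):
    """Mean and energy per subband -- a common texture feature vector."""
    return [f(b) for b in haar_dwt2(img) for f in (np.mean, lambda s: np.sum(s ** 2))]
```

The eight resulting numbers (mean and energy for each subband) form the kind of compact feature vector that can be fed to KNN, SVM or NN classifiers.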
Image Processing for Rapidly Eye Detection based on Robust Haar Sliding Window IJECEIAES
Object detection using the Haar cascade classifier is widely applied in devices and applications as a medium of human-computer interaction, for example in control tools that rely on detecting eye movements. Speed and precision of the detection clearly matter when such a system runs on a device: if the eye cannot be detected accurately, the controlling system performs poorly as well. The proposed method detects the eye region with a Haar classifier by modifying the direction of the sliding window. The window is initially placed in the middle of the facial image, on the assumption that the eyes lie in the central region, whereas the conventional Haar cascade window scans the whole image starting from the top-left corner. Experiments with the proposed method show that it speeds up computation and improves accuracy significantly, reaching 92.4%.
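The modified scan order, center-out instead of top-left, can be sketched as an ordering of window origins. Function name and parameters are illustrative, not from the paper:

```python
import numpy as np

def center_out_positions(h, w, win, step):
    """Window origins sorted by distance from the image centre, so that a
    centrally located eye region is reached before the corners."""
    ys = np.arange(0, h - win + 1, step)
    xs = np.arange(0, w - win + 1, step)
    cy, cx = (h - win) / 2.0, (w - win) / 2.0
    pos = [(int(y), int(x)) for y in ys for x in xs]
    pos.sort(key=lambda p: (p[0] - cy) ** 2 + (p[1] - cx) ** 2)
    return pos
```

A conventional cascade would visit the same positions in raster order; here the classifier can terminate early once the eye is found near the centre.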
Efficient Small Template Iris Recognition System Using Wavelet TransformCSCJournals
This document presents an efficient iris recognition system using wavelet transform that achieves a small template size of 364 bits. The system first segments the iris from eye images and applies normalization to account for variations in scale. It then proposes a method to remove the eyelid and eyelash regions by masking parts of the iris. Next, image enhancement increases contrast before feature extraction using wavelet transform. The document evaluates different combinations of wavelet coefficients and quantization levels to determine the most effective approach. Experimental results demonstrate false accept and reject rates of 0% and 1% respectively.
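Template creation and matching of the kind described here are commonly done by sign-quantizing wavelet coefficients into a bit code and comparing codes with a normalized Hamming distance, ignoring bits masked out as eyelid or eyelash. A minimal sketch (the paper's exact quantization scheme is not reproduced):

```python
import numpy as np

def quantize_to_bits(coeffs):
    """1-bit quantization: the sign of each wavelet coefficient."""
    return (np.asarray(coeffs) >= 0).astype(np.uint8)

def hamming_distance(code_a, code_b, mask=None):
    """Fraction of disagreeing bits, skipping masked (occluded) positions."""
    diff = code_a ^ code_b
    if mask is not None:
        diff = diff[mask]
    return float(diff.mean())
```

A genuine match yields a distance near 0; unrelated irises hover near 0.5, so a threshold between the two gives the accept/reject decision.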
Microarray Data Classification Using Support Vector MachineCSCJournals
DNA microarrays allow biologists to measure the expression of thousands of genes simultaneously on a small chip. These microarrays generate a huge amount of data, and new methods are needed to analyse them. In this paper, a new classification method based on the support vector machine is proposed and used to classify gene expression data recorded on DNA microarrays. The proposed method is found to be faster than a neural network, while its classification performance is no worse.
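For illustration, a linear SVM can be trained with the Pegasos sub-gradient method in plain NumPy. This is a generic sketch, not the paper's classifier, and the hyperparameters are arbitrary:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Linear SVM via the Pegasos sub-gradient method.
    Labels y must be in {-1, +1}; returns weights with the bias folded in."""
    rng = np.random.default_rng(seed)
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append a bias column
    w = np.zeros(Xb.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(Xb)):
            t += 1
            eta = 1.0 / (lam * t)               # decaying step size
            if y[i] * (w @ Xb[i]) < 1:          # inside the margin: hinge loss active
                w = (1 - eta * lam) * w + eta * y[i] * Xb[i]
            else:                               # outside: regularization only
                w = (1 - eta * lam) * w
    return w

def svm_predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.sign(Xb @ w)
```

For real microarray data the feature dimension is in the thousands, which is exactly where linear SVMs remain cheap compared with training a neural network.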
IRJET - Human Eye Pupil Detection Technique using Center of Gravity MethodIRJET Journal
This document presents a pupil detection technique using the center of gravity method. It first applies Gaussian filtering, double thresholding, and morphological closing to an eye image to isolate the pupil region. It then uses the center of gravity method to calculate the x and y coordinates of the pupil center by dividing the total pixel values in each dimension by the total number of black pixels. Experimental results on the CASIA iris database demonstrate the accuracy of the proposed computationally efficient pupil detection method. A hardware implementation of the technique is also presented, which could be used for real-time iris localization in biometric recognition applications.
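The center-of-gravity computation itself is only a few lines once the pupil pixels are isolated. A simplified sketch (a single threshold stands in for the paper's double thresholding and morphological closing):

```python
import numpy as np

def pupil_center(gray, thresh=40):
    """Centre of gravity of dark (pupil) pixels: the mean of their coordinates."""
    dark = gray < thresh
    ys, xs = np.nonzero(dark)
    if len(ys) == 0:
        return None                       # no pupil-like region found
    return float(ys.mean()), float(xs.mean())
```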
A New Approach of Iris Detection and RecognitionIJECEIAES
This paper proposes an iris detection and recognition model for e-security. The model consists of the following blocks: segmentation and normalization, feature encoding and extraction, and classification. In the first phase, histogram equalization and Canny edge detection are used for object detection, and the Hough transform is then used to locate the center of the pupil. In the second phase, Daugman's rubber sheet model and a log-Gabor filter are used for normalization and encoding; a Global Neighborhood Structure (GNS) map is used for feature extraction, and the extracted GNS features are fed to a Support Vector Machine (SVM) for training and testing. On the tested dataset, experimental results show 92% accuracy for the real part and 86% for the imaginary part for both eyes. In addition, the proposed model outperforms two conventional methods, exhibiting higher accuracy.
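The pupil-locating step via the circular Hough transform can be sketched as a voting accumulator. This illustrative version assumes a known radius and pre-extracted edge points; a full implementation would also search over a range of radii:

```python
import numpy as np

def hough_circle_center(edge_points, radius, shape):
    """Vote for circle centres at a fixed radius: each edge point votes for
    every candidate centre lying `radius` away from it."""
    acc = np.zeros(shape, dtype=np.int32)
    thetas = np.linspace(0, 2 * np.pi, 100, endpoint=False)
    for (y, x) in edge_points:
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)   # accumulate votes
    return np.unravel_index(acc.argmax(), acc.shape)
```

The accumulator peak is the most-voted centre; edge points from a genuine circle all agree on it.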
Automatic detection of optic disc and blood vessels from retinal images using...eSAT Journals
Abstract: Diabetic retinopathy is a common cause of blindness. This paper presents a mathematical morphology method to detect and eliminate the optic disc (OD) and the blood vessels. Detecting the optic disc and the blood vessels is a necessary step in the detection of diabetic retinopathy because both are normal features of the retinal image; moreover, the optic disc and the exudates are the brightest portions of the image. Detecting and removing them can help ophthalmologists detect diseases earlier and faster. The optic disc and the blood vessels are detected and eliminated using mathematical morphology operations such as closing, filling and morphological reconstruction, together with the Otsu algorithm. The objective of this paper is to detect the normal features of the image so that, using the result, ophthalmologists can detect diseases more easily. Keywords: Blood vessels, Diabetic retinopathy, mathematical morphology, Otsu algorithm, optic disc (OD)
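The Otsu algorithm mentioned here selects the threshold that maximizes the between-class variance of the gray-level histogram. A self-contained NumPy sketch:

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: the threshold maximizing between-class variance."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var = 0, -1.0
    w0, sum0 = 0, 0.0
    for t in range(256):
        w0 += hist[t]                       # pixels at or below t
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * hist[t]
        w1 = total - w0
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1   # class means
        var = w0 * w1 * (m0 - m1) ** 2              # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

On a retinal image, the bright optic disc and exudates fall above the Otsu threshold, which is why it is a natural segmentation step here.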
A SIMPLE APPROACH FOR RELATIVELY AUTOMATED HIPPOCAMPUS SEGMENTATION FROM SAGI...ijbbjournal
In this paper, we present a relatively automated method to segment the hippocampus in T1-weighted magnetic resonance images that can be acquired in the routine clinical setting. The paper describes a simple approach for segmenting the hippocampus automatically from the sagittal view of brain MRI. Large datasets of structural MR images are collected to quantitatively analyze the relationships between brain anatomy, disease progression, treatment regimens, and genetic influences on brain structure. This method segments the hippocampus without any human intervention for the few slices located at the mid position of the total volume. Experimental results using this method show good agreement with the manually segmented gold standard. These results may support clinical studies of memory and neurodegenerative disease.
International Journal of Computational Engineering Research(IJCER)ijceronline
The International Journal of Computational Engineering Research (IJCER) is an international online journal published monthly in English. The journal publishes original research that contributes significantly to scientific knowledge in engineering and technology.
This document provides a literature review of various techniques for automatic facial expression recognition. It discusses approaches such as principal component analysis (PCA), linear discriminant analysis (LDA), independent component analysis (ICA), 2D PCA, global eigen approaches using color images, subpattern extended 2D PCA, multilinear image analysis, color subspace LDA, 2D Gabor filter banks, and local Gabor binary patterns. It provides a table comparing the performance and disadvantages of these different methods. Recently, tensor perceptual color frameworks have been introduced that apply tensor concepts and perceptual color spaces to improve recognition performance under varying illumination conditions.
Image type water meter character recognition based on embedded dspcsandit
1) The document presents a method for automatic water meter character recognition using image processing techniques and a DSP processor. Images of water meters are collected via camera and processed using segmentation, binarization, and filtering.
2) Characters are recognized using a projection method by matching projection curves to templates, with additional methods used to recognize similar characters. Recognition accuracy of over 95% was achieved.
3) The system was tested on a hardware platform and was able to automatically read meters, replacing manual reading and improving efficiency while reducing costs.
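The projection method can be sketched as follows: each glyph is reduced to its row and column projection curves, and a test glyph is assigned the label of the template whose curves are nearest. This is an illustrative reconstruction, not the system's DSP code:

```python
import numpy as np

def projection_profile(glyph):
    """Concatenated row and column projections of a binary glyph image."""
    g = glyph.astype(float)
    return np.concatenate([g.sum(axis=1), g.sum(axis=0)])

def match_digit(glyph, templates):
    """Return the template label whose projection curve is closest (L2)."""
    p = projection_profile(glyph)
    return min(templates,
               key=lambda d: np.linalg.norm(p - projection_profile(templates[d])))
```

Projection matching is attractive on a DSP because it reduces each character to two short 1D curves, so the comparison is cheap; as the abstract notes, visually similar digits need additional discriminating rules.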
Haemorrhage Detection and Classification: A ReviewIJERA Editor
In the Indian population, the number of people with diabetes is increasing day by day. Diabetes is caused by an improper balance of insulin in the human body, and one of its most common complications is diabetic retinopathy (DR), which can lead to blindness. The effects of DR can be reduced if haemorrhages are detected and treated at an early stage. In recent years there has been increased interest in the field of medical image processing, and many researchers have developed advanced algorithms for haemorrhage detection using fundus images. In this paper, we review various methods for haemorrhage detection and classification.
COMPRESSION BASED FACE RECOGNITION USING TRANSFORM DOMAIN FEATURES FUSED AT M...sipij
Face images, a physiological biometric trait, are used to identify a person effectively. In this paper, we propose compression-based face recognition using transform-domain features fused at the matching level. The 2D images are converted into 1D vectors using the mean, to compress the number of pixels. The Fast Fourier Transform (FFT) and Discrete Wavelet Transform (DWT) are used to extract features; the low- and high-frequency coefficients of the DWT are concatenated to obtain the final DWT features. Performance parameters are computed by comparing database and test image features of FFT and DWT using Euclidean Distance (ED), and the FFT and DWT results are fused at the matching level to obtain better results. It is observed that the performance of the proposed method is better than that of existing methods.
Glaucoma Disease Diagnosis Using Feed Forward Neural Network ijcisjournal
Glaucoma is an eye disease that damages the optic nerve and causes loss of the field of vision, leading to complete blindness; it is caused by pressure buildup from the fluid of the eye, i.e. the intraocular pressure (IOP). This optic disorder, with its gradual loss of the visual field, leads to progressive and irreversible blindness, so it should be diagnosed and treated properly at an early stage. In this paper, the Daubechies (db3), Symlets (sym3) and reverse biorthogonal (rbio3.7) wavelet filters are employed to obtain average and energy texture features, which are used to classify glaucoma with high accuracy. A feed-forward neural network classifies the disease with an accuracy of 96.67%. In this work, the computational complexity is minimized by reducing the number of filters while retaining the same accuracy.
WCTFR : W RAPPING C URVELET T RANSFORM B ASED F ACE R ECOGNITIONcsandit
The recognition of a person based on biological features is efficient compared with traditional knowledge-based recognition systems. In this paper we propose Wrapping Curvelet Transform based Face Recognition (WCTFR). The Wrapping Curvelet Transform (WCT) is applied on face images of the database and on test images to derive coefficients. The obtained coefficient matrix is rearranged to form the WCT features of each image. The test image WCT features are compared with database images using Euclidean Distance (ED) to compute the Equal Error Rate (EER) and True Success Rate (TSR). The proposed algorithm with WCT performs better than the curvelet transform algorithms used in [1], [10] and [11].
IRJET- Comparative Study of PCA, KPCA, KFA and LDA Algorithms for Face Re...IRJET Journal
This document compares the performance of four face recognition algorithms - PCA, KPCA, KFA, and LDA - on three standard datasets: AT&T, Yale, and UMIST. It finds that KFA generally achieves the highest recognition rates, particularly for the AT&T and Yale datasets which involve changes in facial expressions and lighting. The Yale dataset, with its variations, yields the best results overall for KFA and LDA. The UMIST dataset, with its profile images, produces lower recognition rates across algorithms due to less similarity between training and test images.
Instant fracture detection using ir-raysijceronline
Most face recognition algorithms can achieve a high level of accuracy when the image is acquired under well-controlled conditions. The face should be still during acquisition; otherwise the resulting image will be blurred and hard to recognize. Since forcing people to stand still during the process is impractical, recognition will very likely have to be performed on a blurred image, and it is therefore important to understand the relation between image blur and recognition accuracy. The ORL database was used in this study: all images are in PGM format, 92 × 112 pixels, from forty different persons with ten images per person. The images were randomly divided into training and testing datasets with a 50-50 ratio, and singular value decomposition was used to extract the features. The images in the testing set were artificially blurred to simulate linear motion, and recognition was performed; the blurred images were also restored using various filtering methods. The recognition accuracy on blurred and on filtered faces was then compared. The numerical study suggests that, at best, the image improvement processes can raise the recognition accuracy by less than five percent.
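The two ingredients of this study, artificial linear-motion blur and SVD feature extraction, can be sketched as below. This is an illustrative reconstruction; the study's exact blur model and feature dimension are assumptions:

```python
import numpy as np

def motion_blur(img, length=5):
    """Horizontal linear-motion blur: average over `length` shifted copies."""
    out = np.zeros_like(img, dtype=float)
    for s in range(length):
        out += np.roll(img, s, axis=1)
    return out / length

def svd_features(img, k=10):
    """Top-k singular values as a compact face descriptor."""
    return np.linalg.svd(img.astype(float), compute_uv=False)[:k]
```

Singular values are fairly stable under mild blur, which is one reason SVD-based features degrade gracefully in such an experiment.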
Transform Domain Based Iris Recognition using EMD and FFTIOSRJVSP
The iris is a physiological trait used to identify individuals. In this paper, transform-domain iris recognition using EMD and FFT is proposed. The circular Hough transform is used in the preprocessing stage to extract the circular part of the eye, and the circular iris is converted into a rectangular rubber-sheet model in the region of interest (ROI). Empirical Mode Functions (EMFs) are obtained by applying Empirical Mode Decomposition (EMD) on the iris, and the FFT is also applied on the ROI to extract features. These features are added arithmetically to obtain the final features. The database features are compared with the test iris using Euclidean Distance (ED) to compute performance parameters. It is observed that the CRR and EER values of the proposed algorithm are better than those of existing algorithms.
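The rubber-sheet step, unwrapping the circular iris into a rectangle, can be sketched with nearest-neighbour sampling along radial lines. This is a simplified illustration of Daugman's model, not the paper's code, and the output size is arbitrary:

```python
import numpy as np

def rubber_sheet(gray, center, r_pupil, r_iris, out_h=16, out_w=64):
    """Unwrap the iris annulus into a rectangular block by sampling along
    radial lines between the pupil and iris boundaries (nearest neighbour)."""
    cy, cx = center
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        # radius interpolated between the pupil and iris boundaries
        r = r_pupil + (r_iris - r_pupil) * (i + 0.5) / out_h
        for j in range(out_w):
            theta = 2 * np.pi * j / out_w
            y = int(round(cy + r * np.sin(theta)))
            x = int(round(cx + r * np.cos(theta)))
            if 0 <= y < gray.shape[0] and 0 <= x < gray.shape[1]:
                out[i, j] = gray[y, x]
    return out
```

The fixed-size rectangular output makes the iris texture scale- and pupil-dilation-invariant, so EMD and FFT features can be computed on a common grid.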
Intrinsic biometrics have become a research trend in human identification because of the disadvantages of extrinsic biometric features: being located outside the human body, extrinsic features are easily imitated or lost, and easily changed by accidents. In this paper we therefore focus on a method that extracts a feature from an intrinsic biometric image, using the palm skin vein as the intrinsic biometric for human recognition. Image features can be extracted with methods such as the Local Binary Pattern (LBP), which is widely used in the literature. A modified LBP, called cross-LBP (DVHLBP), was proposed in our previous paper and performs better than the conventional LBP. In this paper we further optimize the DVHLBP method: DVHLBP is used as the feature extraction algorithm on the palm vein, and histogram intersection is used for matching. In the simulation, the ratio of model data to test data was 5:5 and several test scenarios were applied. The optimization examines the number of regions that yields the optimal threshold value; the optimal configuration uses 8 neighborhood pixels with a radius of 12 and 16 regions. Simulation results show a false acceptance rate (FAR) and false rejection rate (FRR) of 0.01 each, with a recognition rate of 99%. In addition, the optimized DVHLBP improves accuracy and equal error rate (EER).
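For reference, the conventional LBP that DVHLBP modifies, together with histogram-intersection matching, can be sketched in NumPy. Note this is the standard radius-1 LBP, not the paper's cross-LBP variant:

```python
import numpy as np

def lbp_image(gray):
    """Standard 8-neighbour LBP codes (radius 1) for the interior pixels:
    each neighbour >= centre contributes one bit of the 8-bit code."""
    g = gray.astype(int)
    c = g[1:-1, 1:-1]                      # interior (centre) pixels
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(shifts):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= (nb >= c).astype(int) << bit
    return code

def histogram_intersection(h1, h2):
    """Similarity used for matching: the sum of bin-wise minima."""
    return np.minimum(h1, h2).sum()
```

In a region-based scheme like the paper's, the image is split into regions, an LBP-code histogram is computed per region, and the concatenated histograms are compared with histogram intersection.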
19 9742 the application paper id 0016(edit ty)IAESIJEECS
This document summarizes a study on applying a modified Least Trimmed Squares with Genetic Algorithms (LTS-GAs) method to face recognition with occluded images. The method was tested on the AT&T and Yale face datasets with different image sizes and levels of added salt and pepper noise. Recognition rates generally decreased as noise levels increased. For clean images, larger AT&T images performed best, but smaller Yale images performed best for noisy images. The study concludes the modified LTS-GAs method shows potential for face recognition with occlusions, warranting further comparison against other algorithms.
A Spectral Domain Dominant Feature Extraction Algorithm for Palm-print Recogn...CSCJournals
In this paper, a spectral feature extraction algorithm is proposed for palm-print recognition that can efficiently capture the detailed spatial variations in a palm-print image. The entire image is segmented into several spatial modules, and feature extraction is carried out using the two-dimensional Fourier transform within those modules. A dominant spectral feature selection algorithm is proposed, which offers the advantage of very low feature dimension and yields very high within-class compactness and between-class separability of the extracted features. A principal component analysis is performed to further reduce the feature dimension. Extensive experiments on different palm-print databases show that the proposed method is superior to some recent methods in both recognition accuracy and computational complexity.
COMPARATIVE ANALYSIS OF MINUTIAE BASED FINGERPRINT MATCHING ALGORITHMSijcsit
Biometric matching involves finding the similarity between fingerprint images; the accuracy and speed of the matching algorithm determine its effectiveness. This research compares two types of matching algorithms: (a) matching using global orientation features and (b) matching using minutia triangulation. The comparison considers accuracy, time, and the number of similar features. The experiment is conducted on a dataset of 100 candidates with four (4) fingerprints per candidate, sampled from a mass registration conducted by a reputable organization in Kenya. The research reveals that matching based on algorithm (b) is faster, averaging 38.32 milliseconds, compared with 563.76 milliseconds for algorithm (a). On accuracy, algorithm (a) performs better, with an average accuracy of 0.142433 compared with 0.004202 for algorithm (b).
Brain tumor detection and segmentation using watershed segmentation and morph...eSAT Journals
This document describes a method for detecting and segmenting brain tumors from MRI images using watershed segmentation and morphological operations. The method involves preprocessing the MRI image, removing the skull via thresholding, segmenting the brain tissue using marker-controlled watershed segmentation, detecting the tumor region using erosion-based morphological operations, calculating the tumor area, and determining the tumor location. The method was implemented in MATLAB and experimental results demonstrated that it could accurately extract and detect tumor regions from brain MRI images.
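The erosion step in pipelines like this can be sketched without any imaging toolbox. Below is a minimal illustration (not the paper's MATLAB implementation) of binary erosion with a 3x3 cross structuring element using only NumPy; the synthetic "tumor" blob and all sizes are hypothetical:

```python
import numpy as np

def binary_erode(mask, iterations=1):
    """Erode a binary mask with a 3x3 cross structuring element,
    implemented with array shifts. Note np.roll wraps at the border,
    which is fine as long as the object stays away from the edges."""
    out = mask.astype(bool)
    for _ in range(iterations):
        shifted = [out,
                   np.roll(out,  1, axis=0), np.roll(out, -1, axis=0),
                   np.roll(out,  1, axis=1), np.roll(out, -1, axis=1)]
        out = np.logical_and.reduce(shifted)
    return out

# Toy "tumor" region: a bright blob thresholded from a synthetic slice.
img = np.zeros((9, 9))
img[2:7, 2:7] = 1.0
mask = img > 0.5
core = binary_erode(mask, iterations=1)
print(mask.sum(), core.sum())  # 25 -> 9: one erosion peels the outer ring
```

The eroded core can then be compared against the original mask to isolate the region boundary, which is the usual morphological route to outlining a detected region.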
IRJET- A Plant Identification and Recommendation SystemIRJET Journal
This document describes a plant identification and recommendation system that uses image recognition techniques. The system takes an image of a leaf as input, preprocesses it by resizing, converting to grayscale, and extracting features. It then uses a convolutional neural network with the Inception-v3 model to identify the plant by comparing features to those in its database. Based on the identified plant, it recommends other plants that could grow in that location. The system is implemented as both a mobile app and web application to be accessible anywhere.
IMAGE TYPE WATER METER CHARACTER RECOGNITION BASED ON EMBEDDED DSP cscpconf
In the paper, we combined DSP processor with image processing algorithm and studied the method of water meter character recognition. We collected water meter image through camera at a fixed angle, and the projection method is used to recognize those digital images. The experiment results show that the method can recognize the meter characters accurately and artificial meter reading is replaced by automatic digital recognition, which improves working efficiency.
This document describes an inspection system that uses machine vision to inspect bottles of liquid medicine on a production line. The system uses a camera and MATLAB software to analyze images of bottles and check that the liquid level and bottle cap meet specifications. It summarizes the experimental setup, image processing and analysis methods, and results of testing the system on 4 sample bottles. The system was able to accurately inspect the bottles and determine if they passed or failed inspection of the liquid level and bottle cap.
A New Approach of Iris Detection and RecognitionIJECEIAES
This paper proposes an iris recognition and detection model for measuring e-security. The proposed model consists of the following blocks: segmentation and normalization, feature encoding and feature extraction, and classification. In the first phase, histogram equalization and Canny edge detection are used for object detection, and then the Hough Transform is utilized for detecting the center of the pupil of an iris. In the second phase, Daugman's Rubber Sheet model and a Log-Gabor filter are used for normalization and encoding, and a GNS (Global Neighborhood Structure) map is used as the feature extraction method; finally, the extracted GNS features are fed to an SVM (Support Vector Machine) for training and testing. For our tested dataset, experimental results demonstrate 92% accuracy in the real portion and 86% accuracy in the imaginary portion for both eyes. In addition, our proposed model outperforms two other conventional methods, exhibiting higher accuracy.
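The pupil-center step above relies on the circular Hough transform. As a rough sketch of the idea (not the authors' implementation), the accumulator below lets synthetic edge points vote for circle centers at a known radius; the image size, radius, and test circle are invented for illustration:

```python
import numpy as np

def hough_circle_center(edge_points, radius, shape):
    """Circular Hough voting at a fixed radius: every edge point votes
    for all candidate centers exactly `radius` away from it; the cell
    with the most votes is the estimated circle center."""
    acc = np.zeros(shape, dtype=int)
    thetas = np.linspace(0, 2 * np.pi, 360, endpoint=False)
    for (y, x) in edge_points:
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return np.unravel_index(acc.argmax(), acc.shape)

# Synthetic pupil boundary: points on a circle of radius 10 centred at (32, 40).
t = np.linspace(0, 2 * np.pi, 80, endpoint=False)
pts = [(32 + 10 * np.sin(a), 40 + 10 * np.cos(a)) for a in t]
center = hough_circle_center(pts, radius=10, shape=(64, 64))
print(center)  # close to (32, 40)
```

A full detector would sweep a range of radii and keep the strongest accumulator peak; fixing the radius keeps the sketch short.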
Automatic detection of optic disc and blood vessels from retinal images using...eSAT Journals
Abstract Diabetic retinopathy is a common cause of blindness. This paper presents a mathematical morphology method to detect and eliminate the optic disc (OD) and the blood vessels. Detection of the optic disc and the blood vessels is a necessary step in the detection of diabetic retinopathy, because the blood vessels and the optic disc are the normal features of the retinal image; moreover, the optic disc and the exudates are the brightest portions of the image. Detecting the optic disc and the blood vessels can help ophthalmologists detect the disease earlier and faster. The optic disc and the blood vessels are detected and eliminated by using mathematical morphology methods such as closing, filling, morphological reconstruction, and the Otsu algorithm. The objective of this paper is to detect the normal features of the image; using the result, ophthalmologists can detect diseases more easily. Keywords: Blood vessels, Diabetic retinopathy, mathematical morphology, Otsu algorithm, optic disc (OD)
A SIMPLE APPROACH FOR RELATIVELY AUTOMATED HIPPOCAMPUS SEGMENTATION FROM SAGI...ijbbjournal
In this paper, we present a relatively automated method to segment the hippocampus in T1-weighted magnetic resonance images that can be acquired in the routine clinical setting. This paper describes a simple approach for segmenting the hippocampus automatically from the sagittal view of brain MRI. Large datasets of structural MR images are collected to quantitatively analyze the relationships between brain anatomy, disease progression, treatment regimens, and genetic influences upon brain structure. This method segments the hippocampus without any human intervention for the few slices present at the mid position of the total volume. Experimental results using this method show good agreement with the manually segmented gold standard. These results may support clinical studies of memory and neurodegenerative disease.
International Journal of Computational Engineering Research(IJCER)ijceronline
International Journal of Computational Engineering Research (IJCER) is an international, English-language, monthly online journal. The journal publishes original research work that contributes significantly to furthering scientific knowledge in engineering and technology.
This document provides a literature review of various techniques for automatic facial expression recognition. It discusses approaches such as principal component analysis (PCA), linear discriminant analysis (LDA), independent component analysis (ICA), 2D PCA, global eigen approaches using color images, subpattern extended 2D PCA, multilinear image analysis, color subspace LDA, 2D Gabor filter banks, and local Gabor binary patterns. It provides a table comparing the performance and disadvantages of these different methods. Recently, tensor perceptual color frameworks have been introduced that apply tensor concepts and perceptual color spaces to improve recognition performance under varying illumination conditions.
Image type water meter character recognition based on embedded dspcsandit
1) The document presents a method for automatic water meter character recognition using image processing techniques and a DSP processor. Images of water meters are collected via camera and processed using segmentation, binarization, and filtering.
2) Characters are recognized using a projection method by matching projection curves to templates, with additional methods used to recognize similar characters. Recognition accuracy of over 95% was achieved.
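The projection method described above can be illustrated in a few lines. This is a hypothetical toy version, not the embedded DSP code: binary glyphs are reduced to column-projection curves and matched to templates by L1 distance, and the 5x3 digit shapes are invented for the example:

```python
import numpy as np

def projection(glyph):
    """Vertical projection profile: ink count per column of a binary glyph."""
    return glyph.sum(axis=0)

def recognize(glyph, templates):
    """Match the glyph's projection curve against each template's curve
    and return the label with the smallest L1 distance."""
    p = projection(glyph)
    return min(templates, key=lambda k: np.abs(p - projection(templates[k])).sum())

# Toy 5x3 binary templates for '1' and '7' (hypothetical glyph shapes).
templates = {
    "1": np.array([[0,1,0],[0,1,0],[0,1,0],[0,1,0],[0,1,0]]),
    "7": np.array([[1,1,1],[0,0,1],[0,0,1],[0,1,0],[0,1,0]]),
}
sample = templates["7"].copy()
sample[1, 2] = 0          # simulate one pixel of noise
print(recognize(sample, templates))  # "7"
```

As the abstract notes, visually similar characters need extra disambiguation beyond the projection curve; horizontal projections or zoned features are common additions.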
3) The system was tested on a hardware platform and was able to automatically read meters, replacing manual reading and improving efficiency while reducing costs.
Haemorrhage Detection and Classification: A ReviewIJERA Editor
In the Indian population, the count of diabetic people is increasing day by day. An improper balance of insulin in the human body causes diabetes. The most common complication for a person with diabetes is diabetic retinopathy, which leads to blindness. The effects of DR can be reduced by detecting haemorrhages early and treating them at an early stage. In recent years there has been increased interest in the field of medical image processing, and many researchers have developed advanced algorithms for haemorrhage detection using fundus images. In this paper, we discuss various methods for haemorrhage detection and classification.
COMPRESSION BASED FACE RECOGNITION USING TRANSFORM DOMAIN FEATURES FUSED AT M...sipij
The physiological biometric trait of face images is used to identify a person effectively. In this paper, we propose compression-based face recognition using transform domain features fused at matching level. The 2-D images are converted into 1-D vectors using the mean to compress the number of pixels. The Fast Fourier Transform (FFT) and Discrete Wavelet Transform (DWT) are used to extract features. The low- and high-frequency coefficients of DWT are concatenated to obtain the final DWT features. The performance parameters are computed by comparing database and test image features of FFT and DWT using Euclidean Distance (ED). The performance parameters of FFT and DWT are fused at matching level to obtain better results. It is observed that the performance of the proposed method is better than that of existing methods.
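One half of such a pipeline — FFT features compared with Euclidean distance, then two feature channels fused at matching level — might be sketched as follows. This is an illustrative guess at the structure, not the paper's method; the block-means channel merely stands in for the DWT features, and all sizes are arbitrary:

```python
import numpy as np

def fft_features(img, k=8):
    """Magnitudes of the k x k low-frequency FFT coefficients."""
    return np.abs(np.fft.fft2(img))[:k, :k].ravel()

def euclidean(a, b):
    return float(np.linalg.norm(a - b))

def fused_score(test, ref, feat_a, feat_b, w=0.5):
    """Matching-level fusion: combine the two channels' distances
    into a single match score (lower = better match)."""
    return w * euclidean(feat_a(test), feat_a(ref)) \
         + (1 - w) * euclidean(feat_b(test), feat_b(ref))

rng = np.random.default_rng(0)
ref = rng.random((16, 16))
same = ref + 0.01 * rng.random((16, 16))   # near-duplicate of the enrolled image
other = rng.random((16, 16))               # an unrelated image
block_means = lambda im: im.reshape(4, 4, 4, 4).mean(axis=(1, 3))  # stand-in 2nd channel
s_same = fused_score(same, ref, fft_features, block_means)
s_other = fused_score(other, ref, fft_features, block_means)
print(s_same < s_other)  # genuine pair scores a smaller fused distance
```

Fusing at matching level (combining distances) rather than feature level (concatenating vectors) lets each channel keep its own scale and weighting.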
Glaucoma Disease Diagnosis Using Feed Forward Neural Network ijcisjournal
Glaucoma is an eye disease which damages the optic nerve and/or causes loss of the field of vision, leading to complete blindness, caused by the pressure buildup of the fluid of the eye, i.e. the intraocular pressure (IOP). This optic disorder, with a gradual loss of the field of vision, leads to progressive and irreversible blindness, so it should be diagnosed and treated properly at an early stage. In this paper, the Daubechies (db3), Symlets (sym3), and reverse biorthogonal (rbio3.7) wavelet filters are employed to obtain average and energy texture features, which are used to classify glaucoma disease with high accuracy. The feed-forward neural network classifies the glaucoma disease with an accuracy of 96.67%. In this work, the computational complexity is minimized by reducing the number of filters while retaining the same accuracy.
WCTFR : W RAPPING C URVELET T RANSFORM B ASED F ACE R ECOGNITIONcsandit
The recognition of a person based on biological features is efficient compared with traditional knowledge-based recognition systems. In this paper we propose Wrapping Curvelet Transform based Face Recognition (WCTFR). The Wrapping Curvelet Transform (WCT) is applied on face images of the database and on test images to derive coefficients. The obtained coefficient matrix is rearranged to form the WCT features of each image. The test image WCT features are compared with database images using Euclidean Distance (ED) to compute Equal Error Rate (EER) and True Success Rate (TSR). The proposed algorithm with WCT performs better than the Curvelet Transform algorithms used in [1], [10] and [11].
IRJET- Comparative Study of PCA, KPCA, KFA and LDA Algorithms for Face Re...IRJET Journal
This document compares the performance of four face recognition algorithms - PCA, KPCA, KFA, and LDA - on three standard datasets: AT&T, Yale, and UMIST. It finds that KFA generally achieves the highest recognition rates, particularly for the AT&T and Yale datasets which involve changes in facial expressions and lighting. The Yale dataset, with its variations, yields the best results overall for KFA and LDA. The UMIST dataset, with its profile images, produces lower recognition rates across algorithms due to less similarity between training and test images.
Instant fracture detection using ir-raysijceronline
Most face recognition algorithms are generally capable of achieving a high level of accuracy when the image is acquired under well-controlled conditions. The face should be still during the acquisition process; otherwise, the resulting image will be blurred and hard to recognize. Forcing persons to stand still during the process is impractical, so it is extremely likely that recognition will have to be performed on a blurred image. It is therefore important to understand the relation between image blur and recognition accuracy. The ORL database was used in the study. All images were in PGM format of 92 × 112 pixels from forty different persons, ten images per person. These images were randomly divided into training and testing datasets with a 50-50 ratio. Singular value decomposition was used to extract the features. The images in the testing datasets were artificially blurred to represent a linear motion, and recognition was performed. The blurred images were also filtered using various methods. The accuracy levels of recognition on the basis of the blurred faces and the filtered faces were compared. The numerical study performed suggests that, at best, the image improvement processes are capable of improving the recognition accuracy level by less than five percent.
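The SVD feature extraction used in studies like this can be illustrated with a small sketch. The blur model and test images below are synthetic, and the descriptor is simplified to the top singular values; this is not the study's code:

```python
import numpy as np

def svd_features(img, k=5):
    """Top-k singular values: a compact descriptor whose spectrum
    changes only mildly under motion blur."""
    return np.linalg.svd(img, compute_uv=False)[:k]

def motion_blur(img, length=5):
    """Crude horizontal linear-motion blur: average of shifted copies
    (np.roll wraps at the border, acceptable for this toy example)."""
    return np.mean([np.roll(img, s, axis=1) for s in range(length)], axis=0)

# Two synthetic "faces" with clearly different singular spectra.
face_a = np.outer(np.linspace(0, 1, 32), np.linspace(1, 0, 32))  # rank-1 ramp
face_b = np.eye(32)                                              # flat spectrum
d_self = np.linalg.norm(svd_features(face_a) - svd_features(motion_blur(face_a)))
d_other = np.linalg.norm(svd_features(face_a) - svd_features(face_b))
print(d_self < d_other)  # blur perturbs the spectrum far less than a different face
```

This partial invariance of the singular-value spectrum to blur is one intuition behind using SVD features when acquisition conditions cannot be controlled.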
Transform Domain Based Iris Recognition using EMD and FFTIOSRJVSP
Iris is one of the physiological traits used to identify individuals. In this paper, Transform Domain Based Iris Recognition using EMD and FFT is proposed. The Circular Hough Transform is used in the preprocessing stage to extract the circular part of the eye. The circular iris part is converted into a rectangular rubber-sheet model in the Region of Interest (ROI). Empirical Mode Functions (EMFs) are obtained by applying Empirical Mode Decomposition (EMD) on the iris. FFT is also applied on the ROI to extract features. These features are added arithmetically to obtain the final features. The features of the database are compared with the test iris using Euclidean Distance (ED) to compute performance parameters. It is observed that the values of CRR and EER are better in the case of the proposed algorithm compared to existing algorithms.
Intrinsic biometrics have nowadays become a trend in research on human identification due to some disadvantages of extrinsic biometric features. Extrinsic biometric features are easily imitated and lost, as they are located outside the human body and are easy to change due to accidents. Therefore, in this paper we focus on a method which can extract a feature from an image of an intrinsic biometric. Moreover, we use the palm skin vein as the intrinsic biometric feature for a human recognition application. The feature of an image can be extracted by using a specific method, such as Local Binary Pattern (LBP), which has been commonly used in many research works. A modified LBP, called cross-LBP (DVHLBP), was proposed in our previous paper. DVHLBP has better performance compared with the conventional LBP. In this paper, we further optimize the DVHLBP method: DVHLBP is used as the feature extraction algorithm on the palm vein, and histogram intersection is used for the matching process. In the simulation, the ratio of model data to testing data was 5:5. Testing was done by applying several scenarios. The optimization is done by examining the number of regions that yields the optimal threshold value. The optimal configuration is achieved when we use 8 neighborhood pixels with a radius of 12 and 16 regions. Simulation results show that the false acceptance rate (FAR) and false rejection rate (FRR) are 0.01 and 0.01, respectively, with a recognition rate of 99%. In addition, we show that the optimized DVHLBP has improvement in accuracy and equal error rate (EER).
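The conventional LBP baseline that DVHLBP builds on, together with histogram-intersection matching, can be sketched as follows (DVHLBP itself is the authors' variant and is not reproduced here; the palm images are random stand-ins):

```python
import numpy as np

def lbp_image(img):
    """Conventional 8-neighbour LBP codes for the interior pixels:
    each neighbour >= centre contributes one bit of the 8-bit code."""
    c = img[1:-1, 1:-1]
    shifts = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1+dy:img.shape[0]-1+dy, 1+dx:img.shape[1]-1+dx]
        code |= ((nb >= c).astype(np.uint8) << bit)
    return code

def lbp_hist(img, bins=256):
    """Normalised histogram of LBP codes."""
    h = np.bincount(lbp_image(img).ravel(), minlength=bins).astype(float)
    return h / h.sum()

def hist_intersection(h1, h2):
    """Similarity in [0, 1]: 1 means identical normalised histograms."""
    return float(np.minimum(h1, h2).sum())

rng = np.random.default_rng(2)
palm = rng.random((32, 32))
same = np.clip(palm + 0.001 * rng.random((32, 32)), 0, 1)   # same palm, tiny noise
other = rng.random((32, 32))                                # a different palm
print(hist_intersection(lbp_hist(palm), lbp_hist(same)) >
      hist_intersection(lbp_hist(palm), lbp_hist(other)))
```

In region-based variants like the one optimized in the paper, the image is split into regions, a histogram is built per region, and the intersections are combined, which is what makes the region count a tunable parameter.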
19 9742 the application paper id 0016(edit ty)IAESIJEECS
This document summarizes a study on applying a modified Least Trimmed Squares with Genetic Algorithms (LTS-GAs) method to face recognition with occluded images. The method was tested on the AT&T and Yale face datasets with different image sizes and levels of added salt and pepper noise. Recognition rates generally decreased as noise levels increased. For clean images, larger AT&T images performed best, but smaller Yale images performed best for noisy images. The study concludes the modified LTS-GAs method shows potential for face recognition with occlusions, warranting further comparison against other algorithms.
This paper presents shape analysis using Local Standard Deviation (LSD) technique to detect shape defect of the bottle for product quality inspection. The proposed analysis framework includes segmentation, feature extraction, and classification. The shape of the bottle was segmented using LSD technique in order to obtain higher enhancement at the low contrast area and low enhancement at the high contrast area. The contrast gain that was applied in Adaptive Contrast Enhancement (ACE) algorithm, was presented inversely proportional to LSD in order to detect and eliminate background noise at the bottle edge. After the segmentation process, the parameters of the bottle shape such as height, width, area, and extent were extracted and applied in classification stage. The rule-based classifier was used to classify the shape of the bottle either good or defect. The offline experimental results exhibit superior segmentation on performance with 100% accuracy for 100 sample images. This shows that the LSD could be an effective technique to monitor the product quality.
Supervised Blood Vessel Segmentation in Retinal Images Using Gray level and M...IJTET Journal
The segmentation of retinal blood vessels is an essential step in the diagnosis of diabetic retinopathy. In this paper, we present a new method for automatically segmenting blood vessels in retinal images. Two techniques for segmenting retinal blood vessels, based on different image processing techniques, are described, and their strengths and weaknesses are compared. This method uses a neural network (NN) scheme for pixel classification, with gray-level and moment-invariants-based features for pixel representation. The performance of each algorithm was tested on the STARE and DRIVE datasets, which are widely used for this purpose since they contain retinal images and their vascular structures. Performance on both sets of test images is better than that of other existing methods, and the method proves especially accurate for vessel detection in STARE images. Its effectiveness and robustness under different image conditions, together with its simplicity and fast implementation, make it suitable for early detection of diabetic retinopathy (DR).
a. The authors developed a 2D spatial filter to isolate nuclei from images of cardiac cells for the purpose of accurately quantifying cellular density. They tested different filtering methods and found that Gaussian edge detection produced counts most consistent with manual user counts.
b. Statistical analysis of the results showed that obtaining more trained user counts could help optimize the algorithms. While some image counts were within 5% error of user counts, not all images produced statistically significant results, suggesting the need for further algorithm improvements.
c. The authors' nuclear detection method shows promise but still requires optimization, such as improving user training protocols, binary conversion methods, and Gaussian filter parameters to produce more consistent counts across images.
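The counting step that follows smoothing and binary conversion — labeling connected components in the thresholded image — might look like the minimal BFS flood fill below; the "nuclei" mask is synthetic and the method is a generic stand-in for the authors' pipeline:

```python
import numpy as np
from collections import deque

def count_blobs(mask):
    """Count 4-connected foreground components with a BFS flood fill."""
    seen = np.zeros_like(mask, dtype=bool)
    count = 0
    for y, x in zip(*np.nonzero(mask)):
        if seen[y, x]:
            continue
        count += 1                      # new, unvisited component
        q = deque([(y, x)])
        seen[y, x] = True
        while q:
            cy, cx = q.popleft()
            for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    q.append((ny, nx))
    return count

# Three separated "nuclei" on a synthetic thresholded image.
img = np.zeros((10, 10), dtype=bool)
img[1:3, 1:3] = img[1:3, 6:8] = img[6:9, 3:6] = True
print(count_blobs(img))  # 3
```

Real pipelines typically add size filtering after labeling so that single-pixel noise is not counted as a nucleus, which relates directly to the binary-conversion improvements the authors call for.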
MR Image Compression Based on Selection of Mother Wavelet and Lifting Based W...ijma
Magnetic Resonance (MR) imaging is a medical imaging technique that requires enormous data to be stored and transmitted for high-quality diagnostic applications. Various algorithms have been proposed to improve the performance of the compression scheme. In this paper we extend the commonly used algorithms for image compression and compare their performance. For the image compression technique, we have linked different wavelet techniques, using traditional mother wavelets and lifting-based Cohen-Daubechies-Feauveau wavelets with low-pass filters of length 9 and 7 (CDF 9/7), with the Set Partitioning in Hierarchical Trees (SPIHT) algorithm. A novel image quality index highlighting the shape of the histogram of the target image is introduced to assess image compression quality. The index is to be used in place of the existing traditional Universal Image Quality Index (UIQI) "in one go"; it offers extra information about the distortion between an original image and a compressed image in comparison with UIQI. The proposed index is designed by modelling image compression as a combination of four major factors: loss of correlation, luminance distortion, contrast distortion, and shape distortion. This index is easy to calculate and applicable in various image processing applications. One of our contributions is to demonstrate that the choice of mother wavelet is very important for achieving superior wavelet compression performance based on the proposed image quality indexes. Experimental results show that the proposed image quality index plays a significant role in the quality evaluation of image compression on the open-source "BrainWeb: Simulated Brain Database (SBD)".
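For reference, the classical three-factor UIQI that the proposed index extends (the shape-distortion factor is the paper's addition and is not shown here) can be computed directly from its definition:

```python
import numpy as np

def uiqi(x, y):
    """Universal Image Quality Index (Wang & Bovik), computed globally:
    Q = 4*cov(x,y)*mean(x)*mean(y) / ((var(x)+var(y)) * (mean(x)^2+mean(y)^2)),
    the product of correlation loss, luminance distortion and contrast distortion."""
    x, y = x.astype(float).ravel(), y.astype(float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return 4 * cxy * mx * my / ((vx + vy) * (mx**2 + my**2))

rng = np.random.default_rng(3)
img = rng.random((16, 16))
print(round(uiqi(img, img), 6))    # 1.0 for identical images
print(uiqi(img, img + 0.1) < 1.0)  # a pure luminance shift lowers the index
```

In practice UIQI is usually evaluated in a sliding window and averaged; the global version above keeps the formula visible.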
Iaetsd multi-view and multi band face recognitionIaetsd Iaetsd
The document discusses multi-view and multi-band face recognition using wavelet transforms. It begins with an abstract describing the challenges of face recognition due to variations in lighting, expression, and aging. It then introduces a multi-band face recognition algorithm using wavelet transforms to extract features from multiple video bands. It covers preprocessing images, feature extraction using PCA and wavelet transforms, and feature matching, and concludes from the experimental results that wavelet transforms take less response time and are well suited for feature extraction and face matching with high accuracy.
Identify Defects in Gears Using Digital Image ProcessingIJERD Editor
International Journal of Engineering Research and Development is an international premier peer reviewed open access engineering and technology journal promoting the discovery, innovation, advancement and dissemination of basic and transitional knowledge in engineering, technology and related disciplines.
Retinal blood vessel extraction and optical disc removaleSAT Journals
Abstract Retinal image processing is an important process by which we can detect the blood vessels, and this helps us in detecting diabetic retinopathy at an early stage. This is very helpful because the symptoms go unnoticed until eyesight blurs or blindness sets in, and the condition mainly occurs in people suffering from high diabetes. By extracting the blood vessels using the algorithm, we can see which blood vessels are actually damaged, continuously monitor the situation, and protect our eyesight. Keywords: field of view, retinopathy, thresholding, morphology, Otsu's algorithm, MATLAB.
A Comparative Study of Different Models on Image Colorization using Deep Lear...IRJET Journal
The document discusses and compares three different deep learning models for the task of image colorization: 1) A baseline CNN model, 2) The Inception-resnet-v2 model, and 3) A model based on the Caffe framework. It provides background on image colorization techniques and related works, describes the three models and their architectures, and aims to evaluate the performance of the models and determine which approach works best for automatic image colorization.
IRJET-Underwater Image Enhancement by Wavelet Decomposition using FPGAIRJET Journal
This document describes a method for enhancing underwater images using wavelet decomposition and fusion on an FPGA (field programmable gate array). Underwater images often have low contrast and visibility due to light scattering in water. The proposed method performs color correction and contrast enhancement on an input underwater image. It then decomposes the color-corrected and contrast-enhanced images into low and high frequency components using wavelet transforms. Image fusion is performed on the wavelet coefficients to combine the detailed information from both images. The fused image is reconstructed via inverse wavelet transform. Experimental results show the proposed fusion-based approach improves underwater image visibility. Implementing the algorithm on an FPGA provides benefits over general processors for computationally intensive image processing.
The document reviews approaches to image interpolation and super-resolution. It discusses several interpolation methods including polynomial-based, edge-directed, and soft-decision approaches. Edge-directed methods aim to preserve edge sharpness during upsampling by estimating edge orientations or fusing multiple orientations. New edge-directed interpolation uses a Wiener filter to estimate missing pixel values. Soft-decision adaptive interpolation and robust soft-decision interpolation further improve results by modeling image signals within local windows and incorporating outlier weighting. The document provides formulations and comparisons of these methods.
BIG DATA-DRIVEN FAST REDUCING THE VISUAL BLOCK ARTIFACTS OF DCT COMPRESSED IM...IJDKP
1) The document proposes a new simple method to reduce visual block artifacts in images compressed using DCT (used in JPEG) for urban surveillance systems.
2) The method smooths only the connection edges between adjacent blocks while keeping other image areas unchanged.
3) Simulation results show the proposed method achieves better image quality as measured by PSNR compared to median and wiener filters, while using significantly less computational resources.
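The boundary-only smoothing idea can be sketched as below. This is a guess at the general approach, not the paper's exact filter: only the pixel rows and columns meeting at each 8x8 block boundary are averaged toward their common edge value, and block interiors are untouched:

```python
import numpy as np

def deblock(img, block=8):
    """Smooth only the two pixel columns/rows that meet at each block
    boundary, leaving block interiors unchanged."""
    out = img.astype(float).copy()
    for b in range(block, img.shape[1], block):       # vertical boundaries
        avg = (out[:, b-1] + out[:, b]) / 2
        out[:, b-1] = (out[:, b-1] + avg) / 2
        out[:, b] = (out[:, b] + avg) / 2
    for b in range(block, img.shape[0], block):       # horizontal boundaries
        avg = (out[b-1, :] + out[b, :]) / 2
        out[b-1, :] = (out[b-1, :] + avg) / 2
        out[b, :] = (out[b, :] + avg) / 2
    return out

# Two flat 8-pixel-wide blocks with a hard step between them.
img = np.zeros((8, 16))
img[:, 8:] = 100.0
out = deblock(img)
step_before = abs(img[0, 8] - img[0, 7])   # 100.0
step_after = abs(out[0, 8] - out[0, 7])    # 50.0: the boundary is softened
print(step_before, step_after)
```

Because only the boundary pixels are rewritten, the cost is proportional to the number of block edges rather than the whole image, matching the low computational footprint the document claims.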
A Flexible Scheme for Transmission Line Fault Identification Using Image Proc...IJEEE
This paper describes a methodology that aims to find and diagnose faults in transmission lines using image processing techniques. Image processing techniques have been widely used to solve problems in all areas. The methodology also uses the digital image processing wavelet shrinkage function for fault identification and diagnosis. In other words, the purpose is to extract the faulty image from the source, with the separation and the coordinates of the transmission lines. The segmentation objective is the division of the image into its set of parts and objects, distinguishing them within the scene, which is the key to an improved result in identification of faults. The experimental results indicate that the proposed method provides promising results and is advantageous both in terms of PSNR and in visual quality.
Fuzzy Logic Based Decision System For PCB Defects CorrectionIJERA Editor
The size, ease of automation, and increasing durability and reliability of any circuit being developed make defects and errors during development quite likely. Detecting them is the primary part, but deciding whether correcting them in the test specimen will be effective is much more crucial. Sometimes defect correction in the PCB is much more efficient than reprinting, in terms of time, resources, and cost with respect to production. Making this decision manually is tiresome and less efficient. So in this paper a novel decision-making system based on fuzzy logic is proposed, which decides whether the test specimen should undergo correction or reprinting. The fuzzy-based system takes decisions in the way humans do. Results shown for the proposed system are quite promising in decision making.
Fuzzy logic applications for data acquisition systems of practical measurement IJECEIAES
In laboratory work, errors in measurement, misreadings of the measuring devices, similarity of experimental data, and lack of understanding of practicum materials are often found. These lead to inaccurate and invalid data. As an alternative solution, fuzzy logic is applied to a data acquisition system using a web server. This research focuses on the design of data acquisition systems with the target of reducing the error rate in measuring experimental data in the laboratory. Data measurement on the laboratory practice module is done by taking the analog data resulting from the measurement. The data are then converted into digital data via an Arduino and stored on the server. To get valid data, the server processes the data using the fuzzy logic method. The valid data are integrated into a web server so that they can be accessed as needed. The results showed that the fuzzy-logic-based data acquisition system is able to provide recommendations on measurement results in the lab work based on the degree of membership and truth value. Fuzzy logic selects the measured data with a maximum error percentage of 5% and selects the measurement result which has the minimum error rate.
SENSOR FAULT IDENTIFICATION IN COMPLEX SYSTEMS | J4RV3I12007Journal For Research
In the process control industries, one of the main tasks is to identify and isolate faulty sensors. Sensor troubleshooting is essential to keep production flowing without interruption. In this project, level measurement in a conical tank is considered, and a capacitive sensor is used to measure the level of the water inside the tank. First, the standard sensor is placed inside, the level is measured and monitored, and a history of data is obtained. Using LabVIEW, programming is done for the standard sensor to measure the level. Then the measuring sensor is placed and the level measurement is repeated. Using any of the soft computing techniques, the fault can be identified and indicated on the display or by an alarm.
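A minimal residual check against the standard sensor's logged history might look like this; the threshold and the data are illustrative, since the abstract leaves the soft computing technique open:

```python
def detect_sensor_fault(standard_history, measured, threshold=0.5):
    """Compare the sensor under test against the logged standard-sensor
    levels; a residual above the threshold flags a fault."""
    residuals = [abs(s - m) for s, m in zip(standard_history, measured)]
    return max(residuals) > threshold
```

In practice the threshold would be set from the noise band observed while recording the standard sensor's history.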
Disparity Estimation by a Real Time Approximation AlgorithmCSCJournals
This document summarizes an approximation algorithm for real-time disparity estimation of stereo images. The algorithm shrinks the left and right images 3 times to reduce computational time and search area. Disparity is estimated from the shrunk images and then extrapolated to reconstruct the original disparity image. Experimental results on standard stereo images show the algorithm reduces computational time by 76.34% compared to traditional window-based methods, with acceptable accuracy. Some accuracy is lost due to pixel quantization during shrinking and extrapolation, but the fast estimation of dense disparity makes the algorithm useful for applications requiring real-time performance.
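As a toy illustration of the coarse-to-fine idea: 1-D rows instead of images, a single shrink step instead of the paper's three, and SAD block matching, all of which are my assumptions rather than the paper's exact algorithm:

```python
def shrink(row):
    """Halve resolution by averaging adjacent pixel pairs."""
    return [(row[i] + row[i + 1]) / 2 for i in range(0, len(row) - 1, 2)]

def sad(left, right, x, d, win=1):
    """Sum of absolute differences between windows around left[x] and right[x - d]."""
    return sum(abs(left[x + k] - right[x + k - d]) for k in range(-win, win + 1))

def match(left, right, x, candidates, win=1):
    """Pick the disparity candidate with minimal SAD cost."""
    valid = [d for d in candidates if 0 <= d <= x - win and x + win < len(left)]
    return min(valid, key=lambda d: sad(left, right, x, d, win))

def coarse_to_fine(left, right, x, max_d):
    """Estimate disparity on shrunk rows, then refine in a narrow band
    around twice the coarse estimate at full resolution."""
    left_c, right_c = shrink(left), shrink(right)
    d_c = match(left_c, right_c, x // 2, range(max_d // 2 + 1))
    return match(left, right, x, [2 * d_c - 1, 2 * d_c, 2 * d_c + 1])
```

The speed-up comes from the last line: instead of scanning all `max_d` disparities at full resolution, only three candidates around the extrapolated coarse estimate are evaluated.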
An efficient image segmentation approach through enhanced watershed algorithmAlexander Decker
This document proposes an efficient image segmentation approach combining an enhanced watershed algorithm and color histogram analysis. The watershed algorithm is applied to preprocessed images after merging the results with an enhanced edge detection. Over-segmentation issues are addressed through a post-processing step applying color histogram analysis to each segmented region, improving overall performance. The document provides background on image segmentation techniques, reviews related work applying watershed algorithms, and discusses challenges like over-segmentation that watershed approaches can face.
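The over-segmentation fix can be sketched as greedy region merging by histogram similarity. Grey-level histograms and the intersection measure are assumptions here; the paper works with color histograms:

```python
def histogram(pixels, bins=8):
    """Normalised grey-level histogram of a region's pixels (0-255)."""
    h = [0] * bins
    for p in pixels:
        h[min(p * bins // 256, bins - 1)] += 1
    return [c / len(pixels) for c in h]

def intersection(h1, h2):
    """Histogram intersection similarity in [0, 1]."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def merge_similar(regions, threshold=0.8):
    """Greedy post-processing: fold each region into the first earlier
    region whose histogram intersection exceeds the threshold."""
    merged = []
    for pixels in regions:
        h = histogram(pixels)
        for m in merged:
            if intersection(histogram(m), h) >= threshold:
                m.extend(pixels)
                break
        else:
            merged.append(list(pixels))
    return merged
```

Two adjacent watershed fragments with near-identical histograms collapse into one region, while a genuinely different region stays separate.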
A Novel Dehazing Method for Color Accuracy and Contrast Enhancement Method fo...IRJET Journal
The document proposes a novel dehazing method for color accuracy and a contrast enhancement method for low light intensity images. The dehazing method involves three steps: 1) Region division based on white balance segmentation, 2) Estimation of local atmospheric light in each region, and 3) An iterative dehazing algorithm to remove haze from each region. The contrast enhancement method inverts the input image, applies the dehazing algorithm, and then inverts the dehazed image to produce an enhanced output. Experimental results show the proposed methods can effectively enhance images taken with mobile devices or cameras without color distortion.
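The invert-dehaze-invert trick for low-light enhancement can be sketched with a crude scalar haze model. The fixed transmission `t` and the max-value atmospheric light estimate are simplifications; the paper's method is region-based and iterative:

```python
def dehaze(pixel, atmos, t=0.6):
    """Invert the haze model I = J*t + A*(1 - t) for one intensity."""
    return min(255.0, max(0.0, (pixel - atmos * (1 - t)) / t))

def enhance_low_light(img, t=0.6):
    """Invert, dehaze, invert: the contrast-enhancement trick for dark images."""
    inverted = [255 - p for p in img]
    atmos = max(inverted)  # crude atmospheric light estimate
    return [255 - dehaze(p, atmos, t) for p in inverted]
```

The inverted dark image looks hazy; removing that "haze" stretches the intensity range, so after the second inversion the bright pixels of the dark scene are pushed further apart.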
Similar to Shape and Level Bottles Detection Using Local Standard Deviation and Hough Transform
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...IJECEIAES
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to precisely delineate tumor boundaries from magnetic resonance imaging (MRI) scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The model is rigorously trained and evaluated, exhibiting remarkable performance metrics, including an impressive global accuracy of 99.286%, a high class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of our proposed model. These findings underscore the model's competence in precise brain tumor localization, underscoring its potential to revolutionize medical image analysis and enhance healthcare outcomes. This research paves the way for future exploration and optimization of advanced CNN models in medical imaging, emphasizing addressing false positives and resource efficiency.
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
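A toy rule-based stand-in illustrates the event categories collected above; the paper itself uses a trained neural network on richer sensor features, so the threshold and labels here are purely illustrative:

```python
def classify_event(accel, hard_thresh=4.0):
    """Label a window of longitudinal acceleration samples (m/s^2).
    A large positive spike suggests a sudden start, a large negative
    spike a sudden stop; anything else is treated as normal driving."""
    peak = max(accel, key=abs)
    if peak > hard_thresh:
        return "sudden start"
    if peak < -hard_thresh:
        return "sudden stop"
    return "normal"
```

On an embedded target like the Arduino Nano 33 BLE Sense mentioned above, such a rule costs almost nothing, which is why the neural model must justify its extra RAM and flash with better accuracy.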
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
Neural network optimizer of proportional-integral-differential controller par...IJECEIAES
Wide application of proportional-integral-differential (PID)-regulator in industry requires constant improvement of methods of its parameters adjustment. The paper deals with the issues of optimization of PID-regulator parameters with the use of neural network technology methods. A methodology for choosing the architecture (structure) of neural network optimizer is proposed, which consists in determining the number of layers, the number of neurons in each layer, as well as the form and type of activation function. Algorithms of neural network training based on the application of the method of minimizing the mismatch between the regulated value and the target value are developed. The method of back propagation of gradients is proposed to select the optimal training rate of neurons of the neural network. The neural network optimizer, which is a superstructure of the linear PID controller, allows increasing the regulation accuracy from 0.23 to 0.09, thus reducing the power consumption from 65% to 53%. The results of the conducted experiments allow us to conclude that the created neural superstructure may well become a prototype of an automatic voltage regulator (AVR)-type industrial controller for tuning the parameters of the PID controller.
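A baseline discrete PID of the kind the neural superstructure would tune might look like this; the first-order plant, the gains, and the omission of the neural optimizer itself are all illustrative:

```python
class PID:
    """Textbook discrete PID; the paper's neural optimizer would adjust
    kp, ki, kd online, which is omitted in this sketch."""
    def __init__(self, kp, ki, kd, dt=0.1):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measurement):
        err = setpoint - measurement
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def run_closed_loop(controller, setpoint=1.0, steps=300):
    """Drive a first-order plant y' = u - y with the controller."""
    y = 0.0
    for _ in range(steps):
        u = controller.step(setpoint, y)
        y += controller.dt * (u - y)
    return y
```

The neural superstructure described in the abstract would sit on top of this loop, nudging the three gains to minimize the mismatch between the regulated value and the target.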
An improved modulation technique suitable for a three level flying capacitor ...IJECEIAES
This research paper introduces an innovative modulation technique for controlling a 3-level flying capacitor multilevel inverter (FCMLI), aiming to streamline the modulation process in contrast to conventional methods. The proposed simplified modulation technique paves the way for more straightforward and efficient control of multilevel inverters, enabling their widespread adoption and integration into modern power electronic systems. Through the amalgamation of sinusoidal pulse width modulation (SPWM) with a high-frequency square wave pulse, this controlling technique attains energy equilibrium across the coupling capacitor. The modulation scheme incorporates a simplified switching pattern and a decreased count of voltage references, thereby simplifying the control algorithm.
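The SPWM comparison at the heart of such schemes can be sketched as follows; the frequencies, modulation index, and unipolar 0/1 output are illustrative choices, not the paper's exact scheme:

```python
import math

def spwm(t, f_ref=50.0, f_carrier=2000.0, m=0.8):
    """Sinusoidal PWM: switch high when the modulating sine exceeds
    the triangular carrier (both in [-1, 1])."""
    ref = m * math.sin(2 * math.pi * f_ref * t)
    phase = (t * f_carrier) % 1.0
    carrier = 4 * abs(phase - 0.5) - 1  # triangle wave
    return 1 if ref > carrier else 0
```

Averaged over one reference period, the duty cycle tracks the sine, so the switched output reconstructs the 50 Hz fundamental after filtering.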
A review on features and methods of potential fishing zoneIJECEIAES
This review focuses on the importance of identifying potential fishing zones in seawater for sustainable fishing practices. It explores features such as sea surface temperature (SST) and sea surface height (SSH), along with the classification methods used to classify the data. This study underscores the importance of examining potential fishing zones using advanced analytical techniques. It thoroughly explores the methodologies employed by researchers, covering both past and current approaches. The examination centers on data characteristics and the application of classification algorithms for classifying potential fishing zones. Furthermore, the prediction of potential fishing zones relies significantly on the effectiveness of classification algorithms. Previous research has assessed the performance of models such as support vector machines (SVM), naive Bayes, and artificial neural networks (ANN); in one reported result, SVM was more accurate (97.6%) than naive Bayes (94.2%) in classifying test data for fisheries classification. Considering recent works in this area, several recommendations for future work are presented to further improve the performance of potential fishing zone models, which is important to the fisheries community.
Electrical signal interference minimization using appropriate core material f...IJECEIAES
As demand for smaller, quicker, and more powerful devices rises, Moore's law is strictly followed. The industry has worked hard to make little devices that boost productivity. The goal is to optimize device density. Scientists are reducing connection delays to improve circuit performance. This helped them understand three-dimensional integrated circuit (3D IC) concepts, which stack active devices and create vertical connections to diminish latency and shorten interconnects. Electrical interference is a big worry with 3D integrated circuits. Researchers have developed and tested through-silicon vias (TSVs) and substrates to decrease electrical wave interference. This study illustrates a novel noise coupling reduction method using several electrical interference models. A 22% drop in electrical interference from wave-carrying to victim TSVs introduces this new paradigm and improves system performance even at higher THz frequencies.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...IJECEIAES
Climate change's impact on the planet forced the United Nations and governments to promote green energies and electric transportation. The deployments of photovoltaic (PV) and electric vehicle (EV) systems gained stronger momentum due to their numerous advantages over fossil fuel types. The advantages go beyond sustainability to reach financial support and stability. The work in this paper introduces a hybrid system between PV and EV to support industrial and commercial plants. This paper covers the theoretical framework of the proposed hybrid system, including the equations required to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram, which sets the priorities and requirements of the system, is presented. The proposed approach allows setups to advance their power stability, especially during power outages. The presented information supports researchers and plant owners in completing the necessary analysis while promoting the deployment of clean energy. The result of a case study that represents a dairy milk farmer supports the theoretical work and highlights its benefits to existing plants. The short return on investment of the proposed approach supports the paper's novel approach for a sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line, which enhances the safety of the electrical network.
Bibliometric analysis highlighting the role of women in addressing climate ch...IJECEIAES
Fossil fuel consumption increased quickly, contributing to climate change that is evident in unusual flooding, droughts, and global warming. Over the past ten years, women's involvement in society has grown dramatically, and they have succeeded in playing a noticeable role in reducing climate change. A bibliometric analysis of data from the last ten years has been carried out to examine the role of women in addressing climate change. The analysis's findings are discussed in relation to the sustainable development goals (SDGs), particularly SDG 7 and SDG 13. The results considered contributions made by women in various sectors while taking geographic dispersion into account. The bibliometric analysis delves into topics including women's leadership in environmental groups, their involvement in policymaking, their contributions to sustainable development projects, and the influence of gender diversity on attempts to mitigate climate change. This study's results highlight how women have influenced policies and actions related to climate change, point out areas of research deficiency, and offer recommendations on how to increase the role of women in addressing climate change and achieving sustainability. To achieve more successful results, this initiative aims to highlight the significance of gender equality and encourage inclusivity in climate change decision-making processes.
Voltage and frequency control of microgrid in presence of micro-turbine inter...IJECEIAES
The active and reactive load changes have a significant impact on voltage and frequency. In this paper, in order to stabilize the microgrid (MG) against load variations in islanding mode, the active and reactive power of all distributed generators (DGs), including energy storage (battery), diesel generator, and micro-turbine, are controlled. The micro-turbine generator is connected to the MG through a three-phase to three-phase matrix converter, and the droop control method is applied for controlling the voltage and frequency of the MG. In addition, a method is introduced for voltage and frequency control of micro-turbines in the transition from grid-connected mode to islanding mode. A novel switching strategy of the matrix converter is used for converting the high-frequency output voltage of the micro-turbine to the grid-side frequency of the utility system. Moreover, using the switching strategy, the low-order harmonics in the output current and voltage are not produced, and consequently, the size of the output filter would be reduced. In fact, the suggested control strategy is load-independent and has no frequency conversion restrictions. The proposed approach for voltage and frequency regulation demonstrates exceptional performance and favorable response across various load alteration scenarios. The suggested strategy is examined in several scenarios in the MG test systems, and the simulation results are addressed.
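The droop control mentioned above ties frequency to active power and voltage to reactive power; a minimal sketch with made-up per-unit droop coefficients:

```python
def droop(p, q, f0=50.0, v0=1.0, kp=0.01, kq=0.05):
    """P-f / Q-V droop characteristics: frequency sags with active power,
    voltage sags with reactive power (illustrative per-unit coefficients)."""
    return f0 - kp * p, v0 - kq * q
```

Because every DG shares the same characteristic, load increases pull the common frequency down slightly and each unit picks up power in proportion to its droop slope, without any communication link.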
Enhancing battery system identification: nonlinear autoregressive modeling fo...IJECEIAES
Precisely characterizing Li-ion batteries is essential for optimizing their performance, enhancing safety, and prolonging their lifespan across various applications, such as electric vehicles and renewable energy systems. This article introduces an innovative nonlinear methodology for system identification of a Li-ion battery, employing a nonlinear autoregressive with exogenous inputs (NARX) model. The proposed approach integrates the benefits of nonlinear modeling with the adaptability of the NARX structure, facilitating a more comprehensive representation of the intricate electrochemical processes within the battery. Experimental data collected from a Li-ion battery operating under diverse scenarios are employed to validate the effectiveness of the proposed methodology. The identified NARX model exhibits superior accuracy in predicting the battery's behavior compared to traditional linear models. This study underscores the importance of accounting for nonlinearities in battery modeling, providing insights into the intricate relationships between state-of-charge, voltage, and current under dynamic conditions.
Smart grid deployment: from a bibliometric analysis to a surveyIJECEIAES
Smart grids are one of the last decades' innovations in electrical energy. They bring relevant advantages compared to the traditional grid and significant interest from the research community. Assessing the field's evolution is essential to propose guidelines for facing new and future smart grid challenges. In addition, knowing the main technologies involved in the deployment of smart grids (SGs) is important to highlight possible shortcomings that can be mitigated by developing new tools. This paper contributes to the research trends mentioned above by focusing on two objectives. First, a bibliometric analysis is presented to give an overview of the current research level about smart grid deployment. Second, a survey of the main technological approaches used for smart grid implementation and their contributions is highlighted. To that effect, we searched the Web of Science (WoS) and the Scopus databases. We obtained 5,663 documents from WoS and 7,215 from Scopus on smart grid implementation or deployment. With the extraction limitation in the Scopus database, 5,872 of the 7,215 documents were extracted using a multi-step process. These two datasets have been analyzed using a bibliometric tool called bibliometrix. The main outputs are presented with some recommendations for future research.
Use of analytical hierarchy process for selecting and prioritizing islanding ...IJECEIAES
One of the problems associated with power systems is the islanding condition, which must be rapidly and properly detected to prevent any negative consequences on the system's protection, stability, and security. This paper offers a thorough overview of several islanding detection strategies, which are divided into two categories: classic approaches, including local and remote approaches, and modern techniques, including techniques based on signal processing and computational intelligence. Additionally, each approach is compared and assessed based on several factors, including implementation costs, non-detected zones, declining power quality, and response times, using the analytical hierarchy process (AHP). The multi-criteria decision-making analysis yields overall weights of passive methods (24.7%), active methods (7.8%), hybrid methods (5.6%), remote methods (14.5%), signal processing-based methods (26.6%), and computational intelligence-based methods (20.8%) based on the comparison of all criteria together. Thus, it can be seen from the total weights that hybrid approaches are the least suitable to be chosen, while signal processing-based methods are the most appropriate islanding detection method to be selected and implemented in power systems with respect to the aforementioned factors. Using Expert Choice software, the proposed hierarchy model is studied and examined.
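AHP priority weights like those quoted can be approximated from a pairwise comparison matrix by normalized column averaging; the 2x2 matrix below is made up for illustration and is not from the paper:

```python
def ahp_weights(pairwise):
    """Approximate the AHP principal eigenvector by averaging the
    normalised columns of the pairwise comparison matrix."""
    n = len(pairwise)
    col_sums = [sum(pairwise[i][j] for i in range(n)) for j in range(n)]
    norm = [[pairwise[i][j] / col_sums[j] for j in range(n)] for i in range(n)]
    return [sum(norm[i]) / n for i in range(n)]
```

For a consistent matrix (every entry `a[i][j]` equals `w[i]/w[j]`), this column-averaging shortcut reproduces the exact eigenvector; tools like Expert Choice additionally report a consistency ratio for inconsistent judgments.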
Enhancing of single-stage grid-connected photovoltaic system using fuzzy logi...IJECEIAES
The power generated by photovoltaic (PV) systems is influenced by environmental factors. This variability hampers the control and utilization of solar cells' peak output. In this study, a single-stage grid-connected PV system is designed to enhance power quality. Our approach employs fuzzy logic in the direct power control (DPC) of a three-phase voltage source inverter (VSI), enabling seamless integration of the PV connected to the grid. Additionally, a fuzzy logic-based maximum power point tracking (MPPT) controller is adopted, which outperforms traditional methods like incremental conductance (INC) in enhancing solar cell efficiency and minimizing the response time. Moreover, the inverter's real-time active and reactive power is directly managed to achieve a unity power factor (UPF). The system's performance is assessed through MATLAB/Simulink implementation, showing marked improvement over conventional methods, particularly in steady-state and varying weather conditions. For solar irradiances of 500 and 1,000 W/m2, the results show that the proposed method reduces the total harmonic distortion (THD) of the injected current to the grid by approximately 46% and 38% compared to conventional methods, respectively. Furthermore, we compare the simulation results with IEEE standards to evaluate the system's grid compatibility.
Enhancing photovoltaic system maximum power point tracking with fuzzy logic-b...IJECEIAES
Photovoltaic systems have emerged as a promising energy resource that caters to the future needs of society, owing to their renewable, inexhaustible, and cost-free nature. The power output of these systems relies on solar cell radiation and temperature. In order to mitigate the dependence on atmospheric conditions and enhance power tracking, a conventional approach has been improved by integrating various methods. To optimize the generation of electricity from solar systems, the maximum power point tracking (MPPT) technique is employed. To overcome limitations such as steady-state voltage oscillations and improve transient response, two traditional MPPT methods, namely the fuzzy logic controller (FLC) and perturb and observe (P&O), have been modified. This research paper aims to simulate and validate the step size of the proposed modified P&O and FLC techniques within the MPPT algorithm using MATLAB/Simulink for efficient power tracking in photovoltaic systems.
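The classic fixed-step P&O loop that such papers modify can be sketched as a hill climb; the quadratic power curve below is a stand-in for a real PV characteristic:

```python
def perturb_and_observe(power_of, v, step=0.1, iters=200):
    """Classic fixed-step P&O: keep perturbing the operating voltage in
    the direction that last increased the measured power."""
    p_prev = power_of(v)
    direction = 1
    for _ in range(iters):
        v += direction * step
        p = power_of(v)
        if p < p_prev:
            direction = -direction
        p_prev = p
    return v
```

The fixed step is exactly the weakness the abstract targets: once at the peak, the algorithm oscillates within one step of the maximum power point, which is why modified P&O and FLC variants adapt the step size.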
Adaptive synchronous sliding control for a robot manipulator based on neural ...IJECEIAES
Robot manipulators have become important equipment in production lines, medical fields, and transportation. Improving the quality of trajectory tracking for robot hands is always an attractive topic in the research community. This is a challenging problem because robot manipulators are complex nonlinear systems and are often subject to fluctuations in loads and external disturbances. This article proposes an adaptive synchronous sliding control scheme to improve trajectory tracking performance for a robot manipulator. The proposed controller ensures that the positions of the joints track the desired trajectory, synchronizes the errors, and significantly reduces chattering. First, the synchronous tracking errors and synchronous sliding surfaces are presented. Second, the synchronous tracking error dynamics are determined. Third, a robust adaptive control law is designed, the unknown components of the model are estimated online by the neural network, and the parameters of the switching elements are selected by fuzzy logic. The built algorithm ensures that the tracking and approximation errors are ultimately uniformly bounded (UUB). Finally, the effectiveness of the constructed algorithm is demonstrated through simulation and experimental results. Simulation and experimental results show that the proposed controller is effective, with small synchronous tracking errors, and the chattering phenomenon is significantly reduced.
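Chattering reduction in sliding-mode control is commonly achieved by smoothing the switching term; a minimal sketch, in which the gain and boundary-layer width are illustrative and the paper's fuzzy selection of these parameters is omitted:

```python
def smc_gain(s, k=5.0, phi=0.1):
    """Switching term of a sliding-mode law with a boundary layer:
    sat(s/phi) replaces sign(s), so the control is continuous near s = 0
    and the high-frequency chattering of a pure relay is suppressed."""
    return -k * max(-1.0, min(1.0, s / phi))
```

Far from the sliding surface the term saturates at ±k like a pure sign function; inside the boundary layer it degrades gracefully to a proportional action.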
Remote field-programmable gate array laboratory for signal acquisition and de...IJECEIAES
A remote laboratory utilizing field-programmable gate array (FPGA) technologies enhances students' learning experience anywhere and anytime in embedded system design. Existing remote laboratories prioritize hardware access and visual feedback for observing board behavior after programming, neglecting comprehensive debugging tools to resolve errors that require internal signal acquisition. This paper proposes a novel remote embedded-system design approach targeting FPGA technologies that is fully interactive via a web-based platform. Our solution provides FPGA board access and debugging capabilities beyond the visual feedback provided by existing remote laboratories. We implemented a lab module that users can seamlessly incorporate into their FPGA designs. The module minimizes hardware resource utilization while enabling the acquisition of a large number of data samples from the signal during experiments by adaptively compressing the signal prior to data transmission. The results demonstrate an average compression ratio of 2.90 across three benchmark signals, indicating efficient signal acquisition and effective debugging and analysis. This method allows users to acquire more data samples than conventional methods. The proposed lab allows students to remotely test and debug their designs, bridging the gap between theory and practice in embedded system design.
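The abstract does not specify the adaptive compression scheme; a minimal delta-encoding sketch shows why slowly varying signals compress well (the real module presumably works at the bit level in hardware):

```python
def delta_encode(samples):
    """Store the first sample plus successive differences; slowly varying
    signals yield many small deltas that pack into fewer bits."""
    return [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

def delta_decode(deltas):
    """Exact inverse of delta_encode: cumulative sum of the deltas."""
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out
```

The deltas of a smooth waveform fit in far fewer bits than the raw samples, which is the effect behind a compression ratio like the 2.90 reported above.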
Detecting and resolving feature envy through automated machine learning and m...IJECEIAES
Efficiently identifying and resolving code smells enhances software project quality. This paper presents a novel solution, utilizing automated machine learning (AutoML) techniques, to detect code smells and apply move method refactoring. By evaluating code metrics before and after refactoring, we assessed its impact on coupling, complexity, and cohesion. Key contributions of this research include a unique dataset for code smell classification and the development of models using AutoGluon for optimal performance. Furthermore, the study identifies the top 20 influential features in classifying feature envy, a well-known code smell, stemming from excessive reliance on external classes. We also explored how move method refactoring addresses feature envy, revealing reduced coupling and complexity, and improved cohesion, ultimately enhancing code quality. In summary, this research offers an empirical, data-driven approach, integrating AutoML and move method refactoring to optimize software project quality. Insights gained shed light on the benefits of refactoring on code quality and the significance of specific features in detecting feature envy. Future research can expand to explore additional refactoring techniques and a broader range of code metrics, advancing software engineering practices and standards.
Smart monitoring technique for solar cell systems using internet of things ba...IJECEIAES
Rapidly and remotely monitoring and receiving solar cell system status parameters, solar irradiance, temperature, and humidity, are critical issues in enhancing their efficiency. Hence, in the present article an improved smart prototype of an internet of things (IoT) technique based on an embedded system using the NodeMCU ESP8266 (ESP-12E) was carried out experimentally. Three different regions in Egypt, the cities of Luxor, Cairo, and El-Beheira, were chosen to study their solar irradiance profile, temperature, and humidity with the proposed IoT system. The monitored solar irradiance, temperature, and humidity data were visualized live directly on Ubidots through the hypertext transfer protocol (HTTP). The measured solar power radiation in Luxor, Cairo, and El-Beheira ranged between 216-1000, 245-958, and 187-692 W/m2 respectively during the solar day. The accuracy and rapidity of obtaining monitoring results using the proposed IoT system made it a strong candidate for application in monitoring solar cell systems. On the other hand, the obtained solar power radiation results of the three considered regions strongly recommend Luxor and Cairo as suitable places to build a solar cell system station rather than El-Beheira.
An efficient security framework for intrusion detection and prevention in int...IJECEIAES
Over the past few years, the internet of things (IoT) has advanced to connect billions of smart devices to improve quality of life. However, anomalies or malicious intrusions pose several security loopholes, leading to performance degradation and threats to data security in IoT operations. Therefore, IoT security systems must monitor and restrict unwanted events from occurring in the IoT network. Recently, various technical solutions based on machine learning (ML) models have been derived towards identifying and restricting unwanted events in IoT. However, most ML-based approaches are prone to misclassification due to inappropriate feature selection. Additionally, most ML approaches applied to intrusion detection and prevention consider supervised learning, which requires a large amount of labeled data to be trained. Consequently, such complex datasets are impossible to source in a large network like IoT. To address this problem, this study introduces an efficient learning mechanism to strengthen IoT security. The proposed algorithm incorporates supervised and unsupervised approaches to improve the learning models for intrusion detection and mitigation. Compared with related works, the experimental outcome shows that the model performs well on a benchmark dataset, accomplishing an improved detection accuracy of approximately 99.21%.
Comparative analysis between traditional aquaponics and reconstructed aquapon...bijceesjournal
The aquaponic system of planting is a method that does not require soil. It needs only water, fish, lava rocks (a substitute for soil), and plants. Aquaponic systems are sustainable and environmentally friendly. Their use not only helps with planting in small spaces but also helps reduce artificial chemical use and minimizes excess water use, as aquaponics consumes 90% less water than soil-based gardening. The study applied a descriptive and experimental design to assess and compare conventional and reconstructed aquaponic methods for propagating tomatoes. The researchers created an observation checklist to determine the significant factors of the study. The study aims to determine the significant difference between traditional and reconstructed aquaponics systems propagating tomatoes in terms of height, weight, girth, and number of fruits. The reconstructed aquaponics system's higher growth yield results in a much more nourished crop than the traditional aquaponics system; it is superior in its number of fruits, height, weight, and girth measurements. Moreover, the reconstructed aquaponics system is proven to eliminate all the hindrances present in the traditional aquaponics system, which are overcrowding of fish, algae growth, pest problems, contaminated water, and dead fish.
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024Sinan KOZAK
Sinan from the Delivery Hero mobile infrastructure engineering team shares a deep dive into performance acceleration with Gradle build cache optimizations. Sinan shares their journey into solving complex build-cache problems that affect Gradle builds. By understanding the challenges and solutions found in our journey, we aim to demonstrate the possibilities for faster builds. The case study reveals how overlapping outputs and cache misconfigurations led to significant increases in build times, especially as the project scaled up with numerous modules using Paparazzi tests. The journey from diagnosing to defeating cache issues offers invaluable lessons on maintaining cache integrity without sacrificing functionality.
Discover the latest insights on Data Driven Maintenance with our comprehensive webinar presentation. Learn about traditional maintenance challenges, the right approach to utilizing data, and the benefits of adopting a Data Driven Maintenance strategy. Explore real-world examples, industry best practices, and innovative solutions like FMECA and the D3M model. This presentation, led by expert Jules Oudmans, is essential for asset owners looking to optimize their maintenance processes and leverage digital technologies for improved efficiency and performance. Download now to stay ahead in the evolving maintenance landscape.
artificial intelligence and data science contents.pptxGauravCar
What is artificial intelligence? Artificial intelligence is the ability of a computer or computer-controlled robot to perform tasks that are commonly associated with the intellectual processes characteristic of humans, such as the ability to reason.
› ...
Artificial intelligence (AI) | Definitio
Null Bangalore | Pentesters Approach to AWS IAMDivyanshu
#Abstract:
- Learn more about the real-world methods for auditing AWS IAM (Identity and Access Management) as a pentester. So let us proceed with a brief discussion of IAM as well as some typical misconfigurations and their potential exploits in order to reinforce the understanding of IAM security best practices.
- Gain actionable insights into AWS IAM policies and roles, using hands on approach.
#Prerequisites:
- Basic understanding of AWS services and architecture
- Familiarity with cloud security concepts
- Experience using the AWS Management Console or AWS CLI.
- For hands on lab create account on [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
# Scenario Covered:
- Basics of IAM in AWS
- Implementing IAM Policies with Least Privilege to Manage S3 Bucket
- Objective: Create an S3 bucket with least privilege IAM policy and validate access.
- Steps:
- Create S3 bucket.
- Attach least privilege policy to IAM user.
- Validate access.
- Exploiting IAM PassRole Misconfiguration
-Allows a user to pass a specific IAM role to an AWS service (ec2), typically used for service access delegation. Then exploit PassRole Misconfiguration granting unauthorized access to sensitive resources.
- Objective: Demonstrate how a PassRole misconfiguration can grant unauthorized access.
- Steps:
- Allow user to pass IAM role to EC2.
- Exploit misconfiguration for unauthorized access.
- Access sensitive resources.
- Exploiting IAM AssumeRole Misconfiguration with Overly Permissive Role
- An overly permissive IAM role configuration can lead to privilege escalation by creating a role with administrative privileges and allow a user to assume this role.
- Objective: Show how overly permissive IAM roles can lead to privilege escalation.
- Steps:
- Create role with administrative privileges.
- Allow user to assume the role.
- Perform administrative actions.
- Differentiation between PassRole vs AssumeRole
Try at [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
Int J Elec & Comp Eng, ISSN: 2088-8708 — Shape and Level Bottles Detection using Local Standard Deviation and Hough … (Norhashimah M.S.)
statistical histogram and fuzzy c-means, which make the algorithm more complex and prone to misclassifying healthy and defective apples.
For level detection, a liquid level inspection system was developed by [7] based on the infinite symmetrical edge filter (ISEF) detection technique to replace the traditional quality inspection performed by a human operator. The process includes image cropping and normalizing, filtering and detecting. The analysis results show that the proposed technique gives better detection of overfill and underfill of the liquid level. However, the technique is not suitable for high-quality images because it over-segments them. Furthermore, [8] proposed a feature extraction and edge detection algorithm to inspect the fill level and cap in a bottling machine. Classification of liquid level and cap closure is done using a neural network (NN) technique. The proposed technique proves that it can inspect the liquid level and cap of a bottle accurately. Nevertheless, low or empty liquid levels cannot be detected, and too many pre-processing steps reduce the image quality.
Based on the literature, two main problems occur during beverage bottle quality inspection. First, manual inspection is highly prone to human error. Second, an inappropriately chosen image processing technique is prone to under- or over-segmentation, which in turn leads to loss of precision. Hence, this paper proposes a new shape and level analysis technique for a beverage quality inspection system using local standard deviation and the Hough transform. The contrast enhancement provided by LSD gives different contrast levels in order to prevent under- and over-segmentation of the image. Thus, the shape of the image can be segmented more accurately than with previous techniques, and the method is practically appropriate for an industrial real-time inspection system [9]. Meanwhile, the Hough transform is proposed because of its robustness to noise and its capability to detect lines without complete information, thereby achieving better accuracy in detecting multiple objects compared to previous techniques [10].
2. RESEARCH METHOD
The analysis of shape and level defect detection is done using MATLAB. The framework of the analysis is shown in Figure 1. 100 sample images are used for shape defect detection and 55 sample images for level defect detection. The sample images are captured using a 12-megapixel Canon D3100 digital camera. The captured images are then pre-processed to eliminate noise and enhance brightness for further analysis.
Figure 1. Framework analysis
2.1. Pre-Processing
Pre-processing is a common operation to standardize the image in order to minimize the complexity of the algorithm [11]. Noise occurring in the image may add complexity during the segmentation process. Besides, the background may have gray-level values similar to certain bottle structures. Therefore, pre-processing is carried out to correct the image for further analysis.
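The paper does not name the specific noise-removal filter used. As an illustrative sketch only (the 3×3 median window and brightness gain are assumptions, not the authors' exact method), the denoise-then-brighten step could look like:

```python
import numpy as np

def preprocess(img, brightness_gain=1.2):
    """Suppress impulse noise with a 3x3 median filter, then scale brightness.
    Window size and gain are illustrative assumptions."""
    padded = np.pad(img.astype(float), 1, mode="edge")
    # Stack the 9 shifted views of the 3x3 neighbourhood, take per-pixel median.
    stack = np.stack([padded[r:r + img.shape[0], c:c + img.shape[1]]
                      for r in range(3) for c in range(3)])
    denoised = np.median(stack, axis=0)
    return np.clip(denoised * brightness_gain, 0, 255)

noisy = np.full((5, 5), 100.0)
noisy[2, 2] = 255.0                      # a single impulse-noise pixel
clean = preprocess(noisy, brightness_gain=1.0)
print(clean[2, 2])                       # impulse removed -> 100.0
```

The median filter is chosen here because it removes isolated bright pixels without blurring the bottle edges the later segmentation step depends on.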
[Figure 1 flowchart: sample image → pre-processing → segmentation (LSD) → shape features (height, width, area, extent) and level features via Hough transform (maximum and minimum level) → decision tree classifier]
Int J Elec & Comp Eng, Vol. 8, No. 6, December 2018: 5032-5040
2.2. Segmentation using Local Standard Deviation
The Local Standard Deviation (LSD) is part of the Adaptive Contrast Enhancement (ACE) function used to segment the shape of the image. ACE is known as an unsharp masking technique, where the function is applied to the unsharp mask to amplify the low-frequency components of the image. Besides, ACE adjusts the contrast gain of the image to a suitable value. The general equation [12] is defined as

f(i,j) = m_x(i,j) + G(i,j)[x(i,j) − m_x(i,j)] (1)

where m_x(i,j) is the local mean, G(i,j) is the contrast gain and x(i,j) is the gray-scale value at pixel (i,j).
In ACE, the function of LSD is to adjust the contrast gain of the image from low to high and from high to low [13]. Ringing and noise may occur if the contrast gain is set manually. Since the ACE function is inversely proportional to the LSD, the background can be identified and eliminated at the edge of the bottle. The unwanted contrast gain in the image is eliminated by enhancing the contrast gain from low to high using the LSD function [14]. The function of LSD is expressed as

f(i,j) = m_x(i,j) + (D / σ_x(i,j))[x(i,j) − m_x(i,j)] (2)

where D is a constant value for contrast and σ_x(i,j) is the local standard deviation. Compared with equation (1), the contrast gain is now controlled automatically by σ_x(i,j). Therefore, a region with a small σ_x(i,j) leads to an unexpectedly high contrast gain, and vice versa.
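Equation (2) can be sketched in pure NumPy as follows; the window size and the constant D are illustrative assumptions, not values reported in the paper:

```python
import numpy as np

def local_stats(img, w=7):
    """Local mean and standard deviation over a w x w window (edge-padded)."""
    p = np.pad(img.astype(float), w // 2, mode="edge")
    win = np.stack([p[r:r + img.shape[0], c:c + img.shape[1]]
                    for r in range(w) for c in range(w)])
    return win.mean(axis=0), win.std(axis=0)

def ace_lsd(img, D=30.0, eps=1e-6):
    """Adaptive contrast enhancement, eq. (2): gain = D / local std.
    Flat regions (small sigma) get high gain; busy regions get low gain."""
    m, sigma = local_stats(img)
    gain = D / (sigma + eps)             # inversely proportional to the LSD
    return m + gain * (img - m)

flat = np.full((10, 10), 50.0)
print(ace_lsd(flat)[0, 0])               # flat region is unchanged: x - m = 0
```

On a perfectly flat region the output equals the input regardless of the gain, while near the bottle edge the large x − m term is amplified, which is what separates the bottle outline from the background.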
2.3. Hough Transform
The Hough transform is applied to the binary image to detect the lines of the water level in the bottle. The white pixels in the image create a locus of reference points that accumulate in the accumulator array of the Hough space. A voting process is involved in line detection, where every pixel in the image votes for all possible lines corresponding to its reference point. A line passing through several feature points produces a maximum point, which is indicated as a reference point in the Hough space. Generally, the (x, y) space is converted into the (ρ, θ) space, which can be expressed as

ρ = x cos θ + y sin θ (3)

where ρ is the length of the line from the origin and θ is the angle of ρ with respect to the x direction. Figure 2 shows the conversion of the (x, y) space into the (ρ, θ) space, where lines are created based on several points in order to find the maximum and minimum values.
Figure 2. Created lines using Hough transform
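The voting scheme of equation (3) can be sketched with a minimal accumulator; the angular resolution and image size below are illustrative assumptions:

```python
import numpy as np

def hough_lines(binary, n_theta=180):
    """Vote each white pixel into a (rho, theta) accumulator per eq. (3)."""
    ys, xs = np.nonzero(binary)
    thetas = np.deg2rad(np.arange(n_theta) - 90)        # -90 .. 89 degrees
    diag = int(np.ceil(np.hypot(*binary.shape)))        # largest possible |rho|
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    for x, y in zip(xs, ys):
        # Each pixel votes once per theta: rho = x cos(theta) + y sin(theta)
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    return acc, thetas, diag

img = np.zeros((50, 50), dtype=bool)
img[20, :] = True                                       # horizontal "water level" at row 20
acc, thetas, diag = hough_lines(img)
r, t = np.unravel_index(acc.argmax(), acc.shape)
print(np.rad2deg(thetas[t]), r - diag)                  # strongest bin: theta ±90 deg, |rho| = 20
```

All 50 pixels of the horizontal line collapse into one (ρ, θ) bin, so the peak of the accumulator directly gives the water-level line even if some pixels are missing, which is the robustness property cited in [10].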
2.4. Feature Extraction
In image analysis, feature extraction is required as input for the classification process. A set of features is extracted from the segmented images based on shape and level. For the shape features, parameters such as height, width, area and extent are extracted. Meanwhile, the water level feature is extracted by finding the maximum and minimum values of the water level. These features are important for classifying the defects of the image.
2.4.1. Shape Detection
Four parameters are extracted as shape features: height, width, area and extent. Figure 3 shows the dimensions of the bounding box in the binary image that is used to obtain the area of the bottle. Based on Figure 3, the height of the image is 4160 pixels while the width is 3120 pixels. The area of the bottle is calculated from the white pixels. In determining the extent value, the area of the object is divided by the area of the bounding box in order to classify the shape defect of the bottle. The extent value is calculated as follows

Extent = Area of object / Area of bounding box (4)
The white pixels represent the shape of the bottle while the black pixels represent the background, as illustrated in Figure 3. Hence, the area of the object is determined by the number of white pixels. In addition, the area of the bounding box is obtained by multiplying the height and width, which can be expressed as

Area of bounding box = height × width (5)
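Equations (4) and (5) amount to a few lines of array code; the toy 10×10 image below is an illustrative stand-in for the segmented bottle:

```python
import numpy as np

def extent_feature(binary):
    """Extent = object area / bounding-box area, per eqs. (4) and (5)."""
    ys, xs = np.nonzero(binary)
    height = ys.max() - ys.min() + 1
    width = xs.max() - xs.min() + 1
    object_area = int(binary.sum())          # count of white pixels
    bbox_area = height * width               # eq. (5)
    return object_area / bbox_area           # eq. (4)

bottle = np.zeros((10, 10), dtype=bool)
bottle[2:8, 3:7] = True                      # a 6 x 4 solid rectangle
print(extent_feature(bottle))                # solid shape fills its box -> 1.0
```

A dented or crushed bottle covers fewer white pixels inside the same bounding box, so its extent drops below that of a good bottle, which is exactly the signal the classifier thresholds on.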
Figure 3. Bounding box dimension
2.4.2. Level Detection
The water level feature is extracted using the Hough transform, which creates tangent lines for measuring the distance between the maximum and minimum values. Figure 4 shows the maximum and minimum lines of the water level. Each water level is labeled as illustrated in Figure 4.
Figure 4. Label for level of water
The level of water is classified into three conditions: good, overfilled and underfilled. From Figure 4, the level of water y is calculated as the distance between the maximum (x_max) and minimum (x_min), as formulated in equation (6). The z_max and z_min values are set based on the good-condition value of the bottle. The condition is good when the level is between z_min and z_max. If the level is above z_max, it is considered overfilled. Otherwise, if the level is below z_min, the condition is underfilled.

y = x_max − x_min (6)
2.5. Decision Tree Classifier
In order to classify shape and level defects, a decision tree is implemented in this study. The tree consists of branches, nodes and leaves that are used for the classification process. Branches of the tree indicate values, nodes represent the names of data, and leaves show the class labels [15]. Figure 5 illustrates the training process of the decision tree, which includes shape and level classification. Based on Figure 5, the decision tree model learns the extracted features of the bottle, which are the extent value and the level. If the feature is the extent value, the shape of the bottle is classified as either good or defect. When the feature is the level value, the water level is classified into three conditions: good, overfilled and underfilled.
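The two branches of the tree described above reduce to simple threshold rules. The sketch below mirrors that structure; the threshold values are illustrative assumptions, since the actual thresholds x and y are learned from the training set and are not reported in the text:

```python
def classify_bottle(feature, value, x, y):
    """Two-branch decision tree as described for Figure 5.
    'extent' -> good/defect shape; 'level' -> good/overfilled/underfilled.
    Thresholds x and y are placeholders for the learned values."""
    if feature == "extent":
        return "good" if x <= value <= y else "defect"
    if feature == "level":
        if value < x:
            return "underfilled"
        if value > y:
            return "overfilled"
        return "good"
    raise ValueError("unknown feature")

print(classify_bottle("extent", 0.55, 0.5, 0.7))   # extent inside range -> good
print(classify_bottle("level", 90, 120, 180))      # level below z_min -> underfilled
```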
Figure 5. Decision tree process
3. RESULTS AND ANALYSIS
Figure 6 shows four bottles of different colors with the same size and level, which are used in this analysis as reference images.
Figure 6. Reference image
Local standard deviation (LSD) uses a contrast enhancement technique to segment the shape of the bottle. The segmentation is applied to classify the shape condition as either good or defect. Figures 7(a) to 7(d) show the shape analysis process for a defect bottle. The process begins with the conversion of the red, green and blue (RGB) color image in Figure 7(a) into the grayscale image in Figure 7(b). The conversion reduces the complexity from 3-dimensional pixels to 2-dimensional pixels. The grayscale image is then transformed into a standard deviation image, as shown in Figure 7(c). In this process, the standard deviation algorithm is applied to segment the shape of the bottle by separating the bottle from the background.
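The RGB-to-grayscale step that collapses three channels into one can be sketched as a weighted sum; the ITU-R BT.601 luminance weights below are a common convention, assumed here rather than stated in the paper:

```python
import numpy as np

def rgb_to_gray(rgb):
    """Luminance-weighted RGB -> grayscale: 3 channels collapse to 1."""
    return rgb @ np.array([0.2989, 0.5870, 0.1140])

rgb = np.zeros((2, 2, 3))
rgb[..., 0] = 255.0                      # a pure-red image
print(rgb_to_gray(rgb)[0, 0])            # only the red-channel weight contributes
```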
The background noise at the edge of the bottle can be removed by increasing the value of the LSD, resulting in a reduction of the contrast gain of the image. Therefore, the shape of the bottle is outlined with a white line to separate the shape of the bottle from the background. The standard deviation image is converted into a binary image, as represented in Figure 7(d), in which the shape of the bottle is presented in white pixels (1) while the background is in black pixels (0).
Finally, the parameters extracted from Figure 7(d) are used as input for the classification process. Four parameters, namely height, width, area and extent, are extracted from the good and defect shape images. The extent parameter is calculated from the ratio of the bottle area to the bounding box area, as stated in equation (4). The area of the bounding box is obtained from equation (5) by multiplying the height and width values. As expected, both sample images have a similar bounding box area but different bottle areas. The area value for the good bottle is higher than for the defect bottle, which then influences the extent value.
(a) (b) (c) (d)
Figure 7. Shape detection process: (a) input stage, (b) pre-processing stage, (c) LSD stage, (d) feature extraction stage
The implementation of the Hough transform allows the maximum and minimum water levels to be estimated using the voting process. The level defect detection process is shown in Figures 8(a) to 8(d). The reference sample image in Figure 8(a) is converted into a grayscale image. Then, the LSD technique is applied to segment the water level, as shown in Figure 8(b). Figure 8(c) illustrates the several lines created using the Hough transform technique to find the maximum and minimum level values based on the white pixels of the image. The line detection image is shown in Figure 8(d), in which the blue line presents the minimum level and the red line indicates the maximum level. The process is repeated for the overfilled and underfilled conditions, as shown graphically in Figures 9(a) and 9(b).
(a) (b) (c) (d)
Figure 8. Good level image: (a) Sample image, (b) segmented image, (c) Hough transform image, (d) line
detection image
(a) (b)
Figure 9. Level image: (a) Underfilled, (b) overfilled
To obtain the threshold value of the water level, the distance of the water level is calculated between the maximum and minimum lines using equation (6). Figure 10 shows three misclassified points (red markers) plotted in the scatter diagram. In the defect condition, there are cases where the decision tree misclassifies the shape of the bottle as the good condition. Such cases lead to inaccuracy that is inappropriate for a real-time application. Figure 11 illustrates the scatter plot diagram that classifies the sample images based on the level condition: good, overfilled and underfilled. Based on Figure 11, the decision tree misclassifies some points (blue markers) as the good condition. The actual classes of these misclassified points are the underfilled and overfilled conditions.
Figure 10. Scatter plot diagram of shape defect classification using LSD technique
Figure 11. Scatter plot diagram of level defect classification using Hough transform
The comparison in terms of accuracy between the shape and level detection techniques is shown in Figure 12. From the findings, the LSD technique shows the highest accuracy at 97%, followed by SHFCM at 91% and morphological operation at 80%. In this case, the outcome using morphological operation is error-prone, with a large difference of 17% compared to LSD. The aim of level detection is to find an appropriate analysis that can detect the liquid level of a bottle accurately. Hence, the Hough transform has proven that it can perform well in detecting the liquid level, achieving 93% accuracy with the decision tree classifier.
Figure 12. Performance analysis for shape and level defect detection
4. CONCLUSION
In this paper, the analysis of the shape and level of bottles is presented using local standard deviation and the Hough transform. The analysis involves pre-processing, segmentation, feature extraction and classification. The sample image is pre-processed to eliminate noise, which otherwise adds complexity during the segmentation process. In segmenting the image, the LSD technique is applied by enhancing the contrast gain of the image from low to high. For the shape features, the extent parameter is calculated to determine the area ratio of the bottle. Meanwhile, the Hough transform is used to extract the water level features, namely the maximum and minimum levels. The decision tree classifier is applied to classify the shape and level as either good or defect. The performance of the proposed techniques is verified in terms of accuracy. The experimental results show that 97% and 93% accuracy is achieved for shape and level detection, respectively. Thus, the proposed techniques demonstrate the potential to detect the shape and level of beverage products.
ACKNOWLEDGEMENTS
The authors would like to thank the Universiti Teknikal Malaysia Melaka (UTeM), UTeM Zamalah
Scheme, Rehabilitation Engineering & Assistive Technology (REAT) research group under Center of
Robotics & Industrial Automation (CeRIA), Advanced Digital Signal Processing (ADSP) Research
Laboratory and Ministry of Higher Education (MOHE), Malaysia for sponsoring this work under project
GLuar/STEVIA/2016/FKE-CeRIA/l00009 and the use of the existing facilities to complete this project.
REFERENCES
[1] K.B. Kim, H.J. Park and D.H. Song. Vision-based Crack Identification on the Concrete Slab Surface using Fuzzy
Reasoning Rules and Self-Organizing. International Journal of Electrical and Computer Engineering (IJECE),
6(4), pp.1577-1586, 2016.
[2] N.N.S.A. Rahman, N.M. Saad, A.R. Abdullah, M.R.M. Hassan, M.S.S.M. Basir and N.S.M. Noor. Automated Real-Time Vision Quality Inspection Monitoring System. Indonesian Journal of Electrical Engineering and Computer Science, 11(2), pp. 775-783, 2018.
[3] G. Moradi, M. Shamsi, M. H. Sedaaghi, and S. Moradi, “Apple defect detection using statistical histogram based
Fuzzy C-means algorithm”, Inst. Electr. Electron. Eng., pp. 11–15, 2011.
[4] M. Park, J.S. Jin, S.L. Au, S. Luo, and Y. Cui, “Automated Defect Inspection System by Pattern Recognition”,
Proc. 5th Int. Conf. Image Graph. ICIG 2009, vol. 2, no. 2, pp. 768–773, 2009.
[5] M.A.M. Fuad, M.R. Ab Ghani., R. Ghazali, M.F. Sulaima, M.H. Jali, T. Sutikno, T.A. Izzuddin and Z. Jano. “A
Review on Methods of Identifying and Counting Aedes Aegypti Larvae using Image Segmentation
Technique”. TELKOMNIKA (Telecommunication Computing Electronics and Control), 15(3), pp.1199-1206, 2017.
[6] S. Ramli, M.M. Mustafa, A. Hussain, and D.A. Wahab, “Plastic Bottle Shape Classification Using Partial Erosion-
based Approach”, Mod. Appl. Sci., vol. 6, no. 4, pp. 77–83, 2012.
[7] K.J. Pithadiya, C.K. Modi, and J.D. Chauhan, “Machine Vision Based Liquid Level Inspection System using ISEF
Edge detection Technique”, Int. Conf. Work. Emerg. Trends Technol. (ICWET), no. Icwet, pp. 601–605, 2010.
[8] L. Yazdi, A.S. Prabuwono, and E. Golkar, “Feature extraction algorithm for fill level and cap inspection in bottling
machine”, Proc. 2011 Int. Conf. Pattern Anal. Intell. Robot. ICPAIR 2011, vol. 1, no. June, pp. 47–52, 2011.
[9] Y. Yang and Z. Zhou, “Research and Implementation of Image Enhancement Algorithm Based on Local Mean and
Standard Deviation”, IEEE Symp. Electr. Electron. Eng., pp. 375–378, 2012.
[10] O. Barinova, V. Lempitsky, and P. Kohli, “On detection of multiple object instances using Hough transform”, IEEE
Trans. Pattern Anal. Mach. Intell., vol. 34, no. 9, pp. 1773–1784, 2010.
[11] R. Perera and S. Premasiri, “Hardware Implementation of Essential Pre-Processing & Morphological Operations in
Image Processing”, Natl. Conf. Technol. Manag., pp. 3–7, 2017.
[12] J. Xie, Y. Zhou and L. Ding. Local standard deviation spectral clustering. In Big Data and Smart Computing
(BigComp), 2018 IEEE International Conference on (pp. 242-250). IEEE. 2018, January.
[13] B.S. Prakoso, I.K. Timotius, and I. Setyawan, “Palmprint Identification for User Verification based on Line
Detection and Local Standard Deviation”, 2014 1st Int. Conf. Inf. Technol. Comput. Electr. Eng. Green Technol. Its
Appl. a Better Futur. ICITACEE 2014 - Proc., pp. 155–159, 2014.
[14] A. Kaur and C. Singh. Contrast enhancement for cephalometric images using wavelet-based modified adaptive
histogram equalization. Applied Soft Computing, 51, pp.180-191, 2017.
[15] V.S. Tallapragada, D.M. Reddy, P.S. Kiran and D.V. Reddy. A Novel Medical Image Segmentation and
Classification using Combined Feature Set and Decision Tree Classifier. International Journal of Research in
Engineering and Technology, 4(9), pp.83-86, 2016.
[16] S.A. Ludwig, S. Picek and D. Jakobovic. Classification of Cancer Data: Analyzing Gene Expression Data Using a
Fuzzy Decision Tree Algorithm. In Operations Research Applications in Health Care Management (pp. 327-347).
Springer, Cham, 2018.
BIOGRAPHIES OF AUTHORS
Nor Nabilah Syazana binti Abdul Rahman has received her B. Eng. from Universiti Teknikal
Malaysia in 2016. She is currently pursuing her Master Eng. in Universiti Teknikal Malaysia.
Her research areas are in image processing and computer vision for product quality inspection
system.
Dr. Norhashimah binti Mohd Saad is currently a senior lecturer in the Computer Department, FKEKK, UTeM. She completed her Bachelor of Engineering, Master of Engineering, and PhD in Medical Image Processing at UTM, Malaysia.
Associate Prof. Dr. Abdul Rahim bin Abdullah received his B. Eng., Master Eng. and PhD degrees from Universiti Teknologi Malaysia in 2001, 2004 and 2011, respectively, in Electrical Engineering and Digital Signal Processing. He is currently an Associate Professor with the Department of Electrical Engineering at Universiti Teknikal Malaysia Melaka (UTeM).