Biometrics is the science of measuring and statistically analyzing biological data. A biometric system establishes the identity of a person based on unique physical or behavioral characteristics possessed by an individual. Behavioral biometrics measures characteristics that are acquired naturally over time, while physical biometrics measures inherent physical characteristics of an individual. Over the last few decades, enormous attention has been drawn towards ocular biometrics, and cues provided by the ocular region have led to the exploration of newer traits. The feasibility of the periocular region as a useful biometric trait has been explored recently, and with the promising results of preliminary examination, research on the periocular region is currently gaining prominence. Researchers have analyzed various techniques of feature extraction and classification in the periocular region. This paper investigates the effect of using the Lower Central Periocular Region (LCPR) for identification. The results obtained are comparable with those acquired for full periocular texture features, with the advantage of a reduced periocular area.
PERIOCULAR RECOGNITION USING REDUCED FEATURES - ijcsit
The Biometric Algorithm based on Fusion of DWT Frequency Components of Enhanc... - CSCJournals
Biometrics is used to authenticate a person more effectively than conventional methods of identification. In this paper we propose a biometric algorithm based on the fusion of Discrete Wavelet Transform (DWT) frequency components of an enhanced iris image. The iris template is extracted from an eye image by considering horizontal pixels in the iris part. The iris template contrast is enhanced using Adaptive Histogram Equalization (AHE) and Histogram Equalization (HE). The DWT is applied to the enhanced iris template. The features are formed by straight-line fusion of the low- and high-frequency coefficients of the DWT. The Euclidean distance is used to compare the final test features with database features. It is observed that the performance parameters are better in the case of the proposed algorithm compared to existing algorithms.
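The DWT fusion pipeline above can be illustrated with a minimal single-level Haar transform; this is a hedged sketch, not the paper's implementation (the names `haar_dwt2` and `dwt_fusion_features` are illustrative, the averaging normalization is one common choice, even image dimensions are assumed, and the AHE/HE enhancement step is omitted):

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2D Haar DWT (averaging form); returns the LL, LH, HL, HH
    sub-bands. Assumes even image dimensions."""
    img = np.asarray(img, float)
    a = img[0::2, :] + img[1::2, :]   # row low-pass
    d = img[0::2, :] - img[1::2, :]   # row high-pass
    LL = (a[:, 0::2] + a[:, 1::2]) / 4.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 4.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 4.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 4.0
    return LL, LH, HL, HH

def dwt_fusion_features(template):
    """'Straight-line' fusion: concatenate the flattened low- and
    high-frequency coefficients into one feature vector."""
    return np.concatenate([b.ravel() for b in haar_dwt2(template)])

def euclidean(f1, f2):
    """Euclidean distance between two feature vectors."""
    return float(np.linalg.norm(f1 - f2))
```

On a flat image the detail bands vanish and only LL carries energy, which is the usual sanity check for a DWT implementation.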
IRDO: Iris Recognition by fusion of DTCWT and OLBP - IJERA Editor
This document proposes a new iris recognition method called IRDO that fuses Dual Tree Complex Wavelet Transform (DTCWT) and Overlapping Local Binary Pattern (OLBP) features. DTCWT is used to extract micro-texture features from the iris, while OLBP enhances the extraction of edge features. Fusing these two methods results in improved matching performance and classification accuracy compared to state-of-the-art techniques. The proposed IRDO method achieves higher iris recognition rates as measured by Total Success Rate and Equal Error Rate.
IRJET- Performance Analysis of Learning Algorithms for Automated Detection of... - IRJET Journal
This document presents research on using machine learning algorithms to detect glaucoma from fundus images. The researchers extracted various texture and intensity features from fundus images using methods like gray level co-occurrence matrix, histogram of oriented gradients, wavelet transforms, and shearlet transforms. They used feature selection to reduce the number of features before classifying the images as normal or glaucomatous using support vector machines and K-nearest neighbors algorithms. The best accuracy of 97.5% was achieved using wavelet features. The researchers evaluated the classification performance using metrics like accuracy, sensitivity, specificity, and predictive values to assess the ability of the extracted features to detect glaucoma.
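One of the texture descriptors mentioned, the gray level co-occurrence matrix, can be computed directly. The sketch below (functions `glcm` and `contrast` are illustrative, not the researchers' code) builds the matrix for a single pixel offset and derives one standard Haralick statistic:

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Normalized gray level co-occurrence matrix for offset (dy, dx).
    img must contain integer gray levels in [0, levels)."""
    M = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            M[img[y, x], img[y + dy, x + dx]] += 1
    return M / M.sum()

def contrast(M):
    """Haralick contrast: sum over (i, j) of M[i, j] * (i - j)^2."""
    i, j = np.indices(M.shape)
    return float(np.sum(M * (i - j) ** 2))
```

In practice images are first quantized to a small number of gray levels and several offsets/orientations are averaged; this sketch shows only the core counting step.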
EFFECTIVENESS OF FEATURE DETECTION OPERATORS ON THE PERFORMANCE OF IRIS BIOME... - IJNSA Journal
Iris recognition is a highly efficient biometric identification system with great possibilities for the future in the security systems area. Its robustness and unobtrusiveness, as opposed to most of the currently deployed systems, make it a good candidate to replace most of the security systems around. By making use of the distinctiveness of iris patterns, iris recognition systems obtain a unique mapping for each person; identification of this person is possible by applying an appropriate matching algorithm. In this paper, Daugman's Rubber Sheet model is employed for iris normalization and unwrapping, descriptive statistical analysis of different feature detection operators is performed, the extracted features are encoded using Haar wavelets, and Hamming distance is used as the matching algorithm for classification. The system was tested on the UBIRIS database. The Canny edge detection algorithm is found to be the best one for extracting most of the iris texture. The success rate of feature detection using Canny is 81%, the False Accept Rate is 9%, and the False Reject Rate is 10%.
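The Haar-wavelet encoding and Hamming-distance matching described above reduce to two small operations: binarizing filter responses by sign, and counting disagreeing bits (optionally restricted to bits marked valid in both templates). This is a generic sketch with illustrative function names, not the paper's implementation:

```python
import numpy as np

def encode(responses):
    """Binarize wavelet/filter responses into a binary iris code by sign."""
    return np.asarray(responses) >= 0

def hamming_distance(code_a, code_b, mask=None):
    """Fraction of disagreeing bits between two binary iris codes,
    optionally restricted to bits marked valid in both templates."""
    diff = np.asarray(code_a, bool) ^ np.asarray(code_b, bool)
    if mask is not None:
        diff = diff[np.asarray(mask, bool)]
    return float(diff.mean())
```

A genuine match typically yields a normalized Hamming distance well below 0.5, while independent codes hover around 0.5, which is what makes a simple threshold usable as the classifier.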
This document presents a new approach for human identification using sclera recognition. It begins with background on sclera and challenges with sclera recognition. It then describes the proposed methodology which includes sclera segmentation, feature extraction using Gabor filtering, and recognition using Bayesian classification. Experimental results show the false accept and reject rates for the approach. It concludes that sclera recognition is promising for human identification and can achieve accuracy comparable to iris recognition in visible light. The proposed approach uses Bayesian classification for recognition, which is more effective than previous matching score methods.
Subtraction radiography and morphometric analysis in periodontics - R Viswa Chandra
“A Simple Method to Assess Bone Fill through Digital Subtraction Technique and Morphometric Analysis”- Guest lecture as a part of Dr NTRUHS Zonal CDE programme at Army College of Dental Sciences, Hyderabad, India on 18/1/2014.
Shift Invariant Ear Feature Extraction using Dual Tree Complex Wavelet Transf... - IDES Editor
Over the last 10 years, various methods have been used for ear recognition. This paper describes the automatic localization of an ear and its segmentation from the side pose of a face image. The authors propose a novel approach to feature extraction using the 2D Dual Tree Complex Wavelet Transform (2D-DT-CWT), which provides six sub-bands in six different orientations, against three orientations in the DWT. Being complex, the DT-CWT exhibits the property of shift invariance. Ear feature vectors are obtained by computing the mean, standard deviation, energy, and entropy of the six DT-CWT sub-bands and the three DWT sub-bands. Canberra distance and Euclidean distance are used for matching. A recognition accuracy above 97% is achieved.
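The per-sub-band statistics (mean, standard deviation, energy, entropy) and the Canberra distance can be sketched as follows; the magnitude-based entropy normalization is one plausible reading of the abstract, not necessarily the authors' exact definition, and the function names are illustrative:

```python
import numpy as np

def subband_features(subbands, eps=1e-12):
    """Mean, standard deviation, energy and Shannon entropy per sub-band,
    concatenated into one feature vector. Entropy is computed over the
    magnitudes normalized into a distribution (an assumption)."""
    feats = []
    for sb in subbands:
        sb = np.abs(np.asarray(sb, float))
        p = sb / (sb.sum() + eps)                 # magnitudes -> distribution
        entropy = -np.sum(p * np.log2(p + eps))
        feats += [sb.mean(), sb.std(), np.sum(sb ** 2), entropy]
    return np.array(feats)

def canberra(u, v, eps=1e-12):
    """Canberra distance between two feature vectors, as used for matching."""
    return float(np.sum(np.abs(u - v) / (np.abs(u) + np.abs(v) + eps)))
```

With six DT-CWT sub-bands plus three DWT sub-bands this yields a compact 36-dimensional vector per ear image, which is why simple distances such as Canberra suffice for matching.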
This document discusses a study that aims to classify glaucomatous images based on wavelet features with high accuracy. It analyzes energy distributions from several wavelet filters to extract texture features from retinal images. These features are ranked and selected before being introduced to classifiers like support vector machine, sequential minimal optimization, random forest, and naive Bayes. The proposed system is expected to achieve over 95% accuracy in glaucoma classification, outperforming existing techniques.
An improved bovine_iris_segmentation_method - Noura Gomaa
The document proposes an improved method for bovine iris segmentation that detects the inner and outer edges of the iris through dynamic contour tracking and least squares ellipse fitting. It then normalizes and enhances the segmented iris region. Experimental results show the algorithm effectively segments the iris region and promotes iris feature extraction and matching, with benefits for meat food safety management.
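Least-squares conic fitting of the kind used for the iris edges can be illustrated with the simpler circle case (the Kåsa linear fit); the paper itself fits ellipses, so this is a reduced sketch of the same least-squares idea, not the proposed method:

```python
import numpy as np

def fit_circle(x, y):
    """Kåsa least-squares circle fit: solve x^2 + y^2 = 2a*x + 2b*y + c
    as a linear system, then recover centre (a, b) and radius."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx ** 2 + cy ** 2)   # c = r^2 - a^2 - b^2
    return cx, cy, r
```

The ellipse case replaces the three-parameter linear system with a five- or six-parameter conic constraint, but the structure (stack edge points, solve in the least-squares sense) is the same.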
A Survey on Retinal Area Detector From Scanning Laser Ophthalmoscope (SLO) Im... - IRJET Journal
This document summarizes a survey on methods for automatically detecting the true retinal area from scanning laser ophthalmoscope (SLO) images, which is an important first step for computer-aided diagnosis of retinal diseases. Distinguishing the retinal area from artifacts in SLO images is challenging. The proposed system uses a machine learning approach involving extracting features from super-pixels of the image, selecting important features, and training a classifier to identify the retinal area versus artifacts. This automated detection of the retinal area could help diagnose retinal disorders more efficiently and at scale compared to manual examination methods.
International Journal of Engineering Research and Development - IJERD Editor
This document summarizes research applying the fuzzy c-means (FCM) technique to iris eye segmentation. FCM is used to segment 25 iris images from the iris region of interest. The accuracy of segmentation is evaluated using misclassification ratio (MCR) calculations at different FCM levels. Results show segmentation improves with increasing FCM levels, with blue irises showing clearer segmentation than other colors. MCR values decrease from 28.25% at FCM level 1 to 6.11% at FCM level 3 for one image, demonstrating increased segmentation accuracy with multiple applications of FCM.
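A plain fuzzy c-means iteration, such as the one applied to the iris region of interest, alternates between weighted centre updates and membership updates. The sketch below is a textbook version (random initialization, fixed iteration count, illustrative name `fuzzy_cmeans`), not the study's implementation:

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
    """Fuzzy c-means on samples X of shape (n, d) with fuzzifier m.
    Returns cluster centers (c, d) and membership matrix U (c, n)."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, X.shape[0]))
    U /= U.sum(axis=0)                        # memberships sum to 1 per sample
    for _ in range(iters):
        W = U ** m
        centers = (W @ X) / W.sum(axis=1, keepdims=True)
        # distances from every sample to every center
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1.0))           # standard FCM membership update
        U /= U.sum(axis=0)
    return centers, U
```

For segmentation, each pixel intensity is a one-dimensional sample and the hard label is taken as the cluster with the largest membership; repeated application at increasing cluster counts corresponds to the "FCM levels" the abstract evaluates.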
A fast specular reflection removal based on pixels properties method - journalBEEI
This document presents a new method for fast and accurate removal of specular reflections from iris images. Specular reflections can reduce the accuracy of iris segmentation algorithms. The proposed method uses pixel properties to locate the pupil region, which is then used to remove specular reflections via flood-fill. It achieves the fastest execution time compared to previous methods, as well as the highest segmentation accuracy and structural similarity index (SSIM) scores. This indicates the proposed method is both fast and accurate for implementation in iris recognition systems.
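The pixel-property idea, locating near-saturated pixels and filling them from valid neighbours, can be sketched as below. The actual method flood-fills from the located pupil region, so this threshold-and-neighbour-fill version (illustrative name `remove_speculars`, assumed threshold) is only an approximation of the same repair step:

```python
import numpy as np

def remove_speculars(img, thresh=240):
    """Mask near-saturated pixels (candidate specular reflections) and fill
    each with the mean of its already-valid 4-neighbours, repeating from
    the mask border inward until the mask is empty."""
    out = np.asarray(img, float).copy()
    mask = out >= thresh
    while mask.any():
        remaining = mask.copy()
        ys, xs = np.nonzero(mask)
        for y, x in zip(ys, xs):
            vals = [out[ny, nx]
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                    if 0 <= ny < out.shape[0] and 0 <= nx < out.shape[1]
                    and not mask[ny, nx]]
            if vals:
                out[y, x] = np.mean(vals)
                remaining[y, x] = False
        if remaining.sum() == mask.sum():   # no progress (fully masked image)
            break
        mask = remaining
    return out
```

Removing the bright spots before segmentation matters because pupil and iris boundary detectors key on intensity gradients, and a saturated blob inside the pupil creates spurious edges.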
Hybrid Domain based Face Recognition using DWT, FFT and Compressed CLBP - CSCJournals
Biometrics measures the characteristics of human body parts and behaviour, and is used to authenticate a person. In this paper, we propose Hybrid Domain based Face Recognition using DWT, FFT and Compressed CLBP. The face images are preprocessed to enhance the sharpness of the images using the Discrete Wavelet Transform (DWT) and a Laplacian filter. The Compound Local Binary Pattern (CLBP) is applied to the sharpened preprocessed face image to compute magnitude and sign components. A histogram is applied to the CLBP components to compress the number of features. The Fast Fourier Transform (FFT) is applied to the preprocessed image to compute magnitudes. The histogram features and FFT magnitude features are fused to generate the final features. The Euclidean Distance (ED) is used to compare the final features of test face images with database face images to compute performance parameters. It is observed that the percentage recognition rate is higher in the case of the proposed algorithm compared to existing algorithms.
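The sign-component histogram and FFT-magnitude fusion can be sketched as follows. Note this reduced version encodes only the basic LBP sign component, whereas CLBP also encodes a magnitude component; all names are illustrative and the normalizations are assumptions:

```python
import numpy as np

def lbp_sign_codes(img):
    """Sign component of a local binary pattern: each interior pixel's
    8 neighbours are thresholded against the centre, giving an 8-bit code."""
    img = np.asarray(img, float)
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(c.shape, dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        n = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= (n >= c).astype(int) << bit
    return codes

def fused_features(img):
    """Fuse a normalized 256-bin histogram of LBP sign codes with
    normalized FFT magnitudes into one feature vector."""
    hist = np.bincount(lbp_sign_codes(img).ravel(), minlength=256)
    hist = hist / max(hist.sum(), 1)
    mags = np.abs(np.fft.fft2(img)).ravel()
    mags = mags / max(mags.max(), 1e-12)
    return np.concatenate([hist, mags])
```

The histogram step is what the abstract calls compression: however large the face image, the LBP part of the feature vector stays at 256 values.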
Tomographic fundus features in Pseudoxanthoma Elasticum - Abdallah Ellabban
This study analyzed optical coherence tomography (OCT) scans of 52 eyes from 27 patients with pseudoxanthoma elasticum (PXE) and compared features between those with choroidal neovascularization (CNV) secondary to angioid streaks (AS) vs CNV secondary to age-related macular degeneration (AMD). Unique lesions were more common in PXE patients, including outer retinal tubulation (ORT) seen in 70.5% of eyes with CNV secondary to AS vs 34.1% of AMD eyes, and Bruch's membrane undulation seen in 70.8% vs 11.4% respectively. ORT appeared as round or ovoid structures beneath the outer plexiform layer.
This document presents a new iris segmentation method for iris recognition systems. The proposed method uses Canny edge detection and Hough transform to locate the iris boundary after finding the pupil boundary using image gray levels. Experiments on the CASIA iris image database of 756 images show the method can accurately detect the iris boundary in 99.2% of images. This is an improvement over other existing segmentation techniques. The key steps of the proposed method are preprocessing, segmentation using Canny edge detection and Hough transform, normalization using the rubber sheet model, feature encoding with Gabor wavelets, and matching with Hamming distance.
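The Hough-transform step, voting for circle centres from Canny edge points, can be sketched for a single candidate radius; a full implementation sweeps a radius range and also differs in accumulator details, so treat `hough_circle` as an illustrative reduction:

```python
import numpy as np

def hough_circle(edge_points, shape, radius, n_angles=90):
    """Accumulate centre votes for circles of a fixed radius passing
    through the given (y, x) edge points; return the best-scoring centre."""
    acc = np.zeros(shape, dtype=int)
    thetas = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    for y, x in edge_points:
        # every point on a circle of this radius around (y, x) is a candidate centre
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)   # duplicate-safe voting
    return np.unravel_index(acc.argmax(), shape)
```

Seeding the search with the pupil location, as the method does, shrinks both the radius range and the centre search area, which is where most of its speed comes from.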
International Journal of Computational Engineering Research (IJCER) - ijceronline
International Journal of Computational Engineering Research (IJCER) is an international online journal published monthly in English. The journal publishes original research work that contributes significantly to furthering scientific knowledge in engineering and technology.
Micro CT settings for caries detection: how to optimize - IJERA Editor
Some important items that can influence micro CT images were reviewed in this study, and different settings were optimized for the assessment of early caries lesions. There is considerable research on bone using micro CT, but not much on dental hard tissues when assessing mineral loss. Different kinds of micro CT devices and technologies are in use today, each requiring unique settings, and this constitutes one of the greatest obstacles to the use of micro CT on dental hard tissues. Achieving the settings for an ideal dental image is therefore a challenge. The purpose of this study was to evaluate different micro CT settings to optimize the assessment of early caries lesions while preserving the integrity of the dental specimen, thus making it possible to reuse it for further studies. Three teeth with early caries lesions were submitted to different micro CT acquisition and reconstruction settings, aiming at a better image. The final images were compared visually through different densities and attenuation coefficients. The best setting for dental tissues was identified with regard to contrast, definition, noise reduction, and the largest difference between the attenuation coefficients of sound enamel and early lesions.
Driving support systems, such as car navigation systems, are becoming common, and they support the driver in several aspects. A non-intrusive method of detecting fatigue and drowsiness based on eye-blink count and eye-directed instruction control helps prevent collisions caused by drowsy driving. Eye detection and tracking under varying conditions of illumination, background, face alignment, and facial expression make the problem complex. A Neural Network based algorithm is proposed in this paper to detect the eyes efficiently. In the proposed algorithm, the Neural Network is first trained to reject non-eye regions based on images with eye features and images with non-eye features, using Gabor filters and Support Vector Machines to reduce the dimensionality and classify efficiently. In the algorithm, the face is first segmented using the L*a*b color space, then the eyes are detected using HSV and the Neural Network approach. The algorithm was tested on nearly 100 images of different persons under different conditions, and the results are satisfactory, with a success rate of 98%. The Neural Network was trained with 50 non-eye images and 50 eye images at different angles using Gabor filters. This paper is part of research work on the "Development of a Non-Intrusive System for Real-Time Monitoring and Prediction of Driver Fatigue and Drowsiness" project sponsored by the Department of Science & Technology, Govt. of India, New Delhi, at Vignan Institute of Technology and Sciences, Vignan Hills, Hyderabad.
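A real-valued Gabor kernel of the kind used to extract oriented eye/non-eye texture features can be generated as below; the parameter values are illustrative defaults, not those used in the project:

```python
import numpy as np

def gabor_kernel(ksize=15, sigma=3.0, theta=0.0, lambd=6.0, psi=0.0):
    """Real-valued Gabor kernel: a cosine carrier of wavelength lambd,
    oriented at angle theta, under an isotropic Gaussian envelope."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    return g * np.cos(2 * np.pi * xr / lambd + psi)
```

A filter bank is built by sweeping theta (and often sigma/lambd); convolving an image patch with each kernel and pooling the responses yields the oriented features that are then reduced and classified, as the abstract describes.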
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
The Indian Dental Academy is the leader in continuing dental education, training dentists in all aspects of dentistry and offering a wide range of certified dental courses in different formats. For more details please visit
www.indiandentalacademy.com
This document discusses the principles of cone beam computed tomography (CBCT). It describes how CBCT uses an x-ray source and detector that rotate around the patient to acquire multiple 2D projections, which are then used to reconstruct 3D volumetric images using computational algorithms. Key aspects of CBCT imaging discussed include the hardware components involved in image acquisition, technical parameters that influence image quality and radiation dose, and different scan modes available. The document emphasizes that clinicians should understand how various acquisition factors impact the diagnostic value and radiation exposure of CBCT exams.
CBCT provides 3D images of the jaws and teeth using low-dose x-rays and computer reconstruction. It allows visualization of bone quality, morphology, and proximity to anatomical structures for applications in dental implant planning, oral surgery, endodontics, orthodontics, and more. CBCT is particularly useful when traditional 2D imaging is limited by anatomical superimposition or inability to assess 3D bone characteristics. Some key uses of CBCT include implant planning, assessment of impacted teeth, maxillofacial trauma, airway imaging, and evaluation of periodontal defects.
IRJET- Survey on Face Detection Methods - IRJET Journal
The document reviews 15 papers on various face detection methods published between 2013 and 2018. It finds that the most popular feature extraction method is skin color segmentation, which achieves detection rates of 88-98%. The Viola-Jones method typically detects face regions as well as other body parts at a rate of 80-90%. Common face detection methods reviewed include skin color segmentation, Viola-Jones, Haar features, 3D mean shift, and Cascaded Head and Shoulder Detection. OpenCV, Python or MATLAB are typically used to implement real-time face detection systems.
This document summarizes various approaches for restoring functional vision at all distances, like a young adult, after cataract or presbyopia surgery. It focuses on accommodative intraocular lenses (AIOLs), which aim to provide accommodation through optical or positional changes. However, past AIOL studies showed limitations in near vision outcomes due to poor methodology and commercial bias. Future AIOLs would need to be independent of the capsular bag, with outcomes proven in large, long-term studies using standardized tests to measure accommodation and distinguish it from pseudo accommodation. While multifocal IOLs provide functional vision at all distances, they require neuroadaptation and always disperse some light between foci, rather than using it all efficiently.
Glaucoma Screening Test By Segmentation of Optical Disc & Cup Segmentation Usi... - IJERA Editor
Glaucoma is one of the most common causes of blindness, and it is becoming even more important considering the ageing society. Because healing of dead retinal nerve fibers is not possible, early detection and prevention are essential. Robust, automated mass-screening will help to extend the symptom-free life of affected patients. We devised a novel, automated, appearance-based glaucoma classification system that does not depend on segmentation-based measurements. Our purely data-driven approach is applicable in large-scale screening examinations. The proposed segmentation methods have been evaluated on a database of 650 images with optic disc and optic cup boundaries manually marked by trained professionals. Our expected experimental results are an average overlapping error of 9.5% and 24.1% in optic disc and optic cup segmentation, respectively.
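The overlapping error reported above is typically defined as one minus the Jaccard overlap between the segmented region and the manually marked reference; a minimal sketch, assuming that definition:

```python
import numpy as np

def overlapping_error(seg, ref):
    """Overlap error E = 1 - |S intersect R| / |S union R| between a
    segmented region and a reference region (both boolean masks)."""
    seg = np.asarray(seg, bool)
    ref = np.asarray(ref, bool)
    union = np.logical_or(seg, ref).sum()
    if union == 0:
        return 0.0                       # both masks empty: perfect agreement
    inter = np.logical_and(seg, ref).sum()
    return 1.0 - inter / union
```

Averaging this per-image error over the 650-image database gives figures directly comparable to the 9.5% (disc) and 24.1% (cup) values quoted.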
This document summarizes the potential applications of cone beam computed tomography (CBCT) in forensic science based on a review of clinical and scientific literature. CBCT can be used for age and sex estimation, frontal sinus analysis for identification, and 3D facial reconstruction. Studies show CBCT allows for more accurate age estimation compared to 2D radiographs. Measurements of bones like the mastoid and mandible from CBCT scans can determine sex with over 80% accuracy. Comparison of frontal sinus patterns through CBCT provides reliable evidence for identification. While CBCT has advantages like portability and accuracy, limitations include artifacts from metals and limited soft tissue contrast. Further research is still needed to improve 3D reconstruction techniques from CBCT data for forensic applications.
IRIS BIOMETRIC RECOGNITION SYSTEM EMPLOYING CANNY OPERATOR - csitconf
Biometrics has become important in security applications. In comparison with many other biometric features, iris recognition has very high recognition accuracy because it depends on the iris, which is located in a place that remains stable throughout human life, and the probability of finding two identical irises is close to zero. The identification system consists of several stages, including the segmentation stage, which is the most serious and critical one. Current segmentation methods still have limitations in localizing the iris due to the assumption of a circular pupil shape. In this research, the Daugman method is used to investigate the segmentation techniques. Eyelid detection is another step included in this study as part of the segmentation stage, to localize the iris accurately and remove unwanted areas that might otherwise be included. The obtained iris region is encoded using Haar wavelets to construct the iris code, which contains the most discriminating features in the iris pattern. Hamming distance is used for comparison of iris templates in the recognition stage. The dataset used for the study is the UBIRIS database. A comparative study of different edge detector operators is performed, and it is observed that the Canny operator is best suited to extract most of the edges for generating the iris code. A recognition rate of 89% and a rejection rate of 95% are achieved.
This document discusses a study that aims to classify glaucomatous images based on wavelet features with high accuracy. It analyzes energy distributions from several wavelet filters to extract texture features from retinal images. These features are ranked and selected before being introduced to classifiers like support vector machine, sequential minimal optimization, random forest, and naive Bayes. The proposed system is expected to achieve over 95% accuracy in glaucoma classification, outperforming existing techniques.
An improved bovine iris segmentation method - Noura Gomaa
The document proposes an improved method for bovine iris segmentation that detects the inner and outer edges of the iris through dynamic contour tracking and least squares ellipse fitting. It then normalizes and enhances the segmented iris region. Experimental results show the algorithm effectively segments the iris region and promotes iris feature extraction and matching, with benefits for meat food safety management.
A Survey on Retinal Area Detector From Scanning Laser Ophthalmoscope (SLO) Im... - IRJET Journal
This document summarizes a survey on methods for automatically detecting the true retinal area from scanning laser ophthalmoscope (SLO) images, which is an important first step for computer-aided diagnosis of retinal diseases. Distinguishing the retinal area from artifacts in SLO images is challenging. The proposed system uses a machine learning approach involving extracting features from super-pixels of the image, selecting important features, and training a classifier to identify the retinal area versus artifacts. This automated detection of the retinal area could help diagnose retinal disorders more efficiently and at scale compared to manual examination methods.
International Journal of Engineering Research and Development - IJERD Editor
This document summarizes research applying the fuzzy c-means (FCM) technique to iris eye segmentation. FCM is used to segment 25 iris images from the iris region of interest. The accuracy of segmentation is evaluated using misclassification ratio (MCR) calculations at different FCM levels. Results show segmentation improves with increasing FCM levels, with blue irises showing clearer segmentation than other colors. MCR values decrease from 28.25% at FCM level 1 to 6.11% at FCM level 3 for one image, demonstrating increased segmentation accuracy with multiple applications of FCM.
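The two FCM update rules behind this kind of segmentation can be shown on scalar gray levels. This is a toy sketch with assumed variable names, not the paper's code; a real segmentation would run the same updates over image pixels:

```python
def fcm_1d(values, centers, m=2.0, iters=25):
    """Minimal 1-D fuzzy c-means (illustrative only).

    Alternates the two classic updates:
      membership  u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
      center      c_i  = sum_k u_ik^m x_k / sum_k u_ik^m
    """
    for _ in range(iters):
        # membership update: how strongly each value belongs to each center
        u = []
        for x in values:
            dists = [abs(x - c) + 1e-12 for c in centers]
            u.append([1.0 / sum((di / dj) ** (2.0 / (m - 1.0)) for dj in dists)
                      for di in dists])
        # center update: fuzzy weighted mean of the values
        centers = [
            sum((u[k][i] ** m) * values[k] for k in range(len(values)))
            / sum(u[k][i] ** m for k in range(len(values)))
            for i in range(len(centers))
        ]
    return centers, u

# Gray levels with two obvious clusters near 10 and 200.
pixels = [8, 10, 12, 198, 200, 202]
centers, u = fcm_1d(pixels, centers=[0.0, 255.0])
```

Each row of `u` sums to one, which is what makes the clustering "fuzzy": a pixel can partially belong to both iris and background before a final hard assignment is made.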
A fast specular reflection removal based on pixels properties method - journalBEEI
This document presents a new method for fast and accurate removal of specular reflections from iris images. Specular reflections can reduce the accuracy of iris segmentation algorithms. The proposed method uses pixel properties to locate the pupil region, which is then used to remove specular reflections via flood-fill. It achieves the fastest execution time compared to previous methods, as well as the highest segmentation accuracy and structural similarity index (SSIM) scores. This indicates the proposed method is both fast and accurate for implementation in iris recognition systems.
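The flood-fill operation mentioned above is the standard region-growing repaint; a minimal BFS version on a toy gray-level grid (illustrative only, with a hypothetical 4x4 image) looks like:

```python
from collections import deque

def flood_fill(image, seed, new_value):
    """BFS flood fill: repaint the 4-connected region containing `seed`.

    This is the generic operation applied to a specular highlight once
    the pupil region has been located; `image` is a toy 2-D list of
    gray levels.
    """
    rows, cols = len(image), len(image[0])
    r0, c0 = seed
    old = image[r0][c0]
    if old == new_value:
        return image
    q = deque([seed])
    image[r0][c0] = new_value
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and image[nr][nc] == old:
                image[nr][nc] = new_value
                q.append((nr, nc))
    return image

# A bright 2x2 "reflection" (255) inside a dark pupil (20).
img = [[20, 20, 20, 20],
       [20, 255, 255, 20],
       [20, 255, 255, 20],
       [20, 20, 20, 20]]
flood_fill(img, (1, 1), 20)   # inpaint the highlight with the pupil gray level
```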
Hybrid Domain based Face Recognition using DWT, FFT and Compressed CLBP - CSCJournals
Biometrics measures the characteristics of human body parts and behaviour, which are used to authenticate a person. In this paper, we propose Hybrid Domain based Face Recognition using DWT, FFT and Compressed CLBP. The face images are preprocessed to enhance their sharpness using the Discrete Wavelet Transform (DWT) and a Laplacian filter. The Compound Local Binary Pattern (CLBP) is applied to the sharpened, preprocessed face image to compute magnitude and sign components. A histogram is applied to the CLBP components to compress the number of features. The Fast Fourier Transform (FFT) is applied to the preprocessed image to compute magnitudes. The histogram features and FFT magnitude features are fused to generate the final feature vector. The Euclidean Distance (ED) is used to compare the final features of test face images with database face images to compute performance parameters. It is observed that the percentage recognition rate is higher for the proposed algorithm than for existing algorithms.
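The sign and magnitude components that CLBP derives from a neighbourhood can be illustrated for a single 3x3 patch. This simplified sketch uses the patch's own mean absolute difference as the magnitude threshold, which may differ from the paper's exact encoding:

```python
def clbp_codes(patch):
    """Sign and magnitude codes for the centre pixel of a 3x3 patch.

    Sign bit      : 1 if neighbour >= centre
    Magnitude bit : 1 if |neighbour - centre| >= mean absolute difference
    (A simplified CLBP-style illustration, not a reference encoding.)
    """
    c = patch[1][1]
    # 8 neighbours, clockwise from top-left
    coords = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    diffs = [patch[r][cc] - c for r, cc in coords]
    mean_abs = sum(abs(d) for d in diffs) / len(diffs)
    sign = magnitude = 0
    for i, d in enumerate(diffs):
        if d >= 0:
            sign |= 1 << i
        if abs(d) >= mean_abs:
            magnitude |= 1 << i
    return sign, magnitude

patch = [[52, 60, 55],
         [49, 50, 70],
         [40, 42, 51]]
s, m_code = clbp_codes(patch)
```

Histograms of these per-pixel codes over the whole image give the compressed feature vector the abstract describes.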
Tomographic fundus features in Pseudoxanthoma Elasticum Abdallah Ellabban
This study analyzed optical coherence tomography (OCT) scans of 52 eyes from 27 patients with pseudoxanthoma elasticum (PXE) and compared features between those with choroidal neovascularization (CNV) secondary to angioid streaks (AS) vs CNV secondary to age-related macular degeneration (AMD). Unique lesions were more common in PXE patients, including outer retinal tubulation (ORT) seen in 70.5% of eyes with CNV secondary to AS vs 34.1% of AMD eyes, and Bruch's membrane undulation seen in 70.8% vs 11.4% respectively. ORT appeared as round or ovoid structures beneath the outer plexiform layer.
This document presents a new iris segmentation method for iris recognition systems. The proposed method uses Canny edge detection and Hough transform to locate the iris boundary after finding the pupil boundary using image gray levels. Experiments on the CASIA iris image database of 756 images show the method can accurately detect the iris boundary in 99.2% of images. This is an improvement over other existing segmentation techniques. The key steps of the proposed method are preprocessing, segmentation using Canny edge detection and Hough transform, normalization using the rubber sheet model, feature encoding with Gabor wavelets, and matching with Hamming distance.
International Journal of Computational Engineering Research (IJCER) - ijceronline
International Journal of Computational Engineering Research (IJCER) is an international online journal in English, published monthly. The journal publishes original research work that contributes significantly to furthering scientific knowledge in engineering and technology.
Micro CT settings for caries detection: how to optimize - IJERA Editor
Some important items that can influence micro CT images were reviewed in this study. Different settings were
optimized for the assessment of early caries lesions. There is considerable research on bone using micro CT, but not
much on dental hard tissues when assessing mineral loss. Different kinds of micro CT devices and
technologies are in use today, each requiring unique settings, and this constitutes one of the greatest obstacles
to the use of micro CT on dental hard tissues. Achieving the settings for an ideal dental image is therefore a
challenge. The purpose of this study was to evaluate different micro CT settings to optimize the assessment of
early caries lesions while preserving the integrity of the dental specimen, thus making it possible to reuse it for further
studies. Three teeth with early caries lesions were submitted to different micro CT settings and different
reconstruction settings, aiming at a better image. The final images were compared visually through different densities
and attenuation coefficients. The best setting for tooth tissues was achieved regarding contrast, definition, noise
reduction and the largest difference between the attenuation coefficients of sound enamel and early lesions.
Driving support systems, such as car navigation systems, are becoming common, and they
support the driver in several aspects. A non-intrusive method of detecting fatigue and drowsiness
based on eye-blink count and eye-directed instruction control helps the driver avoid
collisions caused by drowsy driving. Eye detection and tracking under varying conditions such as
illumination, background, face alignment and facial expression make the problem
complex. A neural-network-based algorithm is proposed in this paper to detect the eyes efficiently.
In the proposed algorithm, the neural network is first trained to reject non-eye regions,
using images with eye features and images with non-eye features, with a Gabor filter and Support Vector Machines to reduce the dimensionality and classify efficiently. In the algorithm, the face is first segmented using the L*a*b color space, then the eyes are detected using HSV and a neural network approach. The algorithm was tested on nearly 100 images of different persons under different conditions, and the results are satisfactory, with a success rate of 98%. The neural network was trained with 50 non-eye images and 50 eye images at different angles using a Gabor filter. This paper is part of the research work on the “Development of Non-Intrusive system for realtime Monitoring and Prediction of Driver Fatigue and drowsiness” project sponsored by the Department of Science & Technology, Govt. of India, New Delhi, at Vignan Institute of Technology and Sciences, Vignan Hills, Hyderabad.
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
The Indian Dental Academy is the leader in continuing dental education, training dentists in all aspects of dentistry and
offering a wide range of certified dental courses in different formats. For more details please visit
www.indiandentalacademy.com
This document discusses the principles of cone beam computed tomography (CBCT). It describes how CBCT uses an x-ray source and detector that rotate around the patient to acquire multiple 2D projections, which are then used to reconstruct 3D volumetric images using computational algorithms. Key aspects of CBCT imaging discussed include the hardware components involved in image acquisition, technical parameters that influence image quality and radiation dose, and different scan modes available. The document emphasizes that clinicians should understand how various acquisition factors impact the diagnostic value and radiation exposure of CBCT exams.
CBCT provides 3D images of the jaws and teeth using low-dose x-rays and computer reconstruction. It allows visualization of bone quality, morphology, and proximity to anatomical structures for applications in dental implant planning, oral surgery, endodontics, orthodontics, and more. CBCT is particularly useful when traditional 2D imaging is limited by anatomical superimposition or inability to assess 3D bone characteristics. Some key uses of CBCT include implant planning, assessment of impacted teeth, maxillofacial trauma, airway imaging, and evaluation of periodontal defects.
IRJET - Survey on Face Detection Methods - IRJET Journal
The document reviews 15 papers on various face detection methods published between 2013 and 2018. It finds that the most popular feature extraction method is skin color segmentation, which achieves detection rates of 88-98%. The Viola-Jones method typically detects face regions as well as other body parts at a rate of 80-90%. Common face detection methods reviewed include skin color segmentation, Viola-Jones, Haar features, 3D mean shift, and Cascaded Head and Shoulder Detection. OpenCV, Python or MATLAB are typically used to implement real-time face detection systems.
This document summarizes various approaches for restoring functional vision at all distances, like a young adult, after cataract or presbyopia surgery. It focuses on accommodative intraocular lenses (AIOLs), which aim to provide accommodation through optical or positional changes. However, past AIOL studies showed limitations in near vision outcomes due to poor methodology and commercial bias. Future AIOLs would need to be independent of the capsular bag, with outcomes proven in large, long-term studies using standardized tests to measure accommodation and distinguish it from pseudo accommodation. While multifocal IOLs provide functional vision at all distances, they require neuroadaptation and always disperse some light between foci, rather than using it all efficiently.
Glaucoma Screening Test By Segmentation of Optical Disc & Cup Segmentation Usi... - IJERA Editor
Glaucoma is one of the most common causes of blindness, and it is becoming even more important considering
the ageing society. Because healing of dead retinal nerve fibers is not possible, early detection and prevention are
essential. Robust, automated mass screening will help to extend the symptom-free life of affected patients. We
devised a novel, automated, appearance-based glaucoma classification system that does not depend on
segmentation-based measurements. Our purely data-driven approach is applicable in large-scale screening
examinations. The proposed segmentation methods have been evaluated on a database of 650 images with optic
disc and optic cup boundaries manually marked by trained professionals. The expected experimental results
are an average overlapping error of 9.5% and 24.1% in optic disc and optic cup segmentation, respectively.
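The overlapping error quoted above is commonly defined as one minus the Jaccard overlap between the automatic and manual regions; a sketch under that assumed definition, with toy pixel sets:

```python
def overlapping_error(region_a, region_b):
    """Overlapping error between a segmented region and a manual marking:
    1 - |A ∩ B| / |A ∪ B|, i.e. one minus the Jaccard index.

    Regions are sets of pixel coordinates; this is one common definition
    of the metric, shown here as an assumption.
    """
    inter = len(region_a & region_b)
    union = len(region_a | region_b)
    return 1.0 - inter / union if union else 0.0

auto = {(x, y) for x in range(10) for y in range(10)}        # 10x10 auto mask
manual = {(x, y) for x in range(1, 10) for y in range(10)}   # shifted one column
err = overlapping_error(auto, manual)   # 1 - 90/100 = 0.1
```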
This document summarizes the potential applications of cone beam computed tomography (CBCT) in forensic science based on a review of clinical and scientific literature. CBCT can be used for age and sex estimation, frontal sinus analysis for identification, and 3D facial reconstruction. Studies show CBCT allows for more accurate age estimation compared to 2D radiographs. Measurements of bones like the mastoid and mandible from CBCT scans can determine sex with over 80% accuracy. Comparison of frontal sinus patterns through CBCT provides reliable evidence for identification. While CBCT has advantages like portability and accuracy, limitations include artifacts from metals and limited soft tissue contrast. Further research is still needed to improve 3D reconstruction techniques from CBCT data for forensic applications.
EFFECTIVENESS OF FEATURE DETECTION OPERATORS ON THE PERFORMANCE OF IRIS BIOME... - IJNSA Journal
Iris recognition is a highly efficient biometric identification system with great possibilities for the future in the security systems area. Its robustness and unobtrusiveness, as opposed to most of the currently deployed systems, make it a good candidate to replace most of the security systems around. By making use of the distinctiveness of iris patterns, iris recognition systems obtain a unique mapping for each person; identification of that person is then possible by applying an appropriate matching algorithm. In this paper, Daugman's Rubber Sheet model is employed for iris normalization and unwrapping, a descriptive statistical analysis of different feature detection operators is performed, the extracted features are encoded using Haar wavelets, and Hamming distance is used as the matching algorithm for classification. The system was tested on the UBIRIS database. The Canny edge detection algorithm is found to be the best at extracting most of the iris texture. The success rate of feature detection using Canny is 81%, the False Accept Rate is 9% and the False Reject Rate is 10%.
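The False Accept and False Reject Rates reported in such studies come from applying a distance threshold to genuine and impostor comparison scores; a generic evaluation sketch with invented toy scores (not the paper's data):

```python
def far_frr(genuine_scores, impostor_scores, threshold):
    """False Accept Rate and False Reject Rate at a distance threshold.

    Scores are Hamming distances: a comparison is accepted when the
    distance falls below the threshold.
    """
    fr = sum(1 for s in genuine_scores if s >= threshold)   # genuine rejected
    fa = sum(1 for s in impostor_scores if s < threshold)   # impostor accepted
    return fa / len(impostor_scores), fr / len(genuine_scores)

genuine = [0.10, 0.15, 0.22, 0.40]    # same-eye comparison distances
impostor = [0.30, 0.45, 0.48, 0.50]   # different-eye comparison distances
far, frr = far_frr(genuine, impostor, threshold=0.32)
# far = 1/4 (the 0.30 impostor), frr = 1/4 (the 0.40 genuine)
```

Sweeping the threshold trades FAR against FRR; the operating point where they are equal is the usual equal-error-rate summary.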
A robust iris recognition method on adverse conditions - ijcseit
As a stable biometric trait, the iris has recently attracted great attention among researchers. However,
research is still needed to provide appropriate solutions that ensure the resistance of the system against error
factors. The present study applies a mask to the image so that unexpected factors affecting
the localization of the iris can be removed, making pupil localization faster and more robust. Then, to locate the
exact position of the iris, a simple stage of boundary displacement based on the Canny edge detector is
applied, and by searching for the left and right iris edge points, the outer radius of the iris is detected. In
the process of extracting the iris features, distinctive iris texture features are obtained by
using a 2-D discrete stationary wavelet transform (DSWT2). Using the DSWT2 tool and the symlet-4 wavelet,
distinctive features are extracted. To reduce the computational cost, the features obtained from the
application of the wavelet are investigated and a feature selection procedure, using similarity
criteria, is implemented. Finally, iris matching is performed using a semi-correlation
criterion. The accuracy of the proposed method for localization on CASIA-v1 and CASIA-v3 is 99.73%,
98.24% and 97.04%, respectively. The accuracy of the proposed feature extraction method on the CASIA-v3 iris
image database is 97.82%, which confirms the efficiency of the proposed method.
International Journal of Computer Science, Engineering and Information Techno... - ijcseit
International Journal of Computer Science, Engineering and Information Technology (IJCSEIT) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of Computer Science, Engineering and Information Technology. The Journal looks for significant contributions to all major fields of the Computer Science and Information Technology in theoretical and practical aspects. The aim of the Journal is to provide a platform to the researchers and practitioners from both academia as well as industry to meet and share cutting-edge development in the field.
International Journal of Engineering and Science Invention (IJESI) - inventionjournals
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews within the whole field of engineering, science and technology, covering new teaching methods, assessment, validation and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected for publication through double peer review to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
Effective segmentation of sclera, iris and pupil in noisy eye images - TELKOMNIKA JOURNAL
In today's security-sensitive environment, iris recognition is the technique receiving the most
attention among the various biometric technologies for personal authentication. One of the key steps in an iris recognition system is
accurate segmentation of the iris from its surrounding noise, including the pupil and sclera of a captured
eye image. In our proposed method, the input image is initially preprocessed using bilateral filtering.
After preprocessing, contour-based features such as brightness, color and texture
are extracted. Then entropy is measured based on the extracted contour-based features to effectively
distinguish the data in the images. Finally, a convolutional neural network (CNN) is used for
effective segmentation of the sclera, iris and pupil, based on the entropy measure. The
results are analyzed to demonstrate the better performance of the proposed segmentation method compared with
existing methods.
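The entropy measure used above to separate textured from flat regions is, in its simplest form, the Shannon entropy of a patch's gray-level histogram (a generic sketch, not the paper's exact feature):

```python
import math
from collections import Counter

def patch_entropy(values):
    """Shannon entropy (in bits) of the gray-level histogram of a patch.

    Flat regions have low entropy; richly textured regions such as the
    iris have high entropy, which is what makes this a useful cue for
    separating sclera, iris and pupil.
    """
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

flat = [128] * 16            # uniform patch: carries no information
textured = list(range(16))   # 16 distinct gray levels
low, high = patch_entropy(flat), patch_entropy(textured)
# low = 0.0 bits, high = 4.0 bits: entropy cleanly separates the two
```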
Feature Extraction Techniques for Ear Biometrics: A Survey - Shashank Dhariwal
This document surveys feature extraction techniques for ear biometrics. It discusses several approaches: Iannarelli's landmark-based measurements; Voronoi diagrams; principal component analysis (PCA); compression networks; force field transforms; iterative closest point (ICP) algorithms; local surface patches (LSP); and acoustic approaches. Each technique is explained and its strengths/weaknesses are noted. The document concludes that while ear biometrics techniques have matured, issues like occlusion, symmetry, and individuality need more research, and that ear biometrics will likely be used more in multi-modal recognition systems.
IJERD (www.ijerd.com) International Journal of Engineering Research and Devel... - IJERD Editor
The document describes a proposed algorithm called Fusion of Hybrid Domain features for Iris Recognition (FHDIR).
The algorithm pre-processes iris images by resizing, binarization, cropping and splitting them. It then applies Fast Fourier Transform (FFT) to the left half of the iris image to extract features and applies Principal Component Analysis (PCA) to the right half to extract features. These feature sets are then fused using arithmetic addition to generate a final feature vector. Test iris features are compared to the database using Euclidean Distance for identification.
The proposed algorithm is evaluated on the CASIA iris database and is found to have better performance than existing algorithms in terms of false rejection rate, false acceptance rate, and true
Enhance iris segmentation method for person recognition based on image proces... - TELKOMNIKA JOURNAL
Overcoming the limitation of traditional iris recognition systems in processing iris images captured in unconstrained environments would be a breakthrough. Automatic iris recognition has to face unpredictable variations of iris images in real-world applications. The most challenging problems are related to the severe noise effects inherent to these unconstrained iris recognition systems: varying illumination, obstruction by the upper or lower eyelids, eyelash overlap with the iris region, specular highlights on the pupil caused by a spot of light during image capture, and decentralization of the iris image caused by the person's gaze. Iris segmentation is one of the most important processes in iris recognition, and due to the different types of noise in the eye image, the segmentation result may be erroneous. To solve this problem, this paper develops an efficient iris segmentation algorithm using image processing techniques. First, the outer boundary segmentation problem of the iris is solved; then the pupil boundary is detected. Tests are done on the Chinese Academy of Sciences' Institute of Automation (CASIA) database. Experimental results indicate that the proposed algorithm is efficient and effective in terms of iris segmentation and reduced processing time. The accuracy results for the two datasets (CASIA-V1 and V4) are 100% and 99.16%, respectively.
To Improve the Recognition Rate with High Security with Ear/Iris Biometric Re... - IOSR Journals
This document presents a new technique for improving the recognition rate and security of ear/iris biometric recognition systems using feature extraction and matching. It proposes extracting wavelet-based features from ear images and iris images to more accurately model anatomical structures. The technique uses stochastic clustering to derive a model of ear or iris parts from training images. Recognition performance is evaluated on test sets from the XM2VTS database. Results show the new hybrid method that incorporates wavelet analysis and robust matching achieves promising recognition rates for occluded ear and iris images, confirming the validity of the approach.
Efficient Small Template Iris Recognition System Using Wavelet Transform - CSCJournals
This document presents an efficient iris recognition system using wavelet transform that achieves a small template size of 364 bits. The system first segments the iris from eye images and applies normalization to account for variations in scale. It then proposes a method to remove the eyelid and eyelash regions by masking parts of the iris. Next, image enhancement increases contrast before feature extraction using wavelet transform. The document evaluates different combinations of wavelet coefficients and quantization levels to determine the most effective approach. Experimental results demonstrate false accept and reject rates of 0% and 1% respectively.
Iris recognition is a method of biometric identification.
Biometric identification provides automatic recognition of an
individual based on unique physiological or behavioral
characteristics. Iris recognition identifies a person by
analyzing the iris pattern. This survey paper covers the
different iris recognition techniques and methods.
This document discusses various soft computing techniques for iris recognition, specifically focusing on two neural network approaches: Competitive neural network Learning Vector Quantization (LVQ) and Adaptive Resonance Associative Map (ARAM). It provides an overview of iris recognition as a biometric method, summarizes preprocessing steps like localization, segmentation, and normalization of iris images. It also describes feature extraction and matching steps. Finally, it defines artificial neural networks and discusses how LVQ and ARAM can be used for pattern matching in iris recognition applications.
The paper presents a system that recognizes people through their iris print by using the Linear
Discriminant Analysis (LDA) method. LDA classifies a set of items into groups based on the
features that describe each item: it finds a projection that increases the differences between
iris images from different classes while keeping the differences between images of the same class small. The
prototype proves highly efficient at classifying the patterns; the algorithms were applied and tested on the MMU database,
and give good results, with a ratio reaching up to 74%.
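For two classes, the LDA idea described here reduces to Fisher's discriminant: project onto w = Sw^-1 (m_a - m_b). Below is a self-contained 2-D sketch with invented feature vectors; a real system would first reduce each iris image to such a vector:

```python
def fisher_direction(class_a, class_b):
    """Two-class Fisher LDA on 2-D feature vectors.

    Returns w = Sw^-1 (m_a - m_b): the direction that pulls the class
    means apart relative to the within-class scatter Sw.
    """
    def mean(pts):
        n = float(len(pts))
        return [sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n]

    def scatter(pts, m):
        s = [[0.0, 0.0], [0.0, 0.0]]
        for p in pts:
            d = [p[0] - m[0], p[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    s[i][j] += d[i] * d[j]
        return s

    ma, mb = mean(class_a), mean(class_b)
    sa, sb = scatter(class_a, ma), scatter(class_b, mb)
    sw = [[sa[i][j] + sb[i][j] for j in range(2)] for i in range(2)]
    sw[0][0] += 1e-9                      # tiny ridge keeps Sw invertible
    sw[1][1] += 1e-9
    det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
    inv = [[ sw[1][1] / det, -sw[0][1] / det],
           [-sw[1][0] / det,  sw[0][0] / det]]
    dm = [ma[0] - mb[0], ma[1] - mb[1]]
    return [inv[0][0] * dm[0] + inv[0][1] * dm[1],
            inv[1][0] * dm[0] + inv[1][1] * dm[1]]

# Two toy classes separated along the first feature axis.
a = [(1.0, 0.0), (1.2, 0.1), (0.9, -0.1)]
b = [(5.0, 0.0), (5.1, 0.2), (4.9, -0.2)]
w = fisher_direction(a, b)
proj_a = [p[0] * w[0] + p[1] * w[1] for p in a]
proj_b = [p[0] * w[0] + p[1] * w[1] for p in b]
```

After projection, the two classes occupy disjoint intervals on the line, so a single threshold separates them: exactly the property the abstract describes.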
Iris recognition based on 2D Gabor filter - IJECEIAES
Iris recognition is a biometric technology based on physiological features of the human body. The objective of this research is to recognize and identify an iris among many irises stored in a visual database. This study employed a left- and right-iris biometric framework for inclusion decision processing by combining image processing and an artificial bee colony. The proposed approach was evaluated on a visual database of 280 colored iris pictures, which was divided into 28 clusters. Images were preprocessed and texture features were extracted based on Gabor filters to capture both local and global details within an iris. The technique begins by comparing the attributes of the online-obtained iris picture with those of the visual database, and it generates either a reject or an approve message. The results of the intended work reflect the output's accuracy and integrity. This is due to the careful selection of attributes, besides the deployment of an artificial bee colony and data clustering, which decreased complexity and eventually increased the identification rate to 100%. We demonstrate that the proposed method achieves state-of-the-art performance and that our recommended procedures outperform existing iris recognition systems.
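The 2D Gabor filter named in the title is a Gaussian envelope multiplied by a sinusoidal carrier; its real part can be generated directly (the parameter values here are illustrative, not the paper's):

```python
import math

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a 2-D Gabor filter: Gaussian envelope x cosine carrier.

    `theta` is the filter orientation in radians; convolving an iris
    image with a bank of such kernels at several orientations and
    wavelengths yields the local/global texture features.
    """
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # rotate coordinates into the filter's orientation
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            envelope = math.exp(-(xr * xr + yr * yr) / (2 * sigma * sigma))
            carrier = math.cos(2 * math.pi * xr / wavelength)
            row.append(envelope * carrier)
        kernel.append(row)
    return kernel

k = gabor_kernel(size=9, wavelength=4.0, theta=0.0, sigma=2.0)
# The kernel peaks at its centre and is left/right symmetric.
```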
A Robust Approach in Iris Recognition for Person Authentication - IOSR Journals
The document describes a robust approach for iris recognition used for person authentication. It proposes using eight main stages: 1) scanning the iris image, 2) converting it to grayscale, 3) applying median filters to reduce noise, 4) detecting the pupil center, 5) using canny edge detection to identify iris and pupil edges, 6) determining the iris and pupil radii, 7) localizing the iris, and 8) unrolling the iris texture. It then uses k-means clustering to compare images and match them to authenticate individuals in a database. The approach aims to improve on previous iris recognition methods by more accurately detecting non-circular iris and pupil shapes.
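The k-means comparison step in this pipeline can be illustrated on scalar gray levels; a plain 1-D sketch with toy pupil/sclera intensities (not the paper's code):

```python
def kmeans_1d(values, centers, iters=10):
    """Plain 1-D k-means on scalar values (illustrative sketch)."""
    for _ in range(iters):
        # assign every value to its nearest centre
        groups = [[] for _ in centers]
        for v in values:
            idx = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            groups[idx].append(v)
        # recompute each centre as the mean of its group
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

# Dark-pupil vs bright-sclera gray levels.
vals = [12, 15, 18, 230, 235, 240]
centers, groups = kmeans_1d(vals, centers=[0, 255])
```

The same alternation of assignment and mean-update scales directly to the multi-dimensional feature vectors compared during matching.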
The document describes a novel method for localizing the pupil in eye gaze tracking systems. The proposed method fuses existing pupil localization techniques, including Hough transform, gray projection, and coarse positioning. This fusion approach aims to improve localization accuracy while lowering false detections. The method first preprocesses the eye image using filtering, edge detection, and thresholding/binarization. It then applies the fused techniques of Hough transform, gray projection, and coarse positioning to the preprocessed image to accurately detect the pupil boundary and center coordinates in a robust manner. The results are expected to provide better pupil localization compared to existing individual techniques.
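The Hough-transform component of the fusion can be sketched as plain accumulator voting for a known radius; the synthetic edge points and the 5-degree angular sampling below are assumptions for illustration:

```python
from collections import Counter
import math

def hough_circle_centers(edge_points, radius, top=1):
    """Circular Hough transform for a known radius.

    Each edge pixel votes for every candidate centre lying `radius`
    away from it; the most-voted accumulator cell wins.  Real detectors
    also scan over a range of radii.
    """
    votes = Counter()
    for (x, y) in edge_points:
        for deg in range(0, 360, 5):          # coarse angular sampling
            a = x - round(radius * math.cos(math.radians(deg)))
            b = y - round(radius * math.sin(math.radians(deg)))
            votes[(a, b)] += 1
    return [c for c, _ in votes.most_common(top)]

# Synthetic pupil boundary: a circle of radius 10 centred at (30, 30).
edges = [(30 + round(10 * math.cos(math.radians(d))),
          30 + round(10 * math.sin(math.radians(d))))
         for d in range(0, 360, 10)]
(center,) = hough_circle_centers(edges, radius=10)
```

Because every boundary pixel votes for the true centre, the accumulator peak is robust to a few missing or spurious edge points, which is why Hough voting pairs well with the coarse-positioning and gray-projection cues described above.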
A Literature Review on Iris Segmentation Techniques for Iris Recognition Systems - IOSR Journals
This document reviews various techniques for iris segmentation in iris recognition systems. It discusses 8 techniques: (1) Integrodifferential operator, (2) Hough transform, (3) Masek method, (4) Fuzzy clustering algorithm, (5) Pulling and Pushing method, (6) Eight-neighbor connection based clustering, (7) Segmentation approach based on Fourier spectral density, and (8) Circular Gabor Filter. Each technique achieves some level of segmentation accuracy but also has disadvantages like high computational time, low accuracy, or poor performance on noisy images. The document concludes that a unified framework approach provides the highest overall segmentation accuracy for robustly segmenting iris images.
This document reviews various techniques for iris segmentation in iris recognition systems. It discusses integrodifferential operator and Hough transform approaches, as well as the Masek, fuzzy clustering, and pulling and pushing methods. Each approach has advantages and disadvantages. The Masek method achieves circular iris and pupil localization but has lower accuracy and speed. Fuzzy clustering provides better segmentation for non-cooperative iris recognition but requires an extensive search. The pulling and pushing method aims to develop a more accurate and rapid iris segmentation algorithm.
Similar to AN EXPLORATION OF PERIOCULAR REGION WITH REDUCED REGION FOR AUTHENTICATION : REALM OF OCCULT (20)
ANALYSIS OF LAND SURFACE DEFORMATION GRADIENT BY DINSAR cscpconf
The progressive development of Synthetic Aperture Radar (SAR) systems diversify the exploitation of the generated images by these systems in different applications of geoscience. Detection and monitoring surface deformations, procreated by various phenomena had benefited from this evolution and had been realized by interferometry (InSAR) and differential interferometry (DInSAR) techniques. Nevertheless, spatial and temporal decorrelations of the interferometric couples used, limit strongly the precision of analysis results by these techniques. In this context, we propose, in this work, a methodological approach of surface deformation detection and analysis by differential interferograms to show the limits of this technique according to noise quality and level. The detectability model is generated from the deformation signatures, by simulating a linear fault merged to the images couples of ERS1 / ERS2 sensors acquired in a region of the Algerian south.
4D AUTOMATIC LIP-READING FOR SPEAKER'S FACE IDENTIFCATIONcscpconf
A novel based a trajectory-guided, concatenating approach for synthesizing high-quality image real sample renders video is proposed . The lips reading automated is seeking for modeled the closest real image sample sequence preserve in the library under the data video to the HMM predicted trajectory. The object trajectory is modeled obtained by projecting the face patterns into an KDA feature space is estimated. The approach for speaker's face identification by using synthesise the identity surface of a subject face from a small sample of patterns which sparsely each the view sphere. An KDA algorithm use to the Lip-reading image is discrimination, after that work consisted of in the low dimensional for the fundamental lip features vector is reduced by using the 2D-DCT.The mouth of the set area dimensionality is ordered by a normally reduction base on the PCA to obtain the Eigen lips approach, their proposed approach by[33]. The subjective performance results of the cost function under the automatic lips reading modeled , which wasn’t illustrate the superior performance of the
method.
MOVING FROM WATERFALL TO AGILE PROCESS IN SOFTWARE ENGINEERING CAPSTONE PROJE...cscpconf
Universities offer software engineering capstone course to simulate a real world-working environment in which students can work in a team for a fixed period to deliver a quality product. The objective of the paper is to report on our experience in moving from Waterfall process to Agile process in conducting the software engineering capstone project. We present the capstone course designs for both Waterfall driven and Agile driven methodologies that highlight the structure, deliverables and assessment plans.To evaluate the improvement, we conducted a survey for two different sections taught by two different instructors to evaluate students’ experience in moving from traditional Waterfall model to Agile like process. Twentyeight students filled the survey. The survey consisted of eight multiple-choice questions and an open-ended question to collect feedback from students. The survey results show that students were able to attain hands one experience, which simulate a real world-working environment. The results also show that the Agile approach helped students to have overall better design and avoid mistakes they have made in the initial design completed in of the first phase of the capstone project. In addition, they were able to decide on their team capabilities, training needs and thus learn the required technologies earlier which is reflected on the final product quality
PROMOTING STUDENT ENGAGEMENT USING SOCIAL MEDIA TECHNOLOGIEScscpconf
This document discusses using social media technologies to promote student engagement in a software project management course. It describes the course and objectives of enhancing communication. It discusses using Facebook for 4 years, then switching to WhatsApp based on student feedback, and finally introducing Slack to enable personalized team communication. Surveys found students engaged and satisfied with all three tools, though less familiar with Slack. The conclusion is that social media promotes engagement but familiarity with the tool also impacts satisfaction.
A SURVEY ON QUESTION ANSWERING SYSTEMS: THE ADVANCES OF FUZZY LOGICcscpconf
In real world computing environment with using a computer to answer questions has been a human dream since the beginning of the digital era, Question-answering systems are referred to as intelligent systems, that can be used to provide responses for the questions being asked by the user based on certain facts or rules stored in the knowledge base it can generate answers of questions asked in natural , and the first main idea of fuzzy logic was to working on the problem of computer understanding of natural language, so this survey paper provides an overview on what Question-Answering is and its system architecture and the possible relationship and
different with fuzzy logic, as well as the previous related research with respect to approaches that were followed. At the end, the survey provides an analytical discussion of the proposed QA models, along or combined with fuzzy logic and their main contributions and limitations.
DYNAMIC PHONE WARPING – A METHOD TO MEASURE THE DISTANCE BETWEEN PRONUNCIATIONS cscpconf
Human beings generate different speech waveforms while speaking the same word at different times. Also, different human beings have different accents and generate significantly varying speech waveforms for the same word. There is a need to measure the distances between various words which facilitate preparation of pronunciation dictionaries. A new algorithm called Dynamic Phone Warping (DPW) is presented in this paper. It uses dynamic programming technique for global alignment and shortest distance measurements. The DPW algorithm can be used to enhance the pronunciation dictionaries of the well-known languages like English or to build pronunciation dictionaries to the less known sparse languages. The precision measurement experiments show 88.9% accuracy.
INTELLIGENT ELECTRONIC ASSESSMENT FOR SUBJECTIVE EXAMS cscpconf
In education, the use of electronic (E) examination systems is not a novel idea, as Eexamination systems have been used to conduct objective assessments for the last few years. This research deals with randomly designed E-examinations and proposes an E-assessment system that can be used for subjective questions. This system assesses answers to subjective questions by finding a matching ratio for the keywords in instructor and student answers. The matching ratio is achieved based on semantic and document similarity. The assessment system is composed of four modules: preprocessing, keyword expansion, matching, and grading. A survey and case study were used in the research design to validate the proposed system. The examination assessment system will help instructors to save time, costs, and resources, while increasing efficiency and improving the productivity of exam setting and assessments.
TWO DISCRETE BINARY VERSIONS OF AFRICAN BUFFALO OPTIMIZATION METAHEURISTICcscpconf
African Buffalo Optimization (ABO) is one of the most recent swarms intelligence based metaheuristics. ABO algorithm is inspired by the buffalo’s behavior and lifestyle. Unfortunately, the standard ABO algorithm is proposed only for continuous optimization problems. In this paper, the authors propose two discrete binary ABO algorithms to deal with binary optimization problems. In the first version (called SBABO) they use the sigmoid function and probability model to generate binary solutions. In the second version (called LBABO) they use some logical operator to operate the binary solutions. Computational results on two knapsack problems (KP and MKP) instances show the effectiveness of the proposed algorithm and their ability to achieve good and promising solutions.
DETECTION OF ALGORITHMICALLY GENERATED MALICIOUS DOMAINcscpconf
In recent years, many malware writers have relied on Dynamic Domain Name Services (DDNS) to maintain their Command and Control (C&C) network infrastructure to ensure a persistence presence on a compromised host. Amongst the various DDNS techniques, Domain Generation Algorithm (DGA) is often perceived as the most difficult to detect using traditional methods. This paper presents an approach for detecting DGA using frequency analysis of the character distribution and the weighted scores of the domain names. The approach’s feasibility is demonstrated using a range of legitimate domains and a number of malicious algorithmicallygenerated domain names. Findings from this study show that domain names made up of English characters “a-z” achieving a weighted score of < 45 are often associated with DGA. When a weighted score of < 45 is applied to the Alexa one million list of domain names, only 15% of the domain names were treated as non-human generated.
GLOBAL MUSIC ASSET ASSURANCE DIGITAL CURRENCY: A DRM SOLUTION FOR STREAMING C...cscpconf
The document proposes a blockchain-based digital currency and streaming platform called GoMAA to address issues of piracy in the online music streaming industry. Key points:
- GoMAA would use a digital token on the iMediaStreams blockchain to enable secure dissemination and tracking of streamed content. Content owners could control access and track consumption of released content.
- Original media files would be converted to a Secure Portable Streaming (SPS) format, embedding watermarks and smart contract data to indicate ownership and enable validation on the blockchain.
- A browser plugin would provide wallets for fans to collect GoMAA tokens as rewards for consuming content, incentivizing participation and addressing royalty discrepancies by recording
IMPORTANCE OF VERB SUFFIX MAPPING IN DISCOURSE TRANSLATION SYSTEMcscpconf
This document discusses the importance of verb suffix mapping in discourse translation from English to Telugu. It explains that after anaphora resolution, the verbs must be changed to agree with the gender, number, and person features of the subject or anaphoric pronoun. Verbs in Telugu inflect based on these features, while verbs in English only inflect based on number and person. Several examples are provided that demonstrate how the Telugu verb changes based on whether the subject or pronoun is masculine, feminine, neuter, singular or plural. Proper verb suffix mapping is essential for generating natural and coherent translations while preserving the context and meaning of the original discourse.
EXACT SOLUTIONS OF A FAMILY OF HIGHER-DIMENSIONAL SPACE-TIME FRACTIONAL KDV-T...cscpconf
In this paper, based on the definition of conformable fractional derivative, the functional
variable method (FVM) is proposed to seek the exact traveling wave solutions of two higherdimensional
space-time fractional KdV-type equations in mathematical physics, namely the
(3+1)-dimensional space–time fractional Zakharov-Kuznetsov (ZK) equation and the (2+1)-
dimensional space–time fractional Generalized Zakharov-Kuznetsov-Benjamin-Bona-Mahony
(GZK-BBM) equation. Some new solutions are procured and depicted. These solutions, which
contain kink-shaped, singular kink, bell-shaped soliton, singular soliton and periodic wave
solutions, have many potential applications in mathematical physics and engineering. The
simplicity and reliability of the proposed method is verified.
AUTOMATED PENETRATION TESTING: AN OVERVIEWcscpconf
The document discusses automated penetration testing and provides an overview. It compares manual and automated penetration testing, noting that automated testing allows for faster, more standardized and repeatable tests but has limitations in developing new exploits. It also reviews some current automated penetration testing methodologies and tools, including those using HTTP/TCP/IP attacks, linking common scanning tools, a Python-based tool targeting databases, and one using POMDPs for multi-step penetration test planning under uncertainty. The document concludes that automated testing is more efficient than manual for known vulnerabilities but cannot replace manual testing for discovering new exploits.
CLASSIFICATION OF ALZHEIMER USING fMRI DATA AND BRAIN NETWORKcscpconf
Since the mid of 1990s, functional connectivity study using fMRI (fcMRI) has drawn increasing
attention of neuroscientists and computer scientists, since it opens a new window to explore
functional network of human brain with relatively high resolution. BOLD technique provides
almost accurate state of brain. Past researches prove that neuro diseases damage the brain
network interaction, protein- protein interaction and gene-gene interaction. A number of
neurological research paper also analyse the relationship among damaged part. By
computational method especially machine learning technique we can show such classifications.
In this paper we used OASIS fMRI dataset affected with Alzheimer’s disease and normal
patient’s dataset. After proper processing the fMRI data we use the processed data to form
classifier models using SVM (Support Vector Machine), KNN (K- nearest neighbour) & Naïve
Bayes. We also compare the accuracy of our proposed method with existing methods. In future,
we will other combinations of methods for better accuracy.
VALIDATION METHOD OF FUZZY ASSOCIATION RULES BASED ON FUZZY FORMAL CONCEPT AN...cscpconf
The document proposes a new validation method for fuzzy association rules based on three steps: (1) applying the EFAR-PN algorithm to extract a generic base of non-redundant fuzzy association rules using fuzzy formal concept analysis, (2) categorizing the extracted rules into groups, and (3) evaluating the relevance of the rules using structural equation modeling, specifically partial least squares. The method aims to address issues with existing fuzzy association rule extraction algorithms such as large numbers of extracted rules, redundancy, and difficulties with manual validation.
PROBABILITY BASED CLUSTER EXPANSION OVERSAMPLING TECHNIQUE FOR IMBALANCED DATAcscpconf
In many applications of data mining, class imbalance is noticed when examples in one class are
overrepresented. Traditional classifiers result in poor accuracy of the minority class due to the
class imbalance. Further, the presence of within class imbalance where classes are composed of
multiple sub-concepts with different number of examples also affect the performance of
classifier. In this paper, we propose an oversampling technique that handles between class and
within class imbalance simultaneously and also takes into consideration the generalization
ability in data space. The proposed method is based on two steps- performing Model Based
Clustering with respect to classes to identify the sub-concepts; and then computing the
separating hyperplane based on equal posterior probability between the classes. The proposed
method is tested on 10 publicly available data sets and the result shows that the proposed
method is statistically superior to other existing oversampling methods.
CHARACTER AND IMAGE RECOGNITION FOR DATA CATALOGING IN ECOLOGICAL RESEARCHcscpconf
Data collection is an essential, but manpower intensive procedure in ecological research. An
algorithm was developed by the author which incorporated two important computer vision
techniques to automate data cataloging for butterfly measurements. Optical Character
Recognition is used for character recognition and Contour Detection is used for imageprocessing.
Proper pre-processing is first done on the images to improve accuracy. Although
there are limitations to Tesseract’s detection of certain fonts, overall, it can successfully identify
words of basic fonts. Contour detection is an advanced technique that can be utilized to
measure an image. Shapes and mathematical calculations are crucial in determining the precise
location of the points on which to draw the body and forewing lines of the butterfly. Overall,
92% accuracy were achieved by the program for the set of butterflies measured.
SOCIAL MEDIA ANALYTICS FOR SENTIMENT ANALYSIS AND EVENT DETECTION IN SMART CI...cscpconf
Smart cities utilize Internet of Things (IoT) devices and sensors to enhance the quality of the city
services including energy, transportation, health, and much more. They generate massive
volumes of structured and unstructured data on a daily basis. Also, social networks, such as
Twitter, Facebook, and Google+, are becoming a new source of real-time information in smart
cities. Social network users are acting as social sensors. These datasets so large and complex
are difficult to manage with conventional data management tools and methods. To become
valuable, this massive amount of data, known as 'big data,' needs to be processed and
comprehended to hold the promise of supporting a broad range of urban and smart cities
functions, including among others transportation, water, and energy consumption, pollution
surveillance, and smart city governance. In this work, we investigate how social media analytics
help to analyze smart city data collected from various social media sources, such as Twitter and
Facebook, to detect various events taking place in a smart city and identify the importance of
events and concerns of citizens regarding some events. A case scenario analyses the opinions of
users concerning the traffic in three largest cities in the UAE
SOCIAL NETWORK HATE SPEECH DETECTION FOR AMHARIC LANGUAGEcscpconf
The anonymity of social networks makes it attractive for hate speech to mask their criminal
activities online posing a challenge to the world and in particular Ethiopia. With this everincreasing
volume of social media data, hate speech identification becomes a challenge in
aggravating conflict between citizens of nations. The high rate of production, has become
difficult to collect, store and analyze such big data using traditional detection methods. This
paper proposed the application of apache spark in hate speech detection to reduce the
challenges. Authors developed an apache spark based model to classify Amharic Facebook
posts and comments into hate and not hate. Authors employed Random forest and Naïve Bayes
for learning and Word2Vec and TF-IDF for feature selection. Tested by 10-fold crossvalidation,
the model based on word2vec embedding performed best with 79.83%accuracy. The
proposed method achieve a promising result with unique feature of spark for big data.
GENERAL REGRESSION NEURAL NETWORK BASED POS TAGGING FOR NEPALI TEXTcscpconf
This article presents Part of Speech tagging for Nepali text using General Regression Neural
Network (GRNN). The corpus is divided into two parts viz. training and testing. The network is
trained and validated on both training and testing data. It is observed that 96.13% words are
correctly being tagged on training set whereas 74.38% words are tagged correctly on testing
data set using GRNN. The result is compared with the traditional Viterbi algorithm based on
Hidden Markov Model. Viterbi algorithm yields 97.2% and 40% classification accuracies on
training and testing data sets respectively. GRNN based POS Tagger is more consistent than the
traditional Viterbi decoding technique.
Leveraging Generative AI to Drive Nonprofit InnovationTechSoup
In this webinar, participants learned how to utilize Generative AI to streamline operations and elevate member engagement. Amazon Web Service experts provided a customer specific use cases and dived into low/no-code tools that are quick and easy to deploy through Amazon Web Service (AWS.)
A Visual Guide to 1 Samuel | A Tale of Two HeartsSteve Thomason
These slides walk through the story of 1 Samuel. Samuel is the last judge of Israel. The people reject God and want a king. Saul is anointed as the first king, but he is not a good king. David, the shepherd boy is anointed and Saul is envious of him. David shows honor while Saul continues to self destruct.
This document provides an overview of wound healing, its functions, stages, mechanisms, factors affecting it, and complications.
A wound is a break in the integrity of the skin or tissues, which may be associated with disruption of the structure and function.
Healing is the body’s response to injury in an attempt to restore normal structure and functions.
Healing can occur in two ways: Regeneration and Repair
There are 4 phases of wound healing: hemostasis, inflammation, proliferation, and remodeling. This document also describes the mechanism of wound healing. Factors that affect healing include infection, uncontrolled diabetes, poor nutrition, age, anemia, the presence of foreign bodies, etc.
Complications of wound healing like infection, hyperpigmentation of scar, contractures, and keloid formation.
Walmart Business+ and Spark Good for Nonprofits.pdfTechSoup
"Learn about all the ways Walmart supports nonprofit organizations.
You will hear from Liz Willett, the Head of Nonprofits, and hear about what Walmart is doing to help nonprofits, including Walmart Business and Spark Good. Walmart Business+ is a new offer for nonprofits that offers discounts and also streamlines nonprofits order and expense tracking, saving time and money.
The webinar may also give some examples on how nonprofits can best leverage Walmart Business+.
The event will cover the following::
Walmart Business + (https://business.walmart.com/plus) is a new shopping experience for nonprofits, schools, and local business customers that connects an exclusive online shopping experience to stores. Benefits include free delivery and shipping, a 'Spend Analytics” feature, special discounts, deals and tax-exempt shopping.
Special TechSoup offer for a free 180 days membership, and up to $150 in discounts on eligible orders.
Spark Good (walmart.com/sparkgood) is a charitable platform that enables nonprofits to receive donations directly from customers and associates.
Answers about how you can do more with Walmart!"
Chapter wise All Notes of First year Basic Civil Engineering.pptxDenish Jangid
Chapter wise All Notes of First year Basic Civil Engineering
Syllabus
Chapter-1
Introduction to objective, scope and outcome the subject
Chapter 2
Introduction: Scope and Specialization of Civil Engineering, Role of civil Engineer in Society, Impact of infrastructural development on economy of country.
Chapter 3
Surveying: Object Principles & Types of Surveying; Site Plans, Plans & Maps; Scales & Unit of different Measurements.
Linear Measurements: Instruments used. Linear Measurement by Tape, Ranging out Survey Lines and overcoming Obstructions; Measurements on sloping ground; Tape corrections, conventional symbols. Angular Measurements: Instruments used; Introduction to Compass Surveying, Bearings and Longitude & Latitude of a Line, Introduction to total station.
Levelling: Instrument used Object of levelling, Methods of levelling in brief, and Contour maps.
Chapter 4
Buildings: Selection of site for Buildings, Layout of Building Plan, Types of buildings, Plinth area, carpet area, floor space index, Introduction to building byelaws, concept of sun light & ventilation. Components of Buildings & their functions, Basic concept of R.C.C., Introduction to types of foundation
Chapter 5
Transportation: Introduction to Transportation Engineering; Traffic and Road Safety: Types and Characteristics of Various Modes of Transportation; Various Road Traffic Signs, Causes of Accidents and Road Safety Measures.
Chapter 6
Environmental Engineering: Environmental Pollution, Environmental Acts and Regulations, Functional Concepts of Ecology, Basics of Species, Biodiversity, Ecosystem, Hydrological Cycle; Chemical Cycles: Carbon, Nitrogen & Phosphorus; Energy Flow in Ecosystems.
Water Pollution: Water Quality standards, Introduction to Treatment & Disposal of Waste Water. Reuse and Saving of Water, Rain Water Harvesting. Solid Waste Management: Classification of Solid Waste, Collection, Transportation and Disposal of Solid. Recycling of Solid Waste: Energy Recovery, Sanitary Landfill, On-Site Sanitation. Air & Noise Pollution: Primary and Secondary air pollutants, Harmful effects of Air Pollution, Control of Air Pollution. . Noise Pollution Harmful Effects of noise pollution, control of noise pollution, Global warming & Climate Change, Ozone depletion, Greenhouse effect
Text Books:
1. Palancharmy, Basic Civil Engineering, McGraw Hill publishers.
2. Satheesh Gopi, Basic Civil Engineering, Pearson Publishers.
3. Ketki Rangwala Dalal, Essentials of Civil Engineering, Charotar Publishing House.
4. BCP, Surveying volume 1
The chapter Lifelines of National Economy in Class 10 Geography focuses on the various modes of transportation and communication that play a vital role in the economic development of a country. These lifelines are crucial for the movement of goods, services, and people, thereby connecting different regions and promoting economic activities.
Level 3 NCEA - NZ: A Nation In the Making 1872 - 1900 SML.pptHenry Hollis
The History of NZ 1870-1900.
Making of a Nation.
From the NZ Wars to Liberals,
Richard Seddon, George Grey,
Social Laboratory, New Zealand,
Confiscations, Kotahitanga, Kingitanga, Parliament, Suffrage, Repudiation, Economic Change, Agriculture, Gold Mining, Timber, Flax, Sheep, Dairying,
Beyond Degrees - Empowering the Workforce in the Context of Skills-First.pptxEduSkills OECD
Iván Bornacelly, Policy Analyst at the OECD Centre for Skills, OECD, presents at the webinar 'Tackling job market gaps with a skills-first approach' on 12 June 2024
242 Computer Science & Information Technology (CS & IT)
1.1 Periocular Region
The word periocular is a combination of peri (meaning vicinity) and ocular (meaning related to the eye). The periocular region refers to the skin and anatomical features, including birthmarks, eyelids and eyelashes, contained within a specified region surrounding the eye. The texture of the periocular region contains a significant amount of discriminating information. Acquisition of the periocular region is reliable over a wide range of distances, and the region achieves a good trade-off between iris and face [3].
Periocular features are classified into level-one and level-two features. Level-one features include the upper/lower eye folds, upper/lower eyelids, wrinkles and moles, as shown in Fig. 1. Level-two features are more detailed and include skin texture, pores, hair follicles, skin tags and other dermatological features [4].
Fig. 1. Significant features present in the periocular region
1.2 Challenging Concepts of Periocular Region
The periocular region presents certain challenges in seeking distinguishing features that are invariant to transformations in scale, orientation, occlusion, distance and time. Analysis of the discriminative power of the extracted features is affected by the physical and emotional state of the user. The periocular region undergoes deformation in texture and shape due to changes in facial expression, aging, blinking or variations in pose. Noise introduced during acquisition of periocular features includes the presence of artificial elements, for instance spectacles, and the intrusion of hair and eyelashes. The noise due to these elements is termed occlusion. The information content of the periocular region is largely degraded by harsh illumination and other unconstrained conditions.
2. STATE-OF-THE-ART
Different regions of the periocular area have been exhaustively examined in the literature using various mathematical tools. To eliminate the effect of iris texture, the coordinates of the pupil are used to place a circular mask at the centre of the pupil. In some studies, an elliptical mask of neutral colour is placed in the middle of the periocular image to eliminate the effect of texture in the iris and the surrounding sclera [1, 2, 9]. Numerous studies have used characteristics of the eyebrows in the periocular region along with texture information [2, 10, 11, 12, 13]. Jameson Merkow, Brendan Jou and Marios Savvides achieved classification rates of 83% using LBP features in an ROI defined by four bounds [8]. Unsang Park, Arun Ross and Anil K. Jain, using GO, SIFT and LBP, report accuracy rates of up to 81% without eyebrows and 80% with the inclusion of eyebrows. Experiments by Philip E. Miller, Rawls, Pundlik and Woodard exhibited recognition rates of 85% using an elliptically masked sclera in the periocular region without eyebrows; the fusion of iris and periocular biometrics in non-ideal imagery is also explored in their studies [14]. Gender classification using the periocular region achieves 95% accuracy in the studies of Jamie, Miller, Pundlik and Woodard. Genetic-based feature extraction by Adams et al. claims up to 86% accuracy with reduced features [15]. Samarth Bharadwaj, Bhatt, Vatsa and Richa Singh experiment with periocular regions including the eyebrows using GIST and CLBP. Robust LBP feature sets for periocular biometric identification are investigated by Juefei-Xu et al. using Walsh masks, cosine transforms, LBP and Gabor filters [16]. Juefei-Xu et al. presented a framework for age-invariant face recognition using the periocular region and attained a 100% rank-1 identification rate and a 98% verification rate [24]. Hollingsworth, Darnell, Miller, Woodard, Bowyer and Flynn compared human and machine verification results on visible-light and near-infrared images; the experiments obtained similar results in both cases, with 88% accuracy for visible-light images and 79% for NIR images [26]. Juefei-Xu and Savvides proposed an acquisition and recognition system based on periocular biometrics using a COTS PTZ camera to tackle the difficulties posed by highly unconstrained environments; periocular matching across all facial manners achieved a 60.7% verification rate at a 0.1% false accept rate [27].
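Many of the surveyed results are built on LBP texture features. For orientation only, here is a minimal sketch of the basic 3×3 LBP operator; it is an illustrative outline, not the implementation used in any of the cited studies:

```python
import numpy as np

def lbp_image(gray):
    """Basic 3x3 LBP: threshold the 8 neighbours of each interior pixel
    against the centre pixel and pack the results into an 8-bit code."""
    g = gray.astype(int)
    c = g[1:-1, 1:-1]                       # centre pixels (interior only)
    out = np.zeros_like(c)
    # neighbour offsets, clockwise from top-left, one bit each
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        out |= (nb >= c).astype(int) << bit
    return out
```

A histogram of the resulting codes over a region is the usual LBP texture descriptor compared between templates.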
3. PROPOSED SYSTEM
A periocular authentication system with a reduced region of interest is proposed, which relies on features extracted from the LCPR. Chan-Vese active contour segmentation is performed to extract the ROI. The triangulation method of partitioning these contours ensures that the natural edges of the eye are embedded into the entropy of the identifiers. The study also considers the directions of the largest-variance components: Principal Component Analysis (PCA) of the LCPR offers appreciable authentication results.
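The PCA step can be sketched as follows on vectorized LCPR templates; this is an illustrative outline under standard PCA conventions, not the authors' code:

```python
import numpy as np

def pca_features(X, k):
    """Project row-vector samples X (n_samples, n_features) onto the
    top-k directions of largest variance (principal components)."""
    mu = X.mean(axis=0)
    Xc = X - mu                              # centre the data
    cov = Xc.T @ Xc / (len(X) - 1)           # sample covariance matrix
    vals, vecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
    W = vecs[:, ::-1][:, :k]                 # top-k principal directions
    return Xc @ W, mu, W
```

Enrolment and probe templates are then compared by distance in the projected space.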
3.1 Pre-Processing
Images obtained from the UBIRIS.v2 database are resized to 300 × 400. The RGB images are converted to grayscale. The grayscale function transforms a 24-bit, three-channel colour image into an 8-bit, single-channel grayscale image by forming a weighted sum of the red, green and blue components. The texture variations of the skin in the periocular region are embedded in the variations of intensity values. Histogram equalization is performed on the resulting grayscale images to transform the gray levels so that the histogram of the resulting image is approximately constant. Equation 1 gives the transformation for histogram equalization.
s_k = T(r_k) = \sum_{j=0}^{k} p_r(r_j) = \sum_{j=0}^{k} \frac{n_j}{n}, \qquad k = 0, 1, 2, \ldots, L-1    (1)
where T is the transformation applied to the input intensity levels r_k, p_r(r_j) is the probability of occurrence of intensity level r_j, n is the total number of pixels in the image and n_j is the number of pixels with intensity level r_j.
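Equation (1) maps each gray level through the cumulative distribution of intensities; a compact NumPy sketch of this mapping (illustrative only, rescaled back to the 0..L-1 range) is:

```python
import numpy as np

def equalize_histogram(gray, levels=256):
    """Histogram equalization per Eq. (1): s_k = T(r_k) = sum_{j<=k} n_j / n."""
    hist = np.bincount(gray.ravel(), minlength=levels)   # n_j for each level
    cdf = np.cumsum(hist) / gray.size                    # T(r_k) in [0, 1]
    lut = np.round(cdf * (levels - 1)).astype(np.uint8)  # rescale to 0..L-1
    return lut[gray]                                     # apply as a lookup table
```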
3.2 Region of Interest
The size of periocular region to be cropped from images is an important consideration. Some
researchers use tight periocular areas in contrast to larger regions around eye by few others.
Considering threats posed by artificial manipulation of texture information in ROI discussed in
section 3, the strength of LCPR is examined. LCPR achieves good verification results with fewer
features. This region is defined directly below eye edge and reaches up to start of cheek bone.
The rich texture assures comparable results to that using entire periocular region. Intrusion of iris,
244 Computer Science & Information Technology (CS & IT)
sclera and eyelashes is eliminated entirely. The width of the ROI is determined using the end points of the eye. These anchor points are detected using corner detection techniques.
3.3 Active Contour Segmentation
Localization of the iris is achieved using automatic segmentation software. The active contour segmentation technique developed by Chan and Vese is used [5]. The method does not require solving PDEs and is therefore fast, and it handles topological changes automatically [22]. This geometric active contour model begins with a contour in the image plane defining an initial segmentation. The algorithm then evolves the contour via a level set method in such a way that it stops on the boundaries of the foreground region, as shown in Fig. 2. In the level set formulation, the "mean curvature flow"-like evolving active contour stops on the desired boundary [17]. The level set function is defined as the value of the gray-level image at each pixel minus a threshold. The asset of the level set method is its ability to handle topological changes in the edge contour, which helps embed the eye shape, or contour information, into the periocular feature set.
Fig. 2. (a) Input image (b) Segmentation using active contour methods (c) LCPR ROI
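As a rough illustration of the piecewise-constant model behind the Chan-Vese functional, the sketch below alternates between updating the two region means and reassigning pixels; it omits the curvature/length regularization term of the full method [5], so it is a simplification rather than the algorithm used in the paper:

```python
import numpy as np

def two_phase_segment(img, max_iter=100):
    """Simplified two-phase Chan-Vese-style segmentation (length term omitted).

    Alternately updates the foreground/background means c1, c2 and reassigns
    each pixel to the region whose mean it is closer to, until the contour
    (the mask boundary) stops moving.
    """
    img = np.asarray(img, dtype=float)
    mask = img > img.mean()              # initial segmentation
    for _ in range(max_iter):
        if mask.all() or not mask.any():
            break                        # degenerate case: one region is empty
        c1 = img[mask].mean()            # mean intensity inside the contour
        c2 = img[~mask].mean()           # mean intensity outside the contour
        new_mask = (img - c1) ** 2 < (img - c2) ** 2
        if np.array_equal(new_mask, mask):
            break                        # contour has stopped on the boundary
        mask = new_mask
    return mask
```

On an image with two well-separated intensity populations, the iteration converges in a few steps and the mask boundary settles on the region edge, mimicking how the level set evolution stops on the foreground boundary.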
3.4 Delaunay Triangulation
Triangular segmentations of the image plane were proposed by Fisher [6]. The segmented periocular region is abstracted in terms of a Delaunay triangulation, as shown in Fig. 3. A Delaunay triangulation is a unique construction in which no vertex of any triangle lies within the circumcircle of any other Delaunay triangle [7, 23]. Satisfactory results have been obtained earlier using this triangulation technique of partitioning in face recognition algorithms [18]. The experiment uses image templates formed by picking the triangular mesh vertices as graph nodes for periocular region authentication. Triangulation ensures the extraction of geometric features of the eye shape by preserving the topological features.
Fig. 3. (a) Input image (b) Delaunay triangulation of the corresponding periocular region
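The Delaunay property can be seen on a toy point set; the coordinates below are hypothetical anchor points, not values from the paper. SciPy's `scipy.spatial.Delaunay` computes the triangulation:

```python
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical anchor points: two eye corners, a point on the upper
# contour, and a point inside the region.
points = np.array([[0.0, 0.0],
                   [8.0, 0.0],
                   [4.0, 3.0],
                   [4.0, 1.0]])

tri = Delaunay(points)
# Each row of tri.simplices lists the indices of one triangle's vertices.
# By the Delaunay property, no input point falls strictly inside the
# circumcircle of any triangle.
print(len(tri.simplices))  # the interior point splits the hull into 3 triangles
```

Because the triangulation is unique for points in general position, the mesh over the detected anchor points gives a stable, repeatable partition of the region.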
3.5 Verification of periocular patterns using PCA
Principal Component Analysis (PCA) is used for identifying patterns in data. PCA is an information-preserving statistical method useful for dimensionality reduction and feature extraction [19].
PCA projects the data along the directions in which it varies most. These directions are the eigenvectors of the covariance matrix corresponding to the largest eigenvalues. The directions of the eigenvectors provide useful information by extracting the lines that characterize the data; these lines pass through the middle of the data, like lines of best fit. The eigenvector with the highest eigenvalue is the principal component of the data set [21].
3.5.1 Computation of Eigen Periocular Images
Periocular images in the training set are represented as vectors. The average of the M training images is calculated using equation 2.
ψ = (1/M) Σ_{n=1}^{M} Γ_n    (2)
Equation 3 determines the difference of each training image from the mean image.

Φ_n = Γ_n − ψ    (3)
Eigenvectors and eigenvalues are computed for the covariance matrix given by equation 4.

C = (1/M) Σ_{n=1}^{M} Φ_n Φ_n^T    (4)

The eigenvalue components are arranged in order of significance, from highest to lowest. Ordering the eigenvectors by descending eigenvalue creates an ordered orthogonal basis whose first eigenvector points in the direction of the largest variance of the data. Therefore, the first set of Eigen images stores the class-specific information [16]. The eigenvectors u_k are chosen to maximize the value given by equation 5.
λ_k = (1/M) Σ_{n=1}^{M} (u_k^T Φ_n)^2    (5)
subject to the orthonormality constraint u_l^T u_k = δ_lk, which is 1 if l = k and 0 otherwise. The vectors u_k are referred to as "Eigen periocular images". The Eigen periocular images obtained for a subset of the experimental data are shown in Fig. 4.
Fig. 4. (a) Set of original images (b) Corresponding Eigen periocular images
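The computation in Equations 2-5 can be sketched as follows; the snippet uses the small M × M matrix trick common in eigenface-style implementations, and the function name and array shapes are illustrative:

```python
import numpy as np

def eigen_periocular(images):
    """Eigen periocular images via PCA (Equations 2-5).

    images: array of shape (M, H, W) holding the M training images.
    Returns the mean image psi and the eigen images u_k, sorted by
    descending eigenvalue.
    """
    M, H, W = images.shape
    X = images.reshape(M, -1).astype(float)   # one image vector per row
    psi = X.mean(axis=0)                      # Equation 2: mean image
    Phi = X - psi                             # Equation 3: mean-subtracted images
    # Eigen-decompose the small M x M matrix (1/M) Phi Phi^T instead of the
    # huge (H*W) x (H*W) covariance matrix of Equation 4; the two share
    # their nonzero eigenvalues.
    small = Phi @ Phi.T / M
    vals, vecs = np.linalg.eigh(small)
    order = np.argsort(vals)[::-1]            # highest significance first
    u = Phi.T @ vecs[:, order]                # map back to image space
    u /= np.maximum(np.linalg.norm(u, axis=0), 1e-12)  # enforce u_k^T u_k = 1
    return psi.reshape(H, W), u.T.reshape(M, H, W)
```

Only the first M − 1 eigen images carry variance, since mean subtraction removes one degree of freedom; a test image is matched by projecting its difference from ψ onto the leading u_k.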
3.5.2 Verification Using Minimum Distance Classifier
A minimum distance classifier using Euclidean distance is used to classify a test periocular image. It minimizes the distance between the image data and each class in the multi-feature space, with distance serving as an index of similarity. Let x be the feature vector for the unknown input, and let m_1, m_2, ..., m_c be the class templates. The error in matching x against m_k is given by ||x − m_k||. The minimum distance classifier computes this distance for k = 1 to c and chooses the class for which it is minimum.
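A minimal sketch of this classifier; the template values are illustrative, not taken from the experiments:

```python
import numpy as np

def min_distance_class(x, templates):
    """Return the index k of the template m_k minimizing ||x - m_k||."""
    dists = np.linalg.norm(np.asarray(templates) - x, axis=1)  # Euclidean distances
    return int(np.argmin(dists))

# Three hypothetical class templates in a 2-D feature space.
m = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
print(min_distance_class(np.array([9.0, 1.0]), m))  # closest to m_2 -> prints 1
```

In the proposed system, x would be the PCA projection of a test LCPR image and each m_k the stored projection of a training subject.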
4. RESULTS
The UBIRIS.v2 database is intended for performance evaluation in the design of automatic authentication systems for civil and commercial applications. Its major purpose is to constitute a new tool to evaluate the feasibility of recognition under far-from-ideal imaging conditions [20, 25]. 261 subjects were imaged on-the-move and at-a-distance in the visible wavelength using a Canon EOS 5D. Among the 11102 images, 54.4% of the subjects were male and 45.6% female. The images exhibit several challenges in scale, occlusion and illumination; occlusions include flapped eyelids, eyelashes, hair and spectacles. Imaging was conducted under variable lighting conditions to emulate an unconstrained environment.
A subset of periocular images from the UBIRIS.v2 database that includes variations in illumination and pose is used for experimentation. Images with blinking are not considered, and subjects with spectacles are omitted. The images are first segmented and then triangulated to employ the natural edges of the eye, along with the periocular texture, for authentication. The number of training samples considered was M = 10. The characteristic features of the periocular images are extracted into Eigen periocular images using PCA.
Performance of the system is evaluated using standard biometric measures. The system achieves an acceptance rate of 90%. A False Rejection Rate (Type I error) of 5% and a False Acceptance Rate (Type II error) of 5% are observed, giving a total error rate of 10%. The results obtained are comparable to those of the other periocular verification systems studied, while the experiment uses a tighter periocular region than the larger ROI of a classical periocular authentication system.
5. CONCLUSIONS
The discriminative power of the LCPR in a periocular biometric system is investigated. The characteristics of the region are extracted using active contour segmentation and Delaunay triangulation in a way that embeds the eye contour into the entropy of the periocular feature set. The proposed technique uses a reduced region of interest and achieves a False Rejection Rate of 5% and a False Acceptance Rate of 5% on the UBIRIS.v2 database using PCA.
ACKNOWLEDGEMENTS
This research uses the UBIRIS.v2 database facilitated by the SOCIA Lab (Soft Computing and Image Analysis Group), Department of Computer Science, University of Beira Interior, 6201-001 Covilhã, Portugal.
REFERENCES
[1] Unsang Park, Jillela, Ross, Anil. K. Jain, “Periocular Biometrics in the Visible Spectrum” in IEEE
Transactions on Information Forensics and Security, Vol. 6, No. 1, March 2011
[2] U. Park, A. Ross and A. K. Jain, "Periocular biometrics in the visible spectrum: A feasibility study", in Proc. Biometrics: Theory, Applications and Systems (BTAS), pp. 153-158, 2009.
[3] D. Woodard, S. Pundlik, P. Miller, Jamie R. Lyle, “Appearance-based periocular features in the
context of face and non-ideal iris recognition”, SIViP (2011) 5:443–455, 2011.
[4] P. Miller, A. Rawls, S. Pundlik and D. Woodard, "Personal identification using periocular skin texture", in Proc. ACM 25th Symposium on Applied Computing (SAC 2010), pp. 1496-1500, 2010.
[5] T. Chan and L. Vese, "Active contours without edges", IEEE Transactions on Image Processing, 10(2), pp. 266-277, 2001.
[6] Y. Fisher, Ed. (1995), Fractal Image Compression – Theory and Application, New York: Springer-
Verlag.
[7] A. Bowyer (1981), “Computing Dirichlet tessellations”, The Computer Journal, Vol. 24, No.2, 162-
172
[8] Jameson Merkow, Brendan Jou, Marios Savvides, “An Exploration of Gender Identification using
only the periocular region” in IEEE Transactions, 2010.
[9] Karen Hollingsworth, Kevin W. Bowyer, and Patrick J. Flynn, “Identifying useful features for
recognition in Near Infrared Periocular Images”, in IEEE Transactions, 2010
[10] Jamie. R. Lyle, P. Miller, S. Pundlik, D. Woodard, “Soft Biometric Classification using Periocular
Region Features”, in IEEE Transactions, 2010.
[11] D. Woodard, S. Pundlik, Jamie. R. Lyle, P. Miller, “Periocular Region Appearance Cues for
Biometric Identification”, in IEEE Transactions, 2010.
[12] Samarth Bharadwaj, Himanshu. S. Bhatt, Mayank Vatsa and Richa Singh, “Periocular Biometrics:
When Iris Recognition Fails”, in IEEE Transactions, 2010.
[13] Juefei Xu, Miriam Cha, Joseph L. Heyman, Shreyas Venugopalan, Ramzi Abiantun and Marios
Savvides, “Robust local Binary Pattern Feature sets for Periocular Biometric Identification” in IEEE
Transactions, 2010.
[14] D.L. Woodard, S. Pundlik, P. Miller, R. Jillela and A. Ross, “On the fusion of periocular and iris
biometrics in non-ideal imagery”, in Proc. Int. Conf. on Pattern Recognition, 2010
[15] J. Adams, D.L.Woodard, G.Dozier, P.Miller, K.Bryant and G.Glenn, “Genetic based Type II feature
extraction for periocular Biometric Recognition: Less is More ” in Proc. Int. Conf. on Pattern
Recognition,2010.
[16] Juefei Xu, Miriam Cha, Joseph L. Heyman, Shreyas Venugopalan, Ramzi Abiantun and Marios
Savvides, “Robust local Binary Pattern Feature sets for Periocular Biometric Identification” in IEEE
Transactions, 2010
[17] T. Chan and L. Vese, “Active contour Model without edges”, Lecture Notes in Computer Science,
1999, Volume 1682/1999, 141-151, DOI: 10.1007/3-540-48236-9_13
[18] John Y Chiang, R C Wang, Yun-Lung Chang, “The Application of Triangulation to Face
Recognition”.
[19] Wenyi Zhao, Arvindh, Rama, Daniel, John, “Discriminant Analysis of Principal components for face
recognition”.
[20] Hugo Proenca, Silvio Filipe, Ricardo Santos, João Oliveira and Luís A. Alexandre; The UBIRIS.v2:
A Database of Visible Wavelength Iris Images Captured On-The-Move and At-A Distance, IEEE
Transactions on Pattern Analysis and Machine Intelligence, 2009.
[21] Koray, Volkan, “PCA for gender estimation: Which Eigenvectors contribute?”, ICPR 2002
[22] Yongsheng Pan, Douglas, Seddik, “Efficient Implementation of the Chan Vese Models without
solving PDEs”, Multimedia Signal Processing, 2006 IEEE, pages 350-354
[23] G. Bebis and Deaconu, "Fingerprint identification using Delaunay triangulation", in Proc. IEEE Conf. on Information Intelligence and Systems, 1999.
[24] Juefei Xu, Luu, Savvides, Bui, Suen, “Investigating age invariant face recognition based on periocular
biometrics”, IEEE conference publication, Biometrics (IJCB), 2011
[25] Padole, Proenca, “Periocular recognition: Analysis of performance degradation factors”, IEEE
conference publication, Biometrics (ICB), 2012 5th IAPR International conference on Biometrics
(ICB).
[26] Hollingsworth, Darnell, Miller, Woodard, Bowyer, Flynn, “Human and Machine performance on
Periocular Biometrics under Near-Infrared light and visible light”, IEEE transactions on information
forensics and security, 2012
[27] Juefei Xu, Savvides, “Unconstrained periocular biometric acquisition and recognition using COTS
PTZ camera for uncooperative and non-cooperative subjects”, Applications of Computer Vision
(WACV), IEEE 2012