This document summarizes a research paper on angle-oriented face recognition using the discrete cosine transform (DCT). It proposes an algorithm that first normalizes input faces for size and angle to match the database, then extracts local features using DCT and normalization techniques. The DCT is used because it closely approximates the optimal Karhunen-Loeve transform while remaining computationally efficient; similarity matching is done with Euclidean distance or cosine similarity measures. The basic algorithm involves face normalization, DCT feature extraction, and recognition by comparing features against the database. Experimental results showed the proposed approach gave more reliable detection than threshold-based methods.
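As a hedged illustration of the pipeline the summary describes (a sketch, not the paper's actual code), the feature-extraction and matching steps can be approximated by keeping a low-frequency block of 2-D DCT coefficients and comparing with Euclidean distance; the transform is built here from the orthonormal DCT-II basis:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (n x n)."""
    k = np.arange(n)[:, None]                     # frequency index
    x = np.arange(n)[None, :]                     # sample index
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    C[0, :] /= np.sqrt(2.0)                       # rescale DC row for orthonormality
    return C

def dct2(img):
    """Separable 2-D DCT-II of a grayscale image."""
    return dct_matrix(img.shape[0]) @ img @ dct_matrix(img.shape[1]).T

def dct_features(face, k=8):
    """Keep the low-frequency top-left k x k coefficient block as features."""
    return dct2(face.astype(float))[:k, :k].ravel()

def match(probe, gallery):
    """Index of the gallery face with the nearest feature vector (Euclidean)."""
    f = dct_features(probe)
    return int(np.argmin([np.linalg.norm(f - dct_features(g)) for g in gallery]))
```

The face-normalization stage (size and angle) is omitted; any alignment step that registers faces before `dct_features` would slot in ahead of it.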
COMPRESSION BASED FACE RECOGNITION USING DWT AND SVM (sipij)
Biometrics are used to identify a person effectively and are employed in almost all day-to-day applications. In this paper, we propose compression-based face recognition using the Discrete Wavelet Transform (DWT) and a Support Vector Machine (SVM). The novel concept of converting many images of a single person into one image using an averaging technique is introduced to reduce execution time and memory. The DWT is applied to the averaged face image to obtain the approximation (LL) and detail bands. The LL band coefficients are given as input to the SVM to obtain support vectors (SVs). The LL coefficients of the DWT and the SVs are fused by arithmetic addition to extract the final features. Euclidean Distance (ED) is used to compare test image features with database image features and compute performance parameters. It is observed that the proposed algorithm performs better than existing algorithms.
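The averaging and LL-band extraction steps described above can be sketched as follows (an illustrative implementation, not the paper's code; the SVM fusion stage is omitted, and a one-level orthonormal Haar DWT stands in for whatever wavelet the authors used):

```python
import numpy as np

def average_face(images):
    """Fuse several images of one person into a single averaged face."""
    return np.mean(np.stack(images), axis=0)

def haar_dwt2(img):
    """One level of a 2-D Haar DWT; returns the (LL, LH, HL, HH) bands."""
    a, b = img[0::2, :], img[1::2, :]
    lo, hi = (a + b) / np.sqrt(2), (a - b) / np.sqrt(2)   # filter rows
    def split_cols(m):
        c, d = m[:, 0::2], m[:, 1::2]
        return (c + d) / np.sqrt(2), (c - d) / np.sqrt(2)
    LL, LH = split_cols(lo)   # approximation and horizontal detail
    HL, HH = split_cols(hi)   # vertical and diagonal detail
    return LL, LH, HL, HH
```

Because the transform is orthonormal, the total energy of the four bands equals that of the input image; the LL band alone would then feed the SVM stage.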
Fourier Mellin transform based face recognition (iaemedu)
This document presents a face recognition algorithm based on Fourier Mellin Transform. It begins with an introduction to face recognition and challenges of illumination and pose variations. It then describes extracting illumination invariant features by computing depth maps from input images using a shape from shading algorithm. Fourier Mellin Transform is applied to the depth maps to extract features. Experiments on the ORL database showed the approach achieved 100% recognition with 4 training images and 95.7% recognition with 3 training images, demonstrating robustness to illumination and pose variations.
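The core idea behind the Fourier Mellin feature (not this paper's exact code, and without the shape-from-shading depth-map step) is that taking the FFT magnitude discards translation, and resampling that magnitude on a log-polar grid turns rotation and scale into shifts along the new axes. A minimal nearest-neighbour log-polar sampler:

```python
import numpy as np

def log_polar_magnitude(img, n_r=32, n_t=32):
    """|FFT| of the image resampled on a log-polar grid (nearest neighbour)."""
    F = np.abs(np.fft.fftshift(np.fft.fft2(img)))         # translation-invariant
    cy, cx = (np.array(F.shape) - 1) / 2.0
    r_max = min(cy, cx)
    radii = np.exp(np.linspace(0, np.log(r_max), n_r))    # log-spaced radii
    thetas = np.linspace(0, 2 * np.pi, n_t, endpoint=False)
    ys = (cy + radii[:, None] * np.sin(thetas)[None, :]).round().astype(int)
    xs = (cx + radii[:, None] * np.cos(thetas)[None, :]).round().astype(int)
    return F[np.clip(ys, 0, F.shape[0] - 1), np.clip(xs, 0, F.shape[1] - 1)]
```

A production implementation would interpolate rather than round, but even this sketch yields identical features for circularly shifted inputs.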
Fusion based multimodal authentication in biometrics using context sensitive ... (csandit)
This document discusses a novel approach called Context-Sensitive Exponent Associative Memory Model (CSEAM) for multimodal biometric authentication using face and fingerprint patterns. The approach involves three stages: 1) Fusing the face and fingerprint patterns using Principal Component Analysis, 2) Applying SVD decomposition to generate keys from the fused data and preprocessed face pattern, 3) Encoding the generated keys using the CSEAM model, which uses exponential Kronecker product. The encoded key is then stored for verification by comparing chosen samples against the stored key using the same CSEAM model. The approach aims to provide different levels of security for biometric patterns and authentication in multimodal biometrics applications.
Comparative Analysis of Hand Gesture Recognition Techniques (IJERA Editor)
During the past few years, hand gestures for interaction with computing devices have continued to be an active area of research. This paper provides a survey of hand gesture recognition. Hand gesture recognition comprises three stages: pre-processing, feature extraction or matching, and classification or recognition, each of which can use different methods and techniques. The paper gives a short description of the methods used for hand gesture recognition in existing systems, together with a comparative analysis of the methods and their benefits and drawbacks.
This document summarizes an international journal article that proposes a two-phase algorithm for face recognition in the frequency domain using discrete cosine transform (DCT) and discrete Fourier transform (DFT). The algorithm works in two phases: the first phase uses Euclidean distance to determine the K nearest neighbor training samples of a test sample. The second phase represents the test sample as a linear combination of the K nearest neighbors and classifies the sample based on which class representation has the smallest deviation from the test sample. Experimental results on FERET and ORL face databases show the two-phase algorithm based on DCT and DFT outperforms other methods like two-phase sparse representation and PCA/LDA in terms of classification accuracy.
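The two-phase classifier described above can be sketched on generic feature vectors (the DCT/DFT transform step is omitted here; this is an illustrative reconstruction of the scheme, not the article's code):

```python
import numpy as np

def two_phase_classify(test, train, labels, k=5):
    """Phase 1: pick the K nearest training samples (Euclidean distance).
    Phase 2: express the test sample as a least-squares linear combination of
    those K neighbours, then assign the class whose partial reconstruction
    deviates least from the test sample."""
    d = np.linalg.norm(train - test, axis=1)
    idx = np.argsort(d)[:k]
    X, y = train[idx], np.asarray(labels)[idx]
    a, *_ = np.linalg.lstsq(X.T, test, rcond=None)        # test ~ X.T @ a
    best, best_dev = None, np.inf
    for c in np.unique(y):
        contrib = (X[y == c].T * a[y == c]).sum(axis=1)   # class-c contribution
        dev = np.linalg.norm(test - contrib)
        if dev < best_dev:
            best, best_dev = c, dev
    return best
```

The frequency-domain version would simply apply a DCT or DFT to each sample before the distance computation in phase 1.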
MAGNETIC RESONANCE BRAIN IMAGE SEGMENTATION (VLSICS Design)
Segmentation of tissues and structures from medical images is the first step in many image analysis applications developed for medical diagnosis. With the growing research on medical image segmentation, it is essential to categorize the research outcomes and provide researchers with an overview of the existing segmentation techniques for medical images. In this paper, different image segmentation methods applied to magnetic resonance brain images are reviewed. The selection of methods draws on image processing journals, conferences, books, dissertations, and theses. The conceptual details of the methods are explained, and mathematical details are avoided for simplicity. Both broad and detailed categorizations of the reviewed segmentation techniques are provided. State-of-the-art research is surveyed, with emphasis on the developed techniques and the image properties they use. The methods described are not always mutually independent, so their interrelationships are also stated. Finally, conclusions are drawn summarizing commonly used techniques and their complexities in application.
The document discusses appearance-based face recognition using PCA and LDA algorithms. It summarizes the steps of each algorithm and compares their performance on preprocessed face images from the Faces94 database. Image preprocessing techniques like grayscale conversion and modified histogram equalization are applied before PCA and LDA to enhance image quality and improve recognition rates. The paper aims to study PCA and LDA with respect to recognition accuracy and dimensionality.
Faster Training Algorithms in Neural Network Based Approach For Handwritten T... (CSCJournals)
Handwritten text and character recognition is challenging compared to recognition of handwritten numerals and machine-printed text because of its large natural variety. Practical pattern recognition problems involve bulk data, and in principle there is a one-step, self-sufficient deterministic solution: compute the inverse of the Hessian matrix and multiply it by the first-order local gradient vector. In practice, however, when the neural network is large, inverting the Hessian is not manageable, and the additional condition that the Hessian be positive definite may not be satisfied. In these cases iterative, recursive models are used instead. Research over the past decade has shown that neural-network-based approaches give the most reliable performance in handwritten character and text recognition, but recognition performance depends on factors such as the number of training samples, the reliability and number of features per character, the training time, and the variety of handwriting. Important features from different types of handwriting are collected and fed to the neural network for training. More features do increase test accuracy, but they make the error curve take longer to converge. To reduce training time, a proper training algorithm should be chosen so that the system achieves the best training and test accuracy in the least possible time, that is, reaches its best intelligence fastest. We have used several second-order conjugate gradient algorithms to train the network and found the Scaled Conjugate Gradient (SCG) algorithm, a second-order training algorithm, to be the fastest for our application: training with SCG takes minimum time with excellent test accuracy. A scanned handwritten text is taken as input and character-level segmentation is performed. Important and reliable features are extracted from each character and used as input to a neural network for training. When the error reaches a satisfactory level (10^-12), the weights are accepted for testing on a test script. Finally, a lexicon matching algorithm resolves minor misclassifications.
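The contrast the abstract draws can be made concrete: a direct Newton step requires forming and solving against the full Hessian, whereas a conjugate-gradient step (the mechanism underlying SCG-style training) only needs Hessian-vector products. A toy comparison on a small quadratic, purely for illustration:

```python
import numpy as np

def newton_step(H, g):
    """Direct Newton step d = -H^{-1} g; needs H positive definite, O(n^3) work."""
    return -np.linalg.solve(H, g)

def cg_step(H, g, iters=50, tol=1e-10):
    """Solve H d = -g by conjugate gradients, touching H only through
    matrix-vector products -- the idea behind scaled-conjugate-gradient training."""
    d = np.zeros_like(g)
    r = -g - H @ d          # initial residual
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Hp = H @ p
        alpha = rs / (p @ Hp)
        d += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return d
```

For a true SPD Hessian both give the same step; the point is that the CG version never forms or inverts H, which is what makes it feasible for large networks.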
An Information Maximization approach of ICA for Gender Classification (IDES Editor)
In this paper, a novel and successful method for gender classification from human faces using a dimensionality reduction technique is proposed. Independent Component Analysis (ICA) is one such technique. The current scheme focuses on the different algorithms and architectures of ICA: an information-maximization ICA is discussed with its two architectures and compared with the two architectures of FastICA. A Support Vector Machine (SVM) is used as the classifier to separate the male and female classes. All experiments are done on the FERET database, with results obtained for different combinations of training and test set sizes. For the larger training set, the SVM achieves an accuracy of 98%. Accuracy varies with the size of the test set, and the proposed system achieves an average accuracy of 96%. A further improvement is obtained using class discriminability, which reaches 100% accuracy.
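To make the ICA step concrete, here is a minimal sketch (assumed details, not the paper's implementation) of whitening followed by a one-unit FastICA fixed-point iteration with the tanh nonlinearity; an information-maximization variant would replace the update rule but keep the same whitening front end:

```python
import numpy as np

def whiten(X):
    """Center and whiten data (n_samples x n_features)."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    d, E = np.linalg.eigh(cov)
    W = E @ np.diag(1.0 / np.sqrt(d)) @ E.T   # symmetric whitening matrix
    return Xc @ W

def fastica_one_unit(Z, iters=200, seed=0):
    """Estimate one independent component of whitened data Z via the
    FastICA fixed point with g(u) = tanh(u)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=Z.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(iters):
        u = Z @ w
        w_new = (Z * np.tanh(u)[:, None]).mean(axis=0) \
                - (1 - np.tanh(u) ** 2).mean() * w
        w_new /= np.linalg.norm(w_new)
        if abs(abs(w_new @ w) - 1) < 1e-9:    # converged (up to sign)
            w = w_new
            break
        w = w_new
    return w
```

The projections `Z @ w` would then serve as the low-dimensional features handed to the SVM classifier.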
Review and comparison of tasks scheduling in cloud computing (ijfcstjournal)
Recently, there has been a dramatic increase in the popularity of cloud computing systems that rent
computing resources on-demand, bill on a pay-as-you-go basis, and multiplex many users on the same
physical infrastructure. It is a virtual pool of resources which are provided to users via Internet. It gives
users virtually unlimited pay-per-use computing resources without the burden of managing the underlying
infrastructure. One of the goals is to use the resources efficiently and gain maximum profit. Scheduling is a
critical problem in Cloud computing, because a cloud provider has to serve many users in Cloud
computing system. So scheduling is the major issue in establishing Cloud computing systems. The
scheduling algorithms should order the jobs so as to balance improved performance and quality of service against efficiency and fairness among the jobs. This paper introduces and explores some of the scheduling methods proposed for cloud computing. Finally, the waiting time and execution time of some of the proposed algorithms are evaluated.
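As a small worked example of the waiting-time evaluation the review describes (hypothetical burst times, not data from the paper), compare first-come-first-served ordering with shortest-job-first:

```python
def waiting_times(burst_times):
    """Average waiting time when jobs run in the given order (non-preemptive)."""
    wait, elapsed = [], 0
    for b in burst_times:
        wait.append(elapsed)   # this job waits for everything scheduled before it
        elapsed += b
    return sum(wait) / len(wait)

jobs = [8, 4, 1, 3]                     # hypothetical burst times
fcfs = waiting_times(jobs)              # arrival order: average wait 8.25
sjf = waiting_times(sorted(jobs))       # shortest job first: average wait 3.25
```

Shortest-job-first minimizes average waiting time for a fixed job set, which is exactly the kind of trade-off (throughput versus fairness to long jobs) the surveyed schedulers balance.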
Comparison of Different Methods for Fusion of Multimodal Medical Images (IRJET Journal)
This document compares different methods for fusing multimodal medical images, including PCA, DCT, SWT, and DWT. It provides an overview of each method, including formulations, process flow diagrams, algorithms, and advantages/disadvantages. PCA uses eigenvectors to reveal internal data structure and remove redundancy. DCT expresses image blocks as sums of cosine functions. SWT is a translation-invariant modification of DWT that does not decimate coefficients. DWT decomposes images into coarse and detailed frequency subbands using wavelet transforms. The document reviews each method for fusing medical images from different modalities to extract complementary information.
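Of the methods compared, the PCA approach is the simplest to sketch: the fusion weights come from the principal eigenvector of the 2 x 2 covariance of the two images' pixel values (an illustrative sketch assuming registered grayscale inputs, not the surveyed paper's code):

```python
import numpy as np

def pca_fuse(img1, img2):
    """Fuse two registered grayscale images with weights taken from the
    principal eigenvector of the 2 x 2 pixel-value covariance matrix."""
    data = np.stack([img1.ravel(), img2.ravel()])
    cov = np.cov(data)
    vals, vecs = np.linalg.eigh(cov)
    v = np.abs(vecs[:, -1])        # eigenvector of the largest eigenvalue
    w = v / v.sum()                # normalize to fusion weights
    return w[0] * img1 + w[1] * img2
```

The DWT/SWT methods would instead fuse per-subband (for example, averaging approximation coefficients and taking the maximum-magnitude detail coefficients).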
FACE RECOGNITION USING DIFFERENT LOCAL FEATURES WITH DIFFERENT DISTANCE TECHN... (IJCSEIT Journal)
A face recognition system using different local features with different distance measures is proposed in this paper. The proposed method is fast and gives accurate detection. The feature vector is based on the eigenvalues, eigenvectors, and diagonal vectors of sub-images: images are partitioned into sub-images to detect local features, the sub-partitions are rearranged into vertical and horizontal matrices, and the eigenvalues, eigenvectors, and diagonal vectors are computed for these matrices. A global feature vector is then generated for face recognition. Experiments are performed on the benchmark YALE face database. Results indicate that the proposed method gives better recognition performance, in terms of average recognition rate and retrieval time, than existing methods.
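A hedged sketch of the feature construction (the block size and the exact rearrangement into vertical/horizontal matrices are assumptions; this is not the paper's code):

```python
import numpy as np

def local_features(img, block=4):
    """Split the image into block x block sub-images and concatenate the
    eigenvalues and the main diagonal of each sub-image into one vector."""
    feats = []
    h, w = img.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            sub = img[i:i + block, j:j + block].astype(float)
            # eigenvalues of a general square block may be complex; keep the
            # real parts, sorted, so the feature is order-independent
            feats.append(np.sort(np.linalg.eigvals(sub).real))
            feats.append(np.diag(sub))               # diagonal vector
    return np.concatenate(feats)
```

The resulting global vector would then be compared with whichever distance measure (Euclidean, city-block, etc.) is under test.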
A novel embedded hybrid thinning algorithm for... (prjpublications)
The document proposes a hybrid thinning algorithm that combines the Stentiford and Zhang-Suen thinning algorithms. It compares the hybrid algorithm to the original Stentiford and Zhang-Suen algorithms on an input image. The hybrid algorithm more accurately thins the image to a single pixel width but does not improve time complexity compared to the original algorithms. The hybrid approach uses four templates across two sub-iterations to identify and remove pixels based on connectivity values until no more can be removed. Experimental results show the hybrid algorithm more effectively increases image contrast than the original thinning algorithms.
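One of the hybrid's two ingredients, the standard Zhang-Suen pass, can be sketched directly (the Stentiford templates and the hybrid's combination logic are not reproduced here):

```python
import numpy as np

def zhang_suen(img):
    """Zhang-Suen thinning on a binary image (1 = foreground)."""
    img = img.copy().astype(np.uint8)
    changed = True
    while changed:
        changed = False
        for step in (0, 1):                      # the two sub-iterations
            to_delete = []
            for y in range(1, img.shape[0] - 1):
                for x in range(1, img.shape[1] - 1):
                    if img[y, x] == 0:
                        continue
                    # neighbours p2..p9, clockwise starting from north
                    p = [img[y-1, x], img[y-1, x+1], img[y, x+1], img[y+1, x+1],
                         img[y+1, x], img[y+1, x-1], img[y, x-1], img[y-1, x-1]]
                    B = sum(p)                   # non-zero neighbours
                    A = sum(p[k] == 0 and p[(k + 1) % 8] == 1 for k in range(8))
                    if step == 0:
                        cond = p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0
                    else:
                        cond = p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0
                    if 2 <= B <= 6 and A == 1 and cond:
                        to_delete.append((y, x))
            for y, x in to_delete:
                img[y, x] = 0
            changed = changed or bool(to_delete)
    return img
```

Deleting only after each full scan keeps the result independent of traversal order, which is what makes the two sub-iterations preserve connectivity.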
This document discusses principal component analysis (PCA) for face recognition. It begins with an introduction to face recognition and PCA. PCA works by calculating eigenvectors from a set of face images, which represent the principal components that account for the most variance in the image data. These eigenvectors are called "eigenfaces" and can be used to reconstruct the face images. The document then discusses how the system is implemented, including preparing a face database, normalizing the training images, calculating the eigenfaces/principal components, projecting the face images into this reduced space, and recognizing faces by calculating distances between projected test images and training images.
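The eigenface pipeline above can be sketched compactly using the standard small-Gram-matrix trick (a minimal illustration, with hypothetical parameter choices, not the document's own code):

```python
import numpy as np

def train_eigenfaces(faces, n_components=4):
    """faces: (n_images, n_pixels). Returns the mean face and top eigenfaces."""
    mean = faces.mean(axis=0)
    A = faces - mean
    # eigenvectors of the small n x n Gram matrix, lifted back to pixel space
    vals, vecs = np.linalg.eigh(A @ A.T)
    order = np.argsort(vals)[::-1][:n_components]
    eigenfaces = A.T @ vecs[:, order]
    eigenfaces /= np.linalg.norm(eigenfaces, axis=0)   # unit-norm columns
    return mean, eigenfaces

def project(face, mean, eigenfaces):
    """Coordinates of a face in eigenface space."""
    return eigenfaces.T @ (face - mean)

def recognize(probe, faces, mean, eigenfaces):
    """Nearest training face in eigenface space (Euclidean distance)."""
    w = project(probe, mean, eigenfaces)
    W = np.array([project(f, mean, eigenfaces) for f in faces])
    return int(np.argmin(np.linalg.norm(W - w, axis=1)))
```

Working with the n x n Gram matrix instead of the pixel-space covariance is what makes training feasible when each image has far more pixels than there are training images.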
Reliable gait features are required to extract gait sequences from images. This paper suggests a simple moment-based method for gait identification. Moment values are extracted from different numbers of frames of grayscale and silhouette images from the CASIA database and used as feature values. Fuzzy logic and a nearest neighbour classifier are used for classification; both achieved high recognition rates.
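A sketch of the moment features and the nearest-neighbour stage (the exact moment orders used are an assumption; central moments are chosen here because they are invariant to the subject's position in the frame):

```python
import numpy as np

def central_moments(img, orders=((0, 0), (2, 0), (0, 2), (1, 1), (3, 0), (0, 3))):
    """Central moments mu_pq of a grayscale or silhouette frame."""
    img = img.astype(float)
    ys, xs = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    cx, cy = (xs * img).sum() / m00, (ys * img).sum() / m00   # centroid
    return np.array([((xs - cx) ** p * (ys - cy) ** q * img).sum()
                     for p, q in orders])

def nearest_neighbour(feat, gallery_feats, labels):
    """Label of the gallery feature vector closest to `feat` (Euclidean)."""
    d = np.linalg.norm(np.asarray(gallery_feats) - feat, axis=1)
    return labels[int(np.argmin(d))]
```

The fuzzy-logic classifier the paper also evaluates would replace `nearest_neighbour` while consuming the same moment features.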
This document summarizes a research paper on color image segmentation using k-means clustering. It discusses how k-means clustering can be used to group color image pixels into a set number of classes without using training data. The clustering groups similar color pixels to obtain segmentation. This avoids calculating features for every pixel and provides efficient segmentation based on color similarity. The document outlines the k-means clustering process used and how it segments an image into distinct colored regions to extract important objects.
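The clustering step can be sketched in a few lines (an illustrative implementation; the deterministic spread-out initialization is an assumption, not the paper's method):

```python
import numpy as np

def kmeans_segment(pixels, k=2, iters=20):
    """Plain k-means on pixel colour vectors; returns labels and centres."""
    idx = np.linspace(0, len(pixels) - 1, k).astype(int)   # spread-out init
    centres = pixels[idx].astype(float)
    for _ in range(iters):
        # assign each pixel to its nearest centre
        d = np.linalg.norm(pixels[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each centre to the mean of its assigned pixels
        for c in range(k):
            if np.any(labels == c):
                centres[c] = pixels[labels == c].mean(axis=0)
    return labels, centres
```

Reshaping the label array back to the image's height and width gives the segmentation map; no per-pixel feature computation or training data is needed, as the summary notes.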
Successive Geometric Center Based Dynamic Signature Recognition (Dr. Vinayak Bharadi)
The document summarizes research on signature recognition using successive geometric centers, grid, and texture features. It discusses extracting features from dynamic signatures captured using a digitizer tablet. Successive geometric centers are extracted from segmented regions of the signature at different depths. Grid features provide pixel density information across a segmented grid. Texture features capture pressure pattern transitions. The features are evaluated for signature recognition and verification performance based on metrics like true acceptance and rejection rates. The goal is to analyze the proposed method and improve over existing systems.
Medical image analysis and processing using a dual transform (eSAT Journals)
Abstract: The demand for images in the medical field has increased drastically over the years, and the need to reduce storage space has driven image compression. This paper presents a dual-transform algorithm for medical image compression. The experimental results show how the compression ratio (CR), peak signal-to-noise ratio (PSNR), and signal-to-noise ratio (SNR) of different compression algorithms compare with the dual transform.
Keywords: DCT, SPIHT, Haar wavelet, linear approximation transform, image compression, Singular Value Decomposition (SVD).
This document presents a dual transform method for medical image compression that uses both singular value decomposition (SVD) and Haar wavelet transform. It compares the proposed dual transform method to existing Haar wavelet-SPIHT and DCT-SPIHT compression methods on 3 medical images. The dual transform method achieved higher compression ratios and PSNR values at 0.4 bits per pixel compared to the other methods, indicating better preservation of image quality at higher compression. The dual transform is thus concluded to be suitable for compressing medical images where no deterioration of image quality is acceptable.
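The SVD half of the dual transform, plus the PSNR metric used for evaluation, can be sketched as follows (the Haar/SPIHT stage is omitted; this is a hedged illustration, not the paper's implementation):

```python
import numpy as np

def svd_compress(img, k):
    """Rank-k SVD approximation of a grayscale image: keep the k largest
    singular values and their vectors."""
    U, s, Vt = np.linalg.svd(img.astype(float), full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

def psnr(orig, approx, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((orig.astype(float) - approx) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(peak ** 2 / mse)
```

Storage drops from m*n values to k*(m + n + 1), which is where the compression ratio comes from; the dual scheme would then wavelet-code the rank-k result.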
Frequency Domain Blockiness and Blurriness Meter for Image Quality Assessment (CSCJournals)
Image and video compression introduces distortions (artefacts) to the coded image. The most prominent artefacts added are blockiness and blurriness. Many existing quality meters are normally distortion-specific. This paper proposes an objective quality meter for quantifying the combined blockiness and blurriness distortions in frequency domain. The model first applies edge detection and cancellation, then spatial masking to mimic the characteristics of the human visual system. Blockiness is then estimated by transforming image into frequency domain, followed by finding the ratio of harmonics to other AC components. Blurriness is determined by comparing the high frequency coefficients of the reference and coded images due to the fact that blurriness reduces the high frequency coefficients. Then, both blockiness and blurriness distortions are combined for a single quality metric. The meter is tested on blocky and blurred images from the LIVE image database, with a correlation coefficient of 95-96%.
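The blurriness principle the meter relies on, that blurring suppresses high-frequency coefficients, can be demonstrated with a toy spectral-energy ratio (the cutoff and the box filter are illustrative assumptions, not the paper's model):

```python
import numpy as np

def high_freq_ratio(img, cutoff=0.25):
    """Fraction of spectral energy above a normalised radial frequency cutoff;
    blurring lowers this ratio."""
    F = np.fft.fftshift(np.fft.fft2(img.astype(float)))
    P = np.abs(F) ** 2
    h, w = img.shape
    fy = (np.arange(h) - h // 2) / h
    fx = (np.arange(w) - w // 2) / w
    r = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)   # radial frequency
    return P[r > cutoff].sum() / P.sum()

def blur(img):
    """3x3 box blur via shifted sums: a toy stand-in for a codec's low-pass effect."""
    out = sum(np.roll(np.roll(img, dy, 0), dx, 1)
              for dy in (-1, 0, 1) for dx in (-1, 0, 1))
    return out / 9.0
```

The actual meter is no-reference for blockiness (ratio of block-grid harmonics to other AC components) but compares against the reference's high-frequency coefficients for blurriness, as described above.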
The presentation gives an introduction to the origins of EduCamp, its key principles, helpful rules, and the challenges of organising such an event. Finally, some survey results from the 3rd EduCamp in Ilmenau are presented.
Reading the literature and keeping up to date (Sarah Purcell)
This document provides guidance on finding and engaging with academic literature. It discusses locating literature through library databases, books, and journals. It offers tips for choosing and effectively reading textbooks and journal articles, such as evaluating the authors and date, reading introductions and conclusions, and keeping notes. The document stresses evaluating all sources and provides criteria for assessing authority, accuracy, and bias. It emphasizes the importance of record keeping and summarizing sources for future reference in academic writing.
A global digital content hub for fashion, beauty, and lifestyle: introducing Fashion In Korea (Evan Ryu)
Fashion In Korea is a domestic and international digital media content project in the fashion, beauty, and lifestyle sectors. It promotes Korea's cultural industries globally through digital, online, and social media channels, carries out convergence marketing linked with offline business, and provides consulting. It also runs professional training courses for digital marketers in the fashion, culture, and lifestyle industries.
Kamela Kettles is proposing a senior project to raise money for a family with multiple special needs children by organizing several fundraisers. She plans to create a non-profit organization and bank account to collect donations. Her fundraisers will include selling t-shirts and hosting an awareness walk. She has budgeted $200 and will seek donations from local companies. Her project facilitator, Magen Hickey, works at a pediatric therapy center and has 5 years of experience assisting special needs children.
Rheingold U - an experiment in distance teaching (Rheingold U)
Rheingold U. is a totally online learning community, offering courses that usually run for five weeks, with five live sessions and ongoing asynchronous discussions through forums, blogs, wikis, mindmaps, and social bookmarks. In my thirty years of experience online and my six years teaching students face to face and online at University of California, Berkeley and Stanford University, I've learned that magic can happen when a skilled facilitator works collaboratively with a group of motivated students. The technology affords but does not guarantee peer to peer learning and collaborative inquiry. That's where I come in.
CIS14: Kantara - Enabling Trusted and Secure Online Access to Government of C... (CloudIDSummit)
The document summarizes the Government of Canada's strategy to enable trusted and secure online access to government services through identity federation. It discusses establishing a federated identity system that allows validation of citizen identities across government departments and jurisdictions. This will provide a seamless experience for citizens to access multiple online services using a single authenticated identity. The strategy involves initial credential federation followed by longer term identity federation, in collaboration with private sector identity providers and other levels of government. The goal is to improve convenience for citizens while reducing costs for governments.
An Information Maximization approach of ICA for Gender ClassificationIDES Editor
In this paper, a novel and successful method for
gender classification from human faces using dimensionality
reduction technique is proposed. Independent Component
Analysis (ICA) is one of such techniques. In the current
scheme, a thrust is given on the different algorithms and
architectures of ICA. An information maximization ICA is
discussed with its two architecture and compared with the two
architectures of fast ICA. Support Vector Machine (SVM) is
used as a classifier for the separation of male and female
classes. All experiments are done on FERET database. Results
are obtained for the different combinations of train and test
database sizes. For larger
training set SVM is performing with an accuracy of 98%. The
accuracy values are varied for change in size of testing set and
the proposed system performs with an average accuracy of
96%. An improvement in performance is achieved using class
discriminability which performs with 100% accuracy.
Review and comparison of tasks scheduling in cloud computingijfcstjournal
Recently, there has been a dramatic increase in the popularity of cloud computing systems that rent
computing resources on-demand, bill on a pay-as-you-go basis, and multiplex many users on the same
physical infrastructure. It is a virtual pool of resources which are provided to users via Internet. It gives
users virtually unlimited pay-per-use computing resources without the burden of managing the underlying
infrastructure. One of the goals is to use the resources efficiently and gain maximum profit. Scheduling is a
critical problem in Cloud computing, because a cloud provider has to serve many users in Cloud
computing system. So scheduling is the major issue in establishing Cloud computing systems. The
scheduling algorithms should order the jobs in a way where balance between improving the performance
and quality of service and at the same time maintaining the efficiency and fairness among the jobs. This
paper introduces and explores some of the methods provided for in cloud computing has been scheduled.
Finally the waiting time and time to implement some of the proposed algorithm is evaluated
Comparison of Different Methods for Fusion of Multimodal Medical ImagesIRJET Journal
This document compares different methods for fusing multimodal medical images, including PCA, DCT, SWT, and DWT. It provides an overview of each method, including formulations, process flow diagrams, algorithms, and advantages/disadvantages. PCA uses eigenvectors to reveal internal data structure and remove redundancy. DCT expresses image blocks as sums of cosine functions. SWT is a translation-invariant modification of DWT that does not decimate coefficients. DWT decomposes images into coarse and detailed frequency subbands using wavelet transforms. The document reviews each method for fusing medical images from different modalities to extract complementary information.
FACE RECOGNITION USING DIFFERENT LOCAL FEATURES WITH DIFFERENT DISTANCE TECHN...IJCSEIT Journal
A face recognition system using different local features with different distance measures is proposed in this paper. The proposed method is fast and gives accurate detection. The feature vector is based on eigenvalues, eigenvectors, and diagonal vectors of sub-images. Images are partitioned into sub-images to detect local features, and the sub-partitions are rearranged into vertical and horizontal matrices. Eigenvalues, eigenvectors, and diagonal vectors are computed for these matrices, and a global feature vector is generated for face recognition. Experiments are performed on the benchmark YALE face database. Results indicate that the proposed method gives better recognition performance in terms of average recognition rate and retrieval time compared to existing methods.
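The feature-construction idea above — eigenvalues and diagonal vectors of sub-images concatenated into a global vector, then compared by a distance measure — can be sketched as follows. The block size and the use of eigenvalues of the symmetrized sub-block are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def block_features(img, block=4):
    """Concatenate eigenvalues and diagonal vectors of each sub-image
    into one global feature vector (a sketch of the paper's idea)."""
    h, w = img.shape
    feats = []
    for i in range(0, h, block):
        for j in range(0, w, block):
            sub = img[i:i + block, j:j + block].astype(float)
            feats.append(np.linalg.eigvalsh(sub @ sub.T))  # real eigenvalues
            feats.append(np.diag(sub))                     # diagonal vector
    return np.concatenate(feats)

rng = np.random.default_rng(1)
a = rng.integers(0, 256, size=(8, 8))
b = a.copy()
f_a, f_b = block_features(a), block_features(b)
print(float(np.linalg.norm(f_a - f_b)))  # Euclidean distance → 0.0
```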
A novel embedded hybrid thinning algorithm forprjpublications
The document proposes a hybrid thinning algorithm that combines the Stentiford and Zhang-Suen thinning algorithms. It compares the hybrid algorithm to the original Stentiford and Zhang-Suen algorithms on an input image. The hybrid algorithm more accurately thins the image to a single pixel width but does not improve time complexity compared to the original algorithms. The hybrid approach uses four templates across two sub-iterations to identify and remove pixels based on connectivity values until no more can be removed. Experimental results show the hybrid algorithm more effectively increases image contrast than the original thinning algorithms.
This document discusses principal component analysis (PCA) for face recognition. It begins with an introduction to face recognition and PCA. PCA works by calculating eigenvectors from a set of face images, which represent the principal components that account for the most variance in the image data. These eigenvectors are called "eigenfaces" and can be used to reconstruct the face images. The document then discusses how the system is implemented, including preparing a face database, normalizing the training images, calculating the eigenfaces/principal components, projecting the face images into this reduced space, and recognizing faces by calculating distances between projected test images and training images.
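A minimal sketch of the eigenface pipeline described above — mean-centering, eigenface computation via SVD, projection, and nearest-neighbor matching — using synthetic placeholder data rather than real face images:

```python
import numpy as np

def train_eigenfaces(faces, k):
    """faces: (n_images, n_pixels). Returns the mean face and top-k eigenfaces."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # SVD of the centered data yields the covariance eigenvectors ("eigenfaces")
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def project(face, mean, eigenfaces):
    """Weights of a face in the reduced eigenface space."""
    return eigenfaces @ (face - mean)

rng = np.random.default_rng(2)
train = rng.normal(size=(10, 64))          # 10 fake 8x8 "faces", flattened
mean, ef = train_eigenfaces(train, k=5)
weights = np.array([project(f, mean, ef) for f in train])

# Recognition: nearest projected training image to a slightly perturbed test image
test_w = project(train[3] + rng.normal(scale=0.01, size=64), mean, ef)
match = np.argmin(np.linalg.norm(weights - test_w, axis=1))
print(int(match))  # → 3
```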
Reliable gait features are required to extract gait sequences from images. This paper suggests a simple method for gait identification based on moments. Moment values are extracted from different numbers of frames of grayscale and silhouette images from the CASIA database and used as feature values. Fuzzy logic and a nearest-neighbor classifier are used for classification; both achieve high recognition rates.
This document summarizes a research paper on color image segmentation using k-means clustering. It discusses how k-means clustering can be used to group color image pixels into a set number of classes without using training data. The clustering groups similar color pixels to obtain segmentation. This avoids calculating features for every pixel and provides efficient segmentation based on color similarity. The document outlines the k-means clustering process used and how it segments an image into distinct colored regions to extract important objects.
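The k-means segmentation process outlined above can be sketched in a few lines. The farthest-point initialization and the synthetic two-color data are illustrative choices, not from the paper:

```python
import numpy as np

def kmeans_segment(pixels, k, iters=10):
    """Plain k-means over (n, 3) color pixels; no training data needed."""
    # farthest-point initialization keeps this toy example deterministic
    centers = [pixels[0].astype(float)]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(pixels - c, axis=1) for c in centers], axis=0)
        centers.append(pixels[d.argmax()].astype(float))
    centers = np.array(centers)
    for _ in range(iters):
        # assign every pixel to its nearest cluster center
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for c in range(k):                      # recompute the centers
            if np.any(labels == c):
                centers[c] = pixels[labels == c].mean(axis=0)
    return labels, centers

# two well-separated color populations: reddish and bluish pixels
rng = np.random.default_rng(3)
red = rng.normal([200, 30, 30], 5.0, size=(100, 3))
blue = rng.normal([30, 30, 200], 5.0, size=(100, 3))
pixels = np.vstack([red, blue])
labels, centers = kmeans_segment(pixels, k=2)
print(labels[0] != labels[100])  # the two color groups land in different clusters
```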
Successive Geometric Center Based Dynamic Signature RecognitionDr. Vinayak Bharadi
The document summarizes research on signature recognition using successive geometric centers, grid, and texture features. It discusses extracting features from dynamic signatures captured using a digitizer tablet. Successive geometric centers are extracted from segmented regions of the signature at different depths. Grid features provide pixel density information across a segmented grid. Texture features capture pressure pattern transitions. The features are evaluated for signature recognition and verification performance based on metrics like true acceptance and rejection rates. The goal is to analyze the proposed method and improve over existing systems.
Medical image analysis and processing using a dual transformeSAT Journals
Abstract: The demand for images in the medical field has increased drastically over the years, and the need to reduce storage space has resulted in image compression. This paper presents a dual-transform algorithm for medical image compression. The experimental results determine how the compression ratio (CR), peak signal-to-noise ratio (PSNR), and signal-to-noise ratio (SNR) of different compression algorithms compare to the dual-transform algorithm. Keywords: DCT, SPIHT, Haar Wavelet, Linear approximation transform, image compression, Singular Value Decomposition (SVD).
This document presents a dual transform method for medical image compression that uses both singular value decomposition (SVD) and Haar wavelet transform. It compares the proposed dual transform method to existing Haar wavelet-SPIHT and DCT-SPIHT compression methods on 3 medical images. The dual transform method achieved higher compression ratios and PSNR values at 0.4 bits per pixel compared to the other methods, indicating better preservation of image quality at higher compression. The dual transform is thus concluded to be suitable for compressing medical images where no deterioration of image quality is acceptable.
Frequency Domain Blockiness and Blurriness Meter for Image Quality AssessmentCSCJournals
Image and video compression introduces distortions (artefacts) to the coded image. The most prominent artefacts added are blockiness and blurriness. Many existing quality meters are normally distortion-specific. This paper proposes an objective quality meter for quantifying the combined blockiness and blurriness distortions in frequency domain. The model first applies edge detection and cancellation, then spatial masking to mimic the characteristics of the human visual system. Blockiness is then estimated by transforming image into frequency domain, followed by finding the ratio of harmonics to other AC components. Blurriness is determined by comparing the high frequency coefficients of the reference and coded images due to the fact that blurriness reduces the high frequency coefficients. Then, both blockiness and blurriness distortions are combined for a single quality metric. The meter is tested on blocky and blurred images from the LIVE image database, with a correlation coefficient of 95-96%.
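The blurriness half of the meter rests on the observation that blurring removes high-frequency energy. A toy numpy illustration of that principle, comparing the high-frequency energy of a reference and a blurred image — the cutoff, filter, and test image are illustrative, not the paper's model:

```python
import numpy as np

def high_freq_energy(img, cutoff=0.25):
    """Sum of spectral energy outside the central low-frequency square."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    ch, cw = int(h * cutoff), int(w * cutoff)
    mask = np.ones((h, w), dtype=bool)
    mask[h // 2 - ch:h // 2 + ch, w // 2 - cw:w // 2 + cw] = False
    return float(np.sum(np.abs(f[mask]) ** 2))

def box_blur(img):
    """3x3 mean filter via shifted sums (wrap-around edges, for brevity)."""
    return sum(np.roll(np.roll(img, dy, 0), dx, 1)
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0

rng = np.random.default_rng(5)
ref = rng.normal(size=(32, 32))
blurred = box_blur(ref)
ratio = high_freq_energy(blurred) / high_freq_energy(ref)
print(ratio < 1.0)  # blurring removes high-frequency energy → True
```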
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
The presentation gives an introduction to the origins of EduCamp, the key principles, helpful rules and challenges in organisating such an event. Finally some survey results from the 3rd EduCamp in Ilmenau are presented.
Reading the literature and keeping up to dateSarah Purcell
This document provides guidance on finding and engaging with academic literature. It discusses locating literature through library databases, books, and journals. It offers tips for choosing and effectively reading textbooks and journal articles, such as evaluating the authors and date, reading introductions and conclusions, and keeping notes. The document stresses evaluating all sources and provides criteria for assessing authority, accuracy, and bias. It emphasizes the importance of record keeping and summarizing sources for future reference in academic writing.
A global digital content hub for fashion, beauty, and lifestyle: introducing 패션인코리아 (Fashion In Korea)Evan Ryu
패션인코리아 (Fashion In Korea) is a digital media content project covering the fashion, beauty, and lifestyle sectors in Korea and abroad. It promotes the Korean cultural industry globally through digital, online, and social media channels, carries out convergence marketing linked with offline business, and provides consulting. It also runs professional training programs for digital marketers in the fashion, culture, and lifestyle industries.
Kamela Kettles is proposing a senior project to raise money for a family with multiple special needs children by organizing several fundraisers. She plans to create a non-profit organization and bank account to collect donations. Her fundraisers will include selling t-shirts and hosting an awareness walk. She has budgeted $200 and will seek donations from local companies. Her project facilitator, Magen Hickey, works at a pediatric therapy center and has 5 years of experience assisting special needs children.
Rheingold U - an experiment in distance teachingRheingold U
Rheingold U. is a totally online learning community, offering courses that usually run for five weeks, with five live sessions and ongoing asynchronous discussions through forums, blogs, wikis, mindmaps, and social bookmarks. In my thirty years of experience online and my six years teaching students face to face and online at University of California, Berkeley and Stanford University, I've learned that magic can happen when a skilled facilitator works collaboratively with a group of motivated students. The technology affords but does not guarantee peer to peer learning and collaborative inquiry. That's where I come in.
CIS14: Kantara - Enabling Trusted and Secure Online Access to Government of C...CloudIDSummit
The document summarizes the Government of Canada's strategy to enable trusted and secure online access to government services through identity federation. It discusses establishing a federated identity system that allows validation of citizen identities across government departments and jurisdictions. This will provide a seamless experience for citizens to access multiple online services using a single authenticated identity. The strategy involves initial credential federation followed by longer term identity federation, in collaboration with private sector identity providers and other levels of government. The goal is to improve convenience for citizens while reducing costs for governments.
The document provides examples and explanations for using the past continuous and future going to tenses in English. It includes sample sentences using these tenses, such as "When I was on my way home, I saw an accident." It also lists key words that are used with these tenses, such as "when", "while", and "going to". There are exercises for students to practice transforming verbs into the correct past or future tense. The document concludes with lyrics to the song "You're Going to Lose That Girl".
Virtualization allows a single computer to run multiple virtual machines simultaneously. This allows developers to easily create and restore test environments. It also enables demonstrators to maintain separate demo environments. Virtual machine snapshots can be easily saved and shared between computers, benefiting developers, demonstrators, and home users. However, virtualization performance declines as more virtual machines are run simultaneously on a single computer.
When it started in 2007, EMFCompare 1.x was designed to compare models that could fit entirely in memory. Since then, EMF has been used to design bigger and bigger models, to the point that they can sometimes barely fit in a laptop's memory. EMFCompare 1.x is unsuitable for comparing such big models because its comparison engine needs to hold 2 or 3 versions (for a three-way diff) of the models under comparison.
To be able to work with such large models, models are often split in multiple resources to form a set of strongly connected components in a way that a single component can fit entirely in memory. Yet EMFCompare 1.x cannot handle strategies adapted to these models such as not loading the entire model in memory or loading it piece after piece.
EMFCompare 2 is a rewrite from scratch with scalability in mind. It now has a smart scope feature to leverage the above strategies: it loads only the fragments likely to have changed and then compares only those parts. This way, EMFCompare 2 is able to compare models with millions of elements in a number of steps proportional to the number of differences.
During this talk, we will introduce the new framework and show how it now scales to millions of elements, supported by several demos. We will also show the brand-new user interface that has been revamped to scale along with the new engine.
The document discusses several key facts about DNA:
- DNA can store vast amounts of information in a very small space within cells. The DNA in a single human could stretch to the sun and back over 600,000 times.
- While DNA is 99.9% identical between all humans, the 0.1% difference results in our unique characteristics. This difference amounts to around 3 million nucleotides.
- DNA is a highly efficient storage system, able to hold 25 gigabytes of data per inch. This shows DNA is more advanced than computer storage technologies.
- DNA replication allows DNA to make copies of itself in a semi-conservative process where the original strands remain intact and act as templates for new strands.
Brahmandihi village in Odisha is famous for its pottery work, which is the main source of income for the villagers. The pottery process involves using a stone wheel to shape wet black clay into pots, toys, and other articles like coin boxes and lamps. The shaped clay items are left to dry and then fired in a furnace before being supplied to the local market. Pottery from Brahmandihi is known for its amazing designs and is often used for lamps during the Diwali festival.
This document contains sample personalized letters that can be purchased from the website topsantaletters.com. It includes options for a classroom letter from Santa addressed to a teacher, as well as a family letter addressed to parents. Both letters provide updates on Santa and his preparations for Christmas Eve deliveries. The website offers personalized Santa letters that can be customized with the child's name and other details for $9.95 each.
Sunglasses or sun glasses are a form of protective eyewear designed primarily to prevent bright sunlight and high-energy visible light from damaging or discomforting the eyes.
Healthy Lifestyles Presentation to BOE: August 2014Lynn McMullin
The document summarizes the proposed policy on school nutrition and physical activity. It provides background on parent survey responses calling for healthier options and less junk food. Classroom celebrations were noted to frequently include unhealthy foods like cupcakes and donuts. The proposed policy aims to offer healthier celebration options and food choices while still allowing celebrations. It is presented as thoughtful, research-based, and focused on student health and well-being rather than being punitive. The policy does not ban food but provides guidelines and resources for healthier options.
Globalization is influencing higher education trends in South Korea. South Korean universities are increasingly adopting Western-style curriculum and programs to attract more international students and compete globally. However, this is also contributing to "brain drain" as many Korean students choose to study and work abroad after graduation. The effects of globalization in higher education are creating both educational and cultural changes in South Korean classrooms.
This document discusses the tension teachers can feel between embracing active learning approaches like games and play in the classroom, and pressures to adhere strictly to standards and assessments.
It notes that teachers have expressed a willingness to use games but reluctance to do so publicly due to fears over their professional reputation and ability to implement changes. Research is presented showing active learning and play can enhance cognitive development and comprehension, but this contrasts with some views of what professional teaching should entail.
The document analyzes teacher artifacts and responses showing they feel pulled between a "Jekyll" identity of strictly following mandates, and a "Hyde" identity of modifying instruction to better engage students, creating two different classroom approaches and identities.
Ka-32 helicopters have been in series production since 1986 and come in several versions for different missions. They have a unique coaxial rotor design with two main contra-rotating rotors, powered by two turboshaft engines. This compact design allows the helicopter to carry heavy external loads. Ka-32 helicopters have found applications in transportation, search and rescue, firefighting, and cargo lifting. They have demonstrated high performance, payload capacity, and versatility in performing different missions in challenging conditions.
'A NEW GENERATION' was the theme for the December 2012 ECO CAMP. Together with 45 boys and staff we had a wonderful camp, and the boys had an amazing time throughout the program. The camp had many highlights: the catering, the accommodation, the camp house and its surroundings, the outdoor activities, and being together in God's presence.
7.[46 53]similarity of inference face matching on angle oriented face recogni...Alexander Decker
This document discusses angle oriented face recognition using discrete cosine transforms. It proposes a face recognition algorithm that extracts local information using angle oriented discrete cosine transforms and normalization techniques. The face matching classification is done using Euclidean distance, Manhattan distance, and cosine distance methods. The algorithm was tested on a database with variable illumination and facial expressions. Angle oriented discrete cosine transforms incorporated neighborhood pixel information to increase reliability of face detection compared to other methods. Recognition rates were higher using face matching methods compared to other approaches.
11.similarity of inference face matching on angle oriented face recognitionAlexander Decker
This document presents research on an angle-oriented face recognition algorithm that uses discrete cosine transforms (DCT) for feature extraction. The proposed algorithm first normalizes input face images by resizing them and rotating them to match the pose of database images. DCT is then used to extract features from local blocks of the images. Various distance measures, including Euclidean distance, Manhattan distance, and cosine similarity, are used to match input image features to those in the database. Experimental results on two face image databases show recognition rates over 90% when using Manhattan distance matching for images rotated clockwise and counterclockwise at different angles. The study demonstrates the effectiveness of the proposed algorithm at recognizing faces with variations in pose and orientation.
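The core of such a pipeline — block DCT features compared under Euclidean, Manhattan, and cosine measures — can be sketched as follows. The coefficient selection and block size are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows are cosine basis vectors)."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    c[0] /= np.sqrt(2)
    return c

def dct2_features(block, keep=8):
    """2-D DCT of an image block; keep the first few coefficients in
    row-major order as a simple feature vector."""
    c = dct_matrix(block.shape[0])
    coeffs = c @ block @ c.T
    return coeffs.ravel()[:keep]

def euclidean(a, b): return float(np.linalg.norm(a - b))
def manhattan(a, b): return float(np.abs(a - b).sum())
def cosine_sim(a, b): return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(4)
block = rng.normal(size=(8, 8))
f1 = dct2_features(block)
f2 = dct2_features(block + rng.normal(scale=0.01, size=(8, 8)))  # near-duplicate
print(euclidean(f1, f2), manhattan(f1, f2), cosine_sim(f1, f2))
```

All three measures agree that the slightly perturbed block is a close match: the two distances are small and the cosine similarity is close to 1.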
Face Recognition based on STWT and DTCWT using two dimensional Q-shift Filters IJERA Editor
Biometrics recognizes a person more effectively than traditional identification methods. In this paper, we propose face recognition based on the Single Tree Wavelet Transform (STWT) and the Dual Tree Complex Wavelet Transform (DTCWT). The face images are preprocessed to enhance image quality and resized. DTCWT and STWT are applied to the face images to extract features. The Euclidean distance is used to compare the features of database images with those of test face images to compute performance parameters. Comparing the two transforms, it is observed that DTCWT gives better results than STWT.
An Illumination Invariant Face Recognition by Selection of DCT CoefficientsCSCJournals
Face recognition is nowadays popular in social networks and smartphones, but it is more difficult for poorly illuminated images. The objective of this work is to create an illumination-invariant face recognition system using the 2D Discrete Cosine Transform (DCT) and Contrast Limited Adaptive Histogram Equalization (CLAHE), a technique also used for enhancing poor-contrast medical images. The proposed method selects 75% to 100% of the DCT coefficients and sets the high-frequency coefficients to zero. It resizes the image based on the selection percentage, then applies the inverse DCT, and finally applies CLAHE to adjust the contrast. The resized images reduce the computational complexity. The resulting illumination-invariant face image is termed the 'En-DCT' image. The Fisherface subspace method is applied to the 'En-DCT' image to extract features, and the matching face image is obtained using cosine similarity. Recognition accuracy is tested on the AR database with 75% to 100% of the DCT coefficients to find the best range. The performance measures recognition rate, verification rate at 1% FAR (False Acceptance Rate), and Equal Error Rate (EER) are computed. The high recognition rates show that the proposed method is efficient for illumination-invariant face recognition.
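The coefficient-selection step of such a pipeline can be sketched as below: keep only a low-frequency block of 2-D DCT coefficients, zero the rest, and invert. The CLAHE contrast adjustment that follows in the paper is omitted here, and the keep fraction is illustrative:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    c[0] /= np.sqrt(2)
    return c

def lowpass_dct(img, keep_frac=0.75):
    """Keep the top-left keep_frac x keep_frac block of 2-D DCT coefficients
    (the low frequencies), zero the rest, and apply the inverse DCT."""
    n = img.shape[0]
    c = dct_matrix(n)
    coeffs = c @ img @ c.T
    m = int(n * keep_frac)
    mask = np.zeros_like(coeffs)
    mask[:m, :m] = 1                     # low-frequency coefficients survive
    return c.T @ (coeffs * mask) @ c     # inverse of an orthonormal transform

img = np.full((16, 16), 100.0)           # constant image: only the DC term ≠ 0
out = lowpass_dct(img, keep_frac=0.75)
print(np.allclose(out, img))             # DC survives the mask → True
```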
HOL, GDCT AND LDCT FOR PEDESTRIAN DETECTIONcsandit
In this paper, we present and analyze different approaches implemented to address the pedestrian detection problem. Histograms of Oriented Laplacian (HOL) is a feature descriptor that aims to highlight objects in digital images; the Discrete Cosine Transform (DCT), in its global (GDCT) and local (LDCT) versions, converts image pixels into frequency coefficients, which are then used as features. We implemented these methods independently, combined them, and fed their outputs into a classifier; the resulting combined classifier proved more efficient in certain cases. The performance of the individual methods and their combination is tested on the most popular pedestrian detection datasets, INRIA and Daimler.
HOL, GDCT AND LDCT FOR PEDESTRIAN DETECTIONcscpconf
The document presents and analyzes different approaches for pedestrian detection, including Histograms of Oriented Laplacian (HOL), Global Discrete Cosine Transform (GDCT), and Local Discrete Cosine Transform (LDCT). HOL is used to highlight objects in images while GDCT and LDCT convert images into frequency coefficients that are then used as characteristics for classification. The performance of these individual methods and their combination is tested on popular pedestrian detection datasets like INRIA and Daimler. Experimental results show the new classifier generated by combining the method outputs proves more efficient in certain cases.
Weighted Performance comparison of DWT and LWT with PCA for Face Image Retrie...cscpconf
This paper compares the performance of face image retrieval systems based on the Discrete Wavelet Transform (DWT) and the Lifting Wavelet Transform (LWT), each combined with Principal Component Analysis (PCA). These techniques are implemented and their performance is investigated using frontal facial images from the ORL database. Although the DWT is effective in representing image features and is suitable for face image retrieval, it still encounters implementation problems, e.g. floating-point operations and decomposition speed. We use the advantages of the lifting scheme, a spatial approach for constructing wavelet filters, which provides a feasible alternative to the problems facing its classical counterpart. The lifting scheme has such intriguing properties as convenient construction, simple structure, integer-to-integer transform, low computational complexity, and flexible adaptivity, revealing its potential in face image retrieval. Compared to PCA alone, LWT with PCA requires less computation, while DWT with PCA gives a higher retrieval rate. The 'sym2' wavelet in particular outperforms all other wavelets tested.
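One level of the 2-D Haar DWT underlying both compared transforms can be sketched directly in numpy; real lifting implementations reorganize these same averaging and differencing operations into predict/update steps:

```python
import numpy as np

def haar_dwt2(img):
    """One level of the 2-D Haar DWT, returning (LL, LH, HL, HH) bands."""
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0   # row averages (low-pass)
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0   # row differences (high-pass)
    ll = (a[0::2] + a[1::2]) / 2.0            # approximation band
    lh = (a[0::2] - a[1::2]) / 2.0
    hl = (d[0::2] + d[1::2]) / 2.0
    hh = (d[0::2] - d[1::2]) / 2.0
    return ll, lh, hl, hh

img = np.arange(64, dtype=float).reshape(8, 8)
ll, lh, hl, hh = haar_dwt2(img)
print(ll.shape)          # → (4, 4)
print(float(ll[0, 0]))   # average of the top-left 2x2 block → 4.5
```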
Gesture Recognition using Principle Component Analysis & Viola-Jones AlgorithmIJMER
Gesture recognition pertains to recognizing meaningful expressions of motion by a human,
involving the hands, arms, face, head, and/or body. It is of utmost importance in designing an intelligent
and efficient human–computer interface. The applications of gesture recognition are manifold, ranging
from sign language through medical rehabilitation to virtual reality. In this paper, we provide a survey on
gesture recognition with particular emphasis on hand gestures and facial expressions. Applications
involving wavelet transform and principal component analysis for face and hand gesture recognition on
digital images are also discussed.
Near Reversible Data Hiding Scheme for images using DCTIJERA Editor
This document presents a near-reversible data hiding scheme for images using discrete cosine transform (DCT). In the proposed scheme, data is embedded in the non-zero AC coefficients of DCT blocks in a way that minimizes modifications to the original coefficients, improving visual quality. During embedding, two mathematical functions are used to modify coefficients by amounts closer to their original values compared to other methods. Experimental results on test images show the proposed scheme achieves better visual quality than existing schemes while maintaining data hiding capacity and reversibility.
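A much-simplified stand-in for this embedding idea — hiding one bit in the parity of the first non-zero AC coefficient of a DCT block — might look like this. The paper's two modification functions are more refined than this parity nudge:

```python
import numpy as np

def embed_bit(coeffs, bit):
    """Embed one bit in the first non-zero AC coefficient of a DCT block
    by forcing its parity (a simplified illustration, not the paper's scheme)."""
    out = coeffs.copy()
    flat = out.ravel()                        # view into out (row-major)
    for i in range(1, flat.size):             # skip the DC coefficient
        c = int(flat[i])
        if c != 0:
            if (abs(c) % 2) != bit:
                c += 1 if c > 0 else -1       # adjust magnitude by one to flip parity
            flat[i] = c
            return out, i
    raise ValueError("no non-zero AC coefficient")

def extract_bit(coeffs, pos):
    return abs(int(coeffs.ravel()[pos])) % 2

block = np.array([[50, 0, 0], [3, 0, 0], [0, 0, 0]])   # toy quantized DCT block
stego, pos = embed_bit(block, 1)
print(extract_bit(stego, pos))  # → 1
```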
EV-SIFT - An Extended Scale Invariant Face Recognition for Plastic Surgery Fa...IJECEIAES
This paper presents a new technique called Entropy-based SIFT (EV-SIFT) for accurate face recognition after plastic surgery. The feature extracts the key points and the volume of the scale-space structure, for which the information rate is determined. Since entropy is a higher-order statistical feature, this makes the features less sensitive to uncertain variations in the face. The EV-SIFT features are fed to a Support Vector Machine for classification. The standard SIFT feature extracts key points based on the contrast of the image, and V-SIFT extracts key points based on the volume of the structure; EV-SIFT provides both contrast and volume information. Thus EV-SIFT performs better than feature extraction based on PCA, standard SIFT, or V-SIFT.
AN ILLUMINATION INVARIANT FACE RECOGNITION USING 2D DISCRETE COSINE TRANSFORM...ijcsit
Automatic face recognition performance is affected by head rotation and tilt, lighting intensity and angle, facial expressions, aging, and partial occlusion of the face by hats, scarves, glasses, etc. In this paper, illumination normalization of face images is done by combining the 2D Discrete Cosine Transform (DCT) and Contrast Limited Adaptive Histogram Equalization (CLAHE). The proposed method selects a certain percentage of DCT coefficients and sets the rest to 0. Then the inverse DCT is applied, followed by a logarithm transform and CLAHE. These steps create an illumination-invariant face image, termed the 'DCT CLAHE' image. The Fisherface subspace method extracts features from the 'DCT CLAHE' image, and features are matched with cosine similarity. The proposed method is tested on the AR database, and performance measures such as recognition rate, verification rate at 1% FAR, and Equal Error Rate are computed. The experimental results show a high recognition rate on the AR database.
Image Registration for Recovering Affine Transformation Using Nelder Mead Sim...CSCJournals
This paper proposes a parallel approach for the Vector Quantization (VQ) problem in image processing. VQ deals with codebook generation from the input training data set and replacement of any arbitrary data with the nearest codevector. Most of the efforts in VQ have been directed towards designing parallel search algorithms for the codebook, and little has hitherto been done in evolving a parallelized procedure to obtain an optimum codebook. This parallel algorithm addresses the problem of designing an optimum codebook using the traditional LBG type of vector quantization algorithm for shared memory systems and for the efficient usage of parallel processors. Using the codebook formed from a training set, any arbitrary input data is replaced with the nearest codevector from the codebook. The effectiveness of the proposed algorithm is indicated.
Fourier mellin transform based face recognitioniaemedu
This document presents a face recognition algorithm based on Fourier Mellin Transform. It begins with an introduction to face recognition and challenges of illumination and pose variations. It then describes extracting illumination invariant features by computing depth maps from input images using a shape from shading algorithm. Fourier Mellin Transform is applied to the depth maps to extract features. Experiments on the ORL database showed the approach achieved 100% recognition with 4 training images and 95.7% recognition with 3 training images, demonstrating robustness to illumination and pose variations.
This document summarizes a research paper on a Fourier Mellin transform based face recognition algorithm. The algorithm extracts illumination invariant features from depth maps of face images obtained through a shape from shading algorithm. These features are transformed using Fourier Mellin transform and classified using k-NN classification based on L1 norm distance. Experiments on the ORL database showed the transformed shape features are robust to illumination and pose variations, achieving better recognition performance compared to other algorithms. The document provides background on challenges in face recognition and reviews existing algorithms before describing the proposed Fourier Mellin transform based approach.
Coherence enhancement diffusion using robust orientation estimationcsandit
In this paper, a new robust orientation estimation for Coherence Enhancement Diffusion (CED) is proposed. In CED, proper scale selection is very important, as the gradient vector at that scale reflects the orientation of the local ridge. For this purpose, a new scheme is proposed in which a pre-calculated orientation, obtained by orientation diffusion, is used to find the correct local scale. Experiments show that the proposed scheme works much better in noisy environments than traditional Coherence Enhancement Diffusion.
Orientation Spectral Resolution Coding for Pattern RecognitionIOSRjournaljce
In pattern recognition, feature description is of great importance. Features can be represented in the spatial domain or a transformed domain; spatial-domain features are a coarser representation, while transformed-domain features are finer and more informative. In transformed-domain representation, features are represented by spectral coding using advanced transformation techniques such as the wavelet transform. However, this feature extraction approach considers only the band coefficients; the orientation variation is not considered. In this paper, the inherent orientation variation within each spectral band is derived, and orientation filtration is applied for effective feature representation. The results show an improvement in recognition accuracy compared to a conventional retrieval system.
Performance Comparison of PCA,DWT-PCA And LWT-PCA for Face Image RetrievalCSEIJJournal
This paper compares the performance of face image retrieval system based on discrete wavelet transforms
and Lifting wavelet transforms with principal component analysis (PCA). These techniques are
implemented and their performances are investigated using frontal facial images from the ORL database.
The Discrete Wavelet Transform is effective in representing image features and is suitable in Face image
retrieval, it still encounters problems especially in implementation; e.g. Floating point operation and
decomposition speed. We use the advantages of lifting scheme, a spatial approach for constructing wavelet
filters, which provides feasible alternative for problems facing its classical counterpart. Lifting scheme has
such intriguing properties as convenient construction, simple structure, integer-to-integer transform, low
computational complexity as well as flexible adaptivity, revealing its potentials in Face image retrieval.
Comparing to PCA and DWT with PCA, Lifting wavelet transform with PCA gives less computation and
DWT-PCA gives high retrieval rate..
Similar to Similarity of inference face matching on angle oriented (20)
Abnormalities of hormones and inflammatory cytokines in women affected with p...Alexander Decker
Women with polycystic ovary syndrome (PCOS) have elevated levels of hormones like luteinizing hormone and testosterone, as well as higher levels of insulin and insulin resistance compared to healthy women. They also have increased levels of inflammatory markers like C-reactive protein, interleukin-6, and leptin. This study found these abnormalities in the hormones and inflammatory cytokines of women with PCOS ages 23-40, indicating that hormone imbalances associated with insulin resistance and elevated inflammatory markers may worsen infertility in women with PCOS.
A usability evaluation framework for b2 c e commerce websitesAlexander Decker
This document presents a framework for evaluating the usability of B2C e-commerce websites. It involves user testing methods like usability testing and interviews to identify usability problems in areas like navigation, design, purchasing processes, and customer service. The framework specifies goals for the evaluation, determines which website aspects to evaluate, and identifies target users. It then describes collecting data through user testing and analyzing the results to identify usability problems and suggest improvements.
A universal model for managing the marketing executives in nigerian banksAlexander Decker
This document discusses a study that aimed to synthesize motivation theories into a universal model for managing marketing executives in Nigerian banks. The study was guided by Maslow and McGregor's theories. A sample of 303 marketing executives was used. The results showed that managers will be most effective at motivating marketing executives if they consider individual needs and create challenging but attainable goals. The emerged model suggests managers should provide job satisfaction by tailoring assignments to abilities and monitoring performance with feedback. This addresses confusion faced by Nigerian bank managers in determining effective motivation strategies.
A unique common fixed point theorems in generalized dAlexander Decker
This document presents definitions and properties related to generalized D*-metric spaces and establishes some common fixed point theorems for contractive type mappings in these spaces. It begins by introducing D*-metric spaces and generalized D*-metric spaces, defines concepts like convergence and Cauchy sequences. It presents lemmas showing the uniqueness of limits in these spaces and the equivalence of different definitions of convergence. The goal of the paper is then stated as obtaining a unique common fixed point theorem for generalized D*-metric spaces.
A trends of salmonella and antibiotic resistanceAlexander Decker
This document provides a review of trends in Salmonella and antibiotic resistance. It begins with an introduction to Salmonella as a facultative anaerobe that causes nontyphoidal salmonellosis. The emergence of antimicrobial-resistant Salmonella is then discussed. The document proceeds to cover the historical perspective and classification of Salmonella, definitions of antimicrobials and antibiotic resistance, and mechanisms of antibiotic resistance in Salmonella including modification or destruction of antimicrobial agents, efflux pumps, modification of antibiotic targets, and decreased membrane permeability. Specific resistance mechanisms are discussed for several classes of antimicrobials.
A transformational generative approach towards understanding al-istifhamAlexander Decker
This document discusses a transformational-generative approach to understanding Al-Istifham, which refers to interrogative sentences in Arabic. It begins with an introduction to the origin and development of Arabic grammar. The paper then explains the theoretical framework of transformational-generative grammar that is used. Basic linguistic concepts and terms related to Arabic grammar are defined. The document analyzes how interrogative sentences in Arabic can be derived and transformed via tools from transformational-generative grammar, categorizing Al-Istifham into linguistic and literary questions.
A time series analysis of the determinants of savings in namibiaAlexander Decker
This document summarizes a study on the determinants of savings in Namibia from 1991 to 2012. It reviews previous literature on savings determinants in developing countries. The study uses time series analysis including unit root tests, cointegration, and error correction models to analyze the relationship between savings and variables like income, inflation, population growth, deposit rates, and financial deepening in Namibia. The results found inflation and income have a positive impact on savings, while population growth negatively impacts savings. Deposit rates and financial deepening were found to have no significant impact. The study reinforces previous work and emphasizes the importance of improving income levels to achieve higher savings rates in Namibia.
A therapy for physical and mental fitness of school childrenAlexander Decker
This document summarizes a study on the importance of exercise in maintaining physical and mental fitness for school children. It discusses how physical and mental fitness are developed through participation in regular physical exercises and cannot be achieved solely through classroom learning. The document outlines different types and components of fitness and argues that developing fitness should be a key objective of education systems. It recommends that schools ensure pupils engage in graded physical activities and exercises to support their overall development.
A theory of efficiency for managing the marketing executives in nigerian banksAlexander Decker
This document summarizes a study examining efficiency in managing marketing executives in Nigerian banks. The study was examined through the lenses of Kaizen theory (continuous improvement) and efficiency theory. A survey of 303 marketing executives from Nigerian banks found that management plays a key role in identifying and implementing efficiency improvements. The document recommends adopting a "3H grand strategy" to improve the heads, hearts, and hands of management and marketing executives by enhancing their knowledge, attitudes, and tools.
This document discusses evaluating the link budget for effective 900MHz GSM communication. It describes the basic parameters needed for a high-level link budget calculation, including transmitter power, antenna gains, path loss, and propagation models. Common propagation models for 900MHz that are described include Okumura model for urban areas and Hata model for urban, suburban, and open areas. Rain attenuation is also incorporated using the updated ITU model to improve communication during rainfall.
A synthetic review of contraceptive supplies in punjabAlexander Decker
This document discusses contraceptive use in Punjab, Pakistan. It begins by providing background on the benefits of family planning and contraceptive use for maternal and child health. It then analyzes contraceptive commodity data from Punjab, finding that use is still low despite efforts to improve access. The document concludes by emphasizing the need for strategies to bridge gaps and meet the unmet need for effective and affordable contraceptive methods and supplies in Punjab in order to improve health outcomes.
A synthesis of taylor’s and fayol’s management approaches for managing market...Alexander Decker
1) The document discusses synthesizing Taylor's scientific management approach and Fayol's process management approach to identify an effective way to manage marketing executives in Nigerian banks.
2) It reviews Taylor's emphasis on efficiency and breaking tasks into small parts, and Fayol's focus on developing general management principles.
3) The study administered a survey to 303 marketing executives in Nigerian banks to test if combining elements of Taylor and Fayol's approaches would help manage their performance through clear roles, accountability, and motivation. Statistical analysis supported combining the two approaches.
A survey paper on sequence pattern mining with incrementalAlexander Decker
This document summarizes four algorithms for sequential pattern mining: GSP, ISM, FreeSpan, and PrefixSpan. GSP is an Apriori-based algorithm that incorporates time constraints. ISM extends SPADE to incrementally update patterns after database changes. FreeSpan uses frequent items to recursively project databases and grow subsequences. PrefixSpan also uses projection but claims to not require candidate generation. It recursively projects databases based on short prefix patterns. The document concludes by stating the goal was to find an efficient scheme for extracting sequential patterns from transactional datasets.
A survey on live virtual machine migrations and its techniquesAlexander Decker
This document summarizes several techniques for live virtual machine migration in cloud computing. It discusses works that have proposed affinity-aware migration models to improve resource utilization, energy efficient migration approaches using storage migration and live VM migration, and a dynamic consolidation technique using migration control to avoid unnecessary migrations. The document also summarizes works that have designed methods to minimize migration downtime and network traffic, proposed a resource reservation framework for efficient migration of multiple VMs, and addressed real-time issues in live migration. Finally, it provides a table summarizing the techniques, tools used, and potential future work or gaps identified for each discussed work.
A survey on data mining and analysis in hadoop and mongo dbAlexander Decker
This document discusses data mining of big data using Hadoop and MongoDB. It provides an overview of Hadoop and MongoDB and their uses in big data analysis. Specifically, it proposes using Hadoop for distributed processing and MongoDB for data storage and input. The document reviews several related works that discuss big data analysis using these tools, as well as their capabilities for scalable data storage and mining. It aims to improve computational time and fault tolerance for big data analysis by mining data stored in Hadoop using MongoDB and MapReduce.
1. The document discusses several challenges for integrating media with cloud computing including media content convergence, scalability and expandability, finding appropriate applications, and reliability.
2. Media content convergence challenges include dealing with the heterogeneity of media types, services, networks, devices, and quality of service requirements as well as integrating technologies used by media providers and consumers.
3. Scalability and expandability challenges involve adapting to the increasing volume of media content and being able to support new media formats and outlets over time.
This document surveys trust architectures that leverage provenance in wireless sensor networks. It begins with background on provenance, which refers to the documented history or derivation of data. Provenance can be used to assess trust by providing metadata about how data was processed. The document then discusses challenges for using provenance to establish trust in wireless sensor networks, which have constraints on energy and computation. Finally, it provides background on trust, which is the subjective probability that a node will behave dependably. Trust architectures need to be lightweight to account for the constraints of wireless sensor networks.
This document discusses private equity investments in Kenya. It provides background on private equity and discusses trends in various regions. The objectives of the study discussed are to establish the extent of private equity adoption in Kenya, identify common forms of private equity utilized, and determine typical exit strategies. Private equity can involve venture capital, leveraged buyouts, or mezzanine financing. Exits allow recycling of capital into new opportunities. The document provides context on private equity globally and in developing markets like Africa to frame the goals of the study.
This document discusses a study that analyzes the financial health of the Indian logistics industry from 2005-2012 using Altman's Z-score model. The study finds that the average Z-score for selected logistics firms was in the healthy to very healthy range during the study period. The average Z-score increased from 2006 to 2010 when the Indian economy was hit by the global recession, indicating the overall performance of the Indian logistics industry was good. The document reviews previous literature on measuring financial performance and distress using ratios and Z-scores, and outlines the objectives and methodology used in the current study.
The Microsoft 365 Migration Tutorial For Beginner.pptxoperationspcvita
This presentation will help you understand the power of Microsoft 365. However, we have mentioned every productivity app included in Office 365. Additionally, we have suggested the migration situation related to Office 365 and how we can help you.
You can also read: https://www.systoolsgroup.com/updates/office-365-tenant-to-tenant-migration-step-by-step-complete-guide/
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
Discover top-tier mobile app development services, offering innovative solutions for iOS and Android. Enhance your business with custom, user-friendly mobile applications.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/how-axelera-ai-uses-digital-compute-in-memory-to-deliver-fast-and-energy-efficient-computer-vision-a-presentation-from-axelera-ai/
Bram Verhoef, Head of Machine Learning at Axelera AI, presents the “How Axelera AI Uses Digital Compute-in-memory to Deliver Fast and Energy-efficient Computer Vision” tutorial at the May 2024 Embedded Vision Summit.
As artificial intelligence inference transitions from cloud environments to edge locations, computer vision applications achieve heightened responsiveness, reliability and privacy. This migration, however, introduces the challenge of operating within the stringent confines of resource constraints typical at the edge, including small form factors, low energy budgets and diminished memory and computational capacities. Axelera AI addresses these challenges through an innovative approach of performing digital computations within memory itself. This technique facilitates the realization of high-performance, energy-efficient and cost-effective computer vision capabilities at the thin and thick edge, extending the frontier of what is achievable with current technologies.
In this presentation, Verhoef unveils his company’s pioneering chip technology and demonstrates its capacity to deliver exceptional frames-per-second performance across a range of standard computer vision networks typical of applications in security, surveillance and the industrial sector. This shows that advanced computer vision can be accessible and efficient, even at the very edge of our technological ecosystem.
Freshworks Rethinks NoSQL for Rapid Scaling & Cost-EfficiencyScyllaDB
Freshworks creates AI-boosted business software that helps employees work more efficiently and effectively. Managing data across multiple RDBMS and NoSQL databases was already a challenge at their current scale. To prepare for 10X growth, they knew it was time to rethink their database strategy. Learn how they architected a solution that would simplify scaling while keeping costs under control.
Digital Banking in the Cloud: How Citizens Bank Unlocked Their MainframePrecisely
Inconsistent user experience and siloed data, high costs, and changing customer expectations – Citizens Bank was experiencing these challenges while it was attempting to deliver a superior digital banking experience for its clients. Its core banking applications run on the mainframe and Citizens was using legacy utilities to get the critical mainframe data to feed customer-facing channels, like call centers, web, and mobile. Ultimately, this led to higher operating costs (MIPS), delayed response times, and longer time to market.
Ever-changing customer expectations demand more modern digital experiences, and the bank needed to find a solution that could provide real-time data to its customer channels with low latency and operating costs. Join this session to learn how Citizens is leveraging Precisely to replicate mainframe data to its customer channels and deliver on their “modern digital bank” experiences.
Northern Engraving | Nameplate Manufacturing Process - 2024Northern Engraving
Manufacturing custom quality metal nameplates and badges involves several standard operations. Processes include sheet prep, lithography, screening, coating, punch press and inspection. All decoration is completed in the flat sheet with adhesive and tooling operations following. The possibilities for creating unique durable nameplates are endless. How will you create your brand identity? We can help!
[OReilly Superstream] Occupy the Space: A grassroots guide to engineering (an...Jason Yip
The typical problem in product engineering is not bad strategy, so much as “no strategy”. This leads to confusion, lack of motivation, and incoherent action. The next time you look for a strategy and find an empty space, instead of waiting for it to be filled, I will show you how to fill it in yourself. If you’re wrong, it forces a correction. If you’re right, it helps create focus. I’ll share how I’ve approached this in the past, both what works and lessons for what didn’t work so well.
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
zkStudyClub - LatticeFold: A Lattice-based Folding Scheme and its Application...Alex Pruden
Folding is a recent technique for building efficient recursive SNARKs. Several elegant folding protocols have been proposed, such as Nova, Supernova, Hypernova, Protostar, and others. However, all of them rely on an additively homomorphic commitment scheme based on discrete log, and are therefore not post-quantum secure. In this work we present LatticeFold, the first lattice-based folding protocol based on the Module SIS problem. This folding protocol naturally leads to an efficient recursive lattice-based SNARK and an efficient PCD scheme. LatticeFold supports folding low-degree relations, such as R1CS, as well as high-degree relations, such as CCS. The key challenge is to construct a secure folding protocol that works with the Ajtai commitment scheme. The difficulty, is ensuring that extracted witnesses are low norm through many rounds of folding. We present a novel technique using the sumcheck protocol to ensure that extracted witnesses are always low norm no matter how many rounds of folding are used. Our evaluation of the final proof system suggests that it is as performant as Hypernova, while providing post-quantum security.
Paper Link: https://eprint.iacr.org/2024/257
Monitoring and Managing Anomaly Detection on OpenShift.pdfTosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
Introduction of Cybersecurity with OSS at Code Europe 2024Hiroshi SHIBATA
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
How information systems are built or acquired puts information, which is what they should be about, in a secondary place. Our language adapted accordingly, and we no longer talk about information systems but applications. Applications evolved in a way to break data into diverse fragments, tightly coupled with applications and expensive to integrate. The result is technical debt, which is re-paid by taking even bigger "loans", resulting in an ever-increasing technical debt. Software engineering and procurement practices work in sync with market forces to maintain this trend. This talk demonstrates how natural this situation is. The question is: can something be done to reverse the trend?
Similarity of inference face matching on angle oriented
Computer Engineering and Intelligent Systems www.iiste.org
ISSN 2222-1719 (Paper) ISSN 2222-2863 (Online)
Vol 3, No.2, 2012
Similarity of Inference Face Matching On Angle Oriented
Face Recognition
R.N.V. Jagan Mohan1, R. Subba Rao2 and Dr. K. Raja Sekhara Rao3
1. Research Scholar, Acharya Nagarjuna University, Mobile: 91-9848957141, Email: mohanrnvj@gmail.com
2. Shri Vishnu Engineering College for Women, Bhimavaram-534202, Andhra Pradesh, India, rsr_vishnu@rediffmail.com
3. Principal, K.L. University, Vaddeswaram, Guntur-522510, email: rajasekhar.kurra@klce.ac.in
Abstract: Face recognition is a well-known image processing technique that has been used in many
applications such as law enforcement security and biometric systems. In this paper a complete face
recognition algorithm is proposed. In the proposed algorithm the local information is extracted using
angle oriented discrete cosine transforms, and certain normalization techniques are invoked. To increase
the reliability of the face detection process, neighborhood pixel information is incorporated into the
proposed method. This study also analyzes and compares the results obtained from the proposed angle
oriented face recognition with a threshold based face detector, to show the level of robustness gained by
using texture features in the proposed face detector. It was verified that face recognition based on texture
features can lead to a more efficient and reliable face detection method compared with the KLT
(Karhunen-Loeve Transform) based threshold face detector.
Keywords: Angle Oriented, Euclidean Distance, Face Recognition, Feature Extraction, Image Texture Features.
Introduction
Many authors have discussed face recognition in terms of comparing a human face with a database and
identifying the features of the image. Face recognition has received considerable attention over the past
two decades, where variation caused by illumination is the most significant factor that alters the
appearance of a face [15]. The database of the system consists of individual facial features along with
their geometrical relations, and the facial features of the input are compared with this database. If a match
is found, the face of the person is said to be recognized. In this process we consider the feature extraction
capabilities of the discrete cosine transform (DCT) and invoke certain normalization techniques that
increase its robustness for face recognition.
Face recognition falls into two main categories (Chellappa et al., 1995 [5]): (i) feature-based and
(ii) holistic. The feature-based approach to face recognition relies on detection and characterization of
individual facial features such as the eyes, nose and mouth, and their geometrical relationships. The
holistic approach, on the other hand, involves encoding of the entire facial image. Earlier works on face
recognition, as discussed by Ziad M. Hafed and Martin Levine, 2001 [17], and Ting Shan et al., 2006 [14],
are considered. Alaa Y. Taqa and Hamid A. Jalab, 2010 [1, 2], proposed color-based and texture-based
skin detector approaches.
In this paper we discuss a new computational approach: Section 1 describes converting the input image to
the database image using an angle orientation technique. Section 2 deals with the mathematical definition
of the discrete cosine transform and its relationship to the KLT. The basics of a face recognition system
using the DCT, including the details of the proposed algorithm and a discussion of the various parameters
that affect its performance, are covered in Section 3. The experimental results of the proposed system are
highlighted in Section 4. The conclusion and future perspectives are given in Section 5.
1. Angle Orientation
First the input image is selected and compared with the database image. If the input image size is not
equal to the database size, the input image is resized to match the size of the database image. We then
compare the pose of the image in both the input and database images. If the input image is not at an angle of
90°, we cannot compare the images directly; some authors, e.g. Ziad M. Hafed and Martin D. Levine (2001)
[17], used eye coordinate techniques to recognize such an image. In the present approach one can identify
the feature images of the faces even though they are angle oriented. If the input image angle is not 90°,
the image is rotated to 90° and then normalization techniques, such as geometric and illumination
normalization, are applied. Recognizing an image using the rotational axis makes the face easy to
recognize. When the input image rotates from the horizontal axis to the vertical axis, the face rotates
anti-clockwise; once the face appears in the same pose as the database image, the object is recognized.
Similarly, when the input image rotates from the vertical axis to the horizontal axis, the face rotates
clockwise, and once the face appears in the same pose as the database image, the object is recognized.
Therefore, if the input image is angle oriented, the pose is changed or the angle is altered using the
rotational axis before comparison.
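As a rough illustration of this normalization step (this is a sketch, not code from the paper; the function names and the nearest-neighbour resampling are simplifying assumptions, and only multiples of 90° are handled), the resize-and-rotate logic might look like:

```python
import numpy as np

def normalize_pose(image: np.ndarray, angle_deg: float) -> np.ndarray:
    """Rotate an input face image back to the upright (90-degree)
    database orientation. Arbitrary angles would need interpolation
    (e.g. scipy.ndimage.rotate); here only quarter turns are handled."""
    # np.rot90 rotates anti-clockwise, matching the horizontal-to-vertical
    # rotation described in the text.
    quarter_turns = int(round(angle_deg / 90.0)) % 4
    return np.rot90(image, k=quarter_turns)

def resize_to_match(image: np.ndarray, shape: tuple) -> np.ndarray:
    """Nearest-neighbour resize so the input matches the database size."""
    rows = np.linspace(0, image.shape[0] - 1, shape[0]).astype(int)
    cols = np.linspace(0, image.shape[1] - 1, shape[1]).astype(int)
    return image[np.ix_(rows, cols)]
```

A real system would use proper interpolation for both steps; the point is simply that both input and database faces are brought to a common size and orientation before feature extraction.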
The two types of angle rotation, clockwise and anti-clockwise, at different angles are shown in the images
of Figure 1.1 and Figure 1.2.
Figure 1.1: Angle Rotation – Anti-Clockwise Direction
Figure 1.2: Angle Rotation – Clockwise Direction
2. Discrete Cosine Transform
The discrete cosine transform (DCT) has been used as a feature extraction step in various studies on face
recognition [8, 9, 11, 12, 17]. Until now, discrete cosine transforms have been performed either in a
holistic appearance-based sense [7], or in a local appearance-based sense that ignores the spatial
information to some extent during the classification step, by feeding local DCT coefficients to some kind
of neural network or by modelling them with statistical tools [8, 9, 11, 12, 17].
Ahmed, Natarajan, and Rao (1974) first introduced the discrete cosine transform in the early seventies.
Ever since, the DCT has grown in popularity, and several variants have been proposed (Rao and Yip,
1990) [10]. In particular, the DCT was categorized by Wang (1984) [16] into four slightly different
transformations named DCT-I, DCT-II, DCT-III, and DCT-IV. Of these four classes, in this paper we are
concerned with the DCT-II suggested by Wang:
y(k) = w(k) Σ_{n=1..N} x(n) cos( π(2n−1)(k−1) / (2N) ),  k = 1, 2, …, N   (2.1.1)

where

w(k) = 1/√N for k = 1, and w(k) = √(2/N) for 2 ≤ k ≤ N.   (2.1.2)

N is the length of x, and x and y are of the same size. If x is a matrix, the DCT transforms its columns.
The series is indexed from n = 1 and k = 1 instead of the usual n = 0 and k = 0 because the vectors run
from 1 to N instead of 0 to N−1. Using formulae (2.1.1) and (2.1.2) we find the feature vectors of an input
sequence using the discrete cosine transform.
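For concreteness, the DCT-II of formulae (2.1.1) and (2.1.2) can be implemented directly; this is a minimal sketch (the 1-based mathematical indices are mapped to 0-based arrays, and the function name is my own):

```python
import numpy as np

def dct2_coefficients(x: np.ndarray) -> np.ndarray:
    """Orthonormal DCT-II of a 1-D signal, following Eqs. (2.1.1)-(2.1.2):
    y(k) = w(k) * sum_{n=1..N} x(n) * cos(pi*(2n-1)*(k-1) / (2N))."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    n = np.arange(1, N + 1)                      # n = 1..N as in the paper
    y = np.empty(N)
    for k in range(1, N + 1):                    # k = 1..N
        y[k - 1] = np.sum(x * np.cos(np.pi * (2 * n - 1) * (k - 1) / (2 * N)))
    w = np.full(N, np.sqrt(2.0 / N))             # w(k) = sqrt(2/N), k >= 2
    w[0] = 1.0 / np.sqrt(N)                      # w(1) = 1/sqrt(N)
    return w * y
```

With this normalization the transform is orthonormal, so a constant signal concentrates all of its energy into the first (DC) coefficient, which is what makes truncated DCT coefficients usable as compact feature vectors.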
2.1. Similarity Matching Methods
The main objective of similarity measures is to define a value that allows the comparison of feature
vectors (reduced vectors in discrete cosine transform frameworks). With such a measure, a new feature
vector can be identified by searching for the most similar vector in the database. This is the well-known
nearest-neighbor method. One way to define similarity is to use a measure of distance, d(x, y), in which
the similarity between vectors, S(x, y), is inverse to the distance measure. The distance measure
(Euclidean) and similarity measure (cosine) are shown in the next sub-sections.
Euclidean Distance
D(x, y) = √((x − y)ᵀ(x − y))   (4.0.1)
Cosine Similarity
S(x, y) = cos(x, y) = xᵀy / (‖x‖ · ‖y‖)   (4.0.2)
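The two measures can be written down directly from (4.0.1) and (4.0.2). The following minimal Python sketch is illustrative and not taken from the paper:

```python
import numpy as np


def euclidean_distance(x, y):
    """D(x, y) = sqrt((x - y)^T (x - y)) -- eq. (4.0.1); smaller means more similar."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(np.sqrt(d @ d))


def cosine_similarity(x, y):
    """S(x, y) = x^T y / (||x|| ||y||) -- eq. (4.0.2); larger means more similar."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))
```

Note the opposite orientations: under nearest-neighbor search one minimizes the Euclidean distance but maximizes the cosine similarity.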
2.2. Relationship with KLT
The Karhunen-Loeve Transform (KLT) is a unitary transform that diagonalizes the covariance or the correlation
matrix of a discrete random sequence. It is also considered an optimal transform among all discrete
transforms based on a number of criteria. It is, however, used infrequently because it is dependent on the
statistics of the sequence, i.e., when the statistics change, so does the KLT. Because of this signal
dependence, it generally has no fast algorithm. Other discrete transforms, such as the cosine transform
(DCT), even though suboptimal, have been extremely popular in video coding. The principal reasons for the
heavy usage of the DCT are that it is signal independent and that it has fast algorithms resulting in
efficient implementations. In spite of this, the KLT has been used as a benchmark in evaluating the
performance of other transforms. Furthermore, the DCT closely approximates the compact representation
ability of the KLT, which makes it a very useful tool for signal representation both in terms of
information packing and in terms of computational complexity, due to its data-independent nature.
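The energy-compaction property described above can be demonstrated numerically: for a smooth, highly correlated signal, a handful of low-order DCT coefficients carry nearly all of the energy, which is why the DCT approximates the KLT of such sources so well. The signal below is an arbitrary illustrative choice:

```python
import numpy as np
from scipy.fftpack import dct

# A smooth, highly correlated test signal (arbitrary illustrative choice).
x = np.cos(np.linspace(0.0, np.pi, 64))
y = dct(x, type=2, norm='ortho')  # orthonormal DCT-II preserves total energy

# Fraction of total signal energy captured by the 8 largest coefficients.
sorted_energy = np.sort(y ** 2)[::-1]
frac_top8 = sorted_energy[:8].sum() / sorted_energy.sum()
```

For this signal, `frac_top8` is very close to 1: discarding all but a few coefficients loses almost no information, which is the basis of DCT feature reduction.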
3. Basic Algorithm for Angle Oriented Face Recognition using DCT
The basic algorithm for Angle Oriented Face Recognition discussed in this paper is depicted in figure 3.1.
The algorithm involves both face normalization and recognition. Matthew Turk and Alex Pentland [15]
expanded the idea of face recognition. As figure 3.1 shows, the system receives an input image
of size N x N and compares it with the size of the database image; if the two sizes are not equal,
the input image is resized. While implementing an image-processing solution, the selection of
suitable illumination is a crucial element in determining the quality of the captured images and can have a
large effect on the subsequent evaluation of the image. If the pose of the selected image requires rotation
to match the database image, the face is rotated by an angle θ until it matches the database image. The
rotation of the image may be bidirectional, clockwise or anti-clockwise, depending on the selected pose of
the image.
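The normalization step just described (resize to the database size, then rotate by θ) might be sketched as follows. The nearest-neighbor resize and the use of `scipy.ndimage.rotate` are implementation choices of this sketch, not specified by the paper:

```python
import numpy as np
from scipy.ndimage import rotate  # one possible rotation routine (sketch choice)


def normalize_pose(img, db_shape, theta):
    """Match the database image size, then rotate the face by theta degrees.

    The nearest-neighbor resize is a deliberate simplification; positive
    theta rotates anti-clockwise under SciPy's convention (an assumption,
    since the paper does not fix a sign convention).
    """
    if img.shape != db_shape:
        # pick source rows/columns at the scaled positions (nearest neighbor)
        rows = (np.arange(db_shape[0]) * img.shape[0] / db_shape[0]).astype(int)
        cols = (np.arange(db_shape[1]) * img.shape[1] / db_shape[1]).astype(int)
        img = img[np.ix_(rows, cols)]
    # reshape=False keeps the N x N frame so features remain comparable
    return rotate(img, theta, reshape=False, mode='nearest')
```

In a matching loop, θ would be swept over candidate angles in both directions until the rotated face best agrees with the stored database image.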
Once a normalized face is obtained, it can be compared with other faces under the same nominal size,
orientation, position, and illumination conditions. This comparison is based on features extracted using
the DCT. The input images are divided into N x N blocks to define the local regions of processing. The N x N
two-dimensional Discrete Cosine Transform (DCT) is used to transform the data into the frequency
domain. Thereafter, statistical operators that calculate various functions of spatial frequency in each
block are used to produce block-level DCT coefficients.
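A minimal version of this block-level feature extraction could look like the following. Keeping only each block's DC coefficient is an illustrative simplification, since the paper does not specify its statistical operators in detail:

```python
import numpy as np
from scipy.fftpack import dct


def block_dct_features(img, block=8):
    """Divide the image into block x block regions and apply the 2-D DCT
    to each, as two passes of the 1-D DCT-II along columns then rows.

    Keeping only each block's DC coefficient is an illustrative choice;
    the paper's statistical operators over spatial frequencies are not
    specified in detail.
    """
    h, w = img.shape
    feats = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            blk = img[r:r + block, c:c + block]
            coeffs = dct(dct(blk, axis=0, norm='ortho'), axis=1, norm='ortho')
            feats.append(coeffs[0, 0])  # DC term = block mean * block size
    return np.array(feats)
```

Concatenating the per-block values yields one feature vector per face, which is what the classifier in the next step compares.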
To recognize a particular input image or face, the system compares the image's feature vector to the feature
vectors of the database faces using a Euclidean Distance nearest-neighbor classifier [6] (Duda and Hart, 1973).
After obtaining the Euclidean distances for the N x N matrix, the averages of each
column of the matrix are found, and then the average of all these averages; if the overall average is
negative, we may say there is a match between the input image and the database image.
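The nearest-neighbor comparison itself can be sketched as below. The optional acceptance threshold is a hypothetical addition of this sketch, distinct from the paper's column-averaging decision rule:

```python
import numpy as np


def recognize(probe, db_feats, threshold=None):
    """1-NN matching under Euclidean distance (Duda and Hart, 1973).

    Returns (index of the closest database vector, that distance). The
    optional acceptance threshold is a hypothetical extra; the paper
    instead averages a distance matrix to decide on a match.
    """
    dists = np.linalg.norm(db_feats - probe, axis=1)
    best = int(np.argmin(dists))
    if threshold is not None and dists[best] > threshold:
        return -1, float(dists[best])  # reject: no sufficiently close match
    return best, float(dists[best])
```

Here `db_feats` is a 2-D array with one feature vector per row, as produced by the block-level DCT extraction.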
Figure 3.1: Angle Oriented Face Recognition using DCT
4.0. Experimental Results
4.1. Clockwise Rotation
The experimental results are calculated for various angles θ in the clockwise direction using the two
methods, DCT with Euclidean Distance and DCT with Cosine Similarity. The mean recognition values of the
two methods are measured. Out of a sample of 13 observations, 12 are recognized. The percentage of
recognition for DCT with Euclidean Distance is 92.31%, while that for DCT with Cosine Similarity is 80%.
Figure 4.1.2: Graph for Clockwise Angle Oriented Face Recognition
Using DCT with Euclidean Distance and Cosine Similarity
The obtained data are presented in Figure 4.1.2, where the red line indicates the recognition level
for KLT and the blue line that for DCT. It can be observed that the recognition level of the input image
under DCT is very high. It is also noticed that, as the sample size increases, the recognition level of
DCT also increases in comparison with KLT.
4.2. Anti-Clockwise Rotation
The same methodology as for clockwise rotation is followed for anti-clockwise rotation. The
mean recognition values of DCT with Euclidean Distance and DCT with Cosine Similarity are calculated for
several values of θ in the anti-clockwise direction. Here too, DCT with Euclidean Distance shows a higher
reliability of recognition: its percentage of recognition is 92.30%, whereas that of DCT with Cosine
Similarity is 85.46%. The results are shown graphically in Figure 4.2.2; the rapid decrease in the blue
line for DCT indicates the high reliability of recognition of the input image.
Figure 4.2.2: Graph for Anti-Clockwise Angle Oriented Face
Recognition using Euclidean Distance and Cosine Similarity
4.3. Comparison between DCT and KLT
The DCT and KLT techniques are tested under a standard execution environment on synthesized data of the
students of Sri Vishnu Educational Society. The percentage recognition level for both methods is shown in
Table 4.3.1. A phenomenal growth in DCT reliability is observed compared with KLT.
The recognition performance over 10,000 records for both methods is given in Figure 4.3.2. The graph
clearly shows that the reliability of DCT increases steadily relative to KLT as the number of records
grows.
S. No.  No. of records  Performance in DCT (%)  Performance in KLT (%)
1       1000            91.46                   65.42
2       2000            92.01                   64.23
3       3000            93.25                   52.15
4       4000            94.12                   54.12
5       5000            94.62                   53.10
6       6000            96.50                   52.63
7       7000            97.23                   51.71
8       8000            97.56                   50.46
9       9000            98.46                   50.04
10      10000           98.89                   54.13
Table 4.3.1: Performance Records in DCT and KLT
Figure 4.3.2: Bar Chart for Performance in DCT and KLT
5. Conclusion and Perspectives
A holistic approach to face recognition is used, which encodes the entire facial image. An
angle-oriented algorithm, in which the face can be rotated in either the clockwise or anti-clockwise
direction, is proposed and successfully implemented, as borne out by the experimental results. The
algorithm is proposed with the mean values of the Euclidean classifiers. It is shown that the proposed
angle-oriented discrete cosine transform increases the reliability of face detection when compared with
the KLT.
This approach has applications in intrusion detection and in new technologies such as biometric systems.
The authors view the magnitude of the recognition level of an image as a random variable that will follow
some probability distribution.
References:
[1]. Alaa Y. Taqa, Hamid A. Jalab, “Increasing the Reliability of Fuzzy Inference System Skin Detector”,
American Journal of Applied Sciences, 7(8):1129-1138, 2010.
[2]. Alaa Y. Taqa, Hamid A. Jalab, “Increasing the reliability of skin detectors”, Scientific Research and
Essays, Vol. 5(17), pp. 2480-2490, 4 September 2010.
[3]. Almas Anjum, M., and Yunus Javed, M. “Face Images Feature Extraction Analysis for Recognition
in Frequency Domain” Proc. of the 6th WSEAS Int. Conf. on Signal Processing, Computational Geometry
& Artificial Vision, Elounda, Greece, 2006.
[4]. Annadurai, S., and Saradha, A. “Discrete Cosine Transform Based Face Recognition Using Linear
Discriminate Analysis” Proceedings of International Conference on Intelligent Knowledge Systems, 2004.
[5]. Chellappa, R., Wilson, C., and Sirohey, S. 1995. Human and machine recognition of faces: A survey.
In Proc. IEEE, 83(5):705-740.
[6]. Duda, R.O., and Hart, P.E. 1973 Pattern Classification and Scene Analysis. Wiley: New York, NY.
[7]. Ekenel, H.K., and Stiefelhagen, R. “Local Appearance based Face Recognition Using Discrete Cosine
Transform”, EUSIPCO 2005, Antalya, Turkey, 2005.
[8]. Nefian, A. “A Hidden Markov Model-based Approach for Face Detection and Recognition“, PhD
thesis, Georgia Institute of Technology, 1999.
[9]. Pan, Z., and Bolouri, H. “High speed face recognition based on discrete cosine transforms and neural
networks”, Technical report, University of Hertfordshire, UK, 1999.
[10]. Rao, K. and Yip, P. 1990. Discrete Cosine Transform-Algorithm, Advantages, Applications.
Academic: New York, N.Y.
[11]. Sanderson, C., and Paliwal, K.K. “Features for robust face-based identity verification”, Signal
Processing, 83(5), 2003.
[12]. Scott, W.L. “Block-level Discrete Cosine Transform Coefficients for Autonomic Face Recognition”,
PhD thesis, Louisiana State University, USA, May 2003.
[13]. Shin, D., Lee, H.S., and Kim, D. “Illumination-robust face recognition using ridge regressive
bilinear models”, Pattern Recognition Letters, vol. 29, no. 1, pp. 49-58, 2008.
[14]. Ting Shan, Brian C. Lovell, and Shaokang Chen. “Face Recognition to Head Pose from One Sample
Image”, Proceedings of the 18th International Conference on Pattern Recognition, 2006.
[15]. Turk, M., and Pentland, A. “Eigenfaces for recognition”, Journal of Cognitive Neuroscience,
vol. 3, no. 1, pp. 71-86, 1991.
[16]. Wang, Z. 1984. Fast algorithms for the discrete W transform and for the Discrete Fourier Transform.
IEEE Trans. Acoust., Speech, and Signal Proc., 32:803-816.
[17]. Ziad Hafed, M., and Martin Levine, “Face Recognition using Discrete Cosine Transform”,
International Journal of Computer Vision 43(3), 167-188, 2001.