Enhancement of Error Correction in Quantum Cryptography BB84 Protocol<br />A. Goneid, S. El-Kassas, M. El-Ashmawy and A. Abbas<br />Computer Science & Engineering Dept., the American University in Cairo, Cairo, Egypt<br />Abstract <br />The Quantum Cryptography BB84 protocol has proved to be quite successful in the process of Quantum Key Distribution (QKD). The protocol is used to create a secure key that is shared between two communicating parties. A major phase in this protocol is the correction of errors and discrepancies between the keys of the sender and the receiver. The present work is concerned with enhancing the error correction phase of this protocol so that keys are generated more efficiently, i.e., so that longer shared keys are created in less time. For this purpose, we introduce an algorithm based on the use of a memory structure between rounds of the error correction phase. The advantages of the algorithm presented in this paper are established using simulated experiments. A performance evaluation parameter is computed as the ratio of the key length to the total number of operations needed to reach it; this value is to be maximized for best performance. The experimental results have been compared through a parameterization model assuming that the relative increase in the evaluation parameter is linear in the block size increment. 
Parameters deduced from the experiments show that the performance of the enhanced algorithm is better by a factor of approximately 1.7 – 3.1 relative to the standard algorithm for initial block sizes ≥ 0.5% of the key length, and by a factor of 1.15 – 1.30 for initial block sizes < 0.5% of the key length.<br />Keywords: Quantum Cryptography, Quantum Algorithms, Quantum Computing, Information Theory<br />New Convergence and Performance Measures for Blind Source Separation Algorithms<br />Amr Goneid<br />Computer Science & Engineering Dept., the American University in Cairo, Cairo, Egypt<br />Abeer Kamel and Ibrahim Farag<br />Faculty of Computers and Information, Cairo University, Cairo, Egypt<br />Abstract <br />Neural learning algorithms developed for blind separation of mixed source signals give rise to a Global Separating-Mixing (GSM) matrix that can be used to measure the performance of the unmixing system. In the case of the instantaneous linear noiseless mixing model, we consider the GSM matrix as a transformation operator and show that it is equivalent to a combined stretching and rotation in the signal space. The extent of rotation is obtained using a polar decomposition method and can be taken as a measure of convergence to the problem solution. We also propose a new performance index (E3) that can be used to measure the performance of algorithms for blind separation problems. The E3 index is more precise than the commonly used E1 and E2 indices and is normalized to the interval [0, 1]. Experiments using artificially generated super-Gaussian Laplacian signals have been performed using a fast ICA algorithm over a wide range of numbers of mixed sources. 
Using the proposed E3 measure, we present experimental results on the dependence of algorithm performance on the number of mixed signals.<br />Keywords: Independent Component Analysis, Neural Computing, Machine Learning<br />Face Detection and Count Estimates in Surveillance Color Images<br />Mona F.M. Mursi1,2, Ghazy M.R. Assassa1,3,<br />Abeer Al-Humaimeedy1,2, Khaled Alghathbar1,4<br />1 Center of Excellence in Information Assurance (CoEIA),<br />2 Department of Information Technology <br />3 Department of Computer Science <br />4 Department of Information Systems<br />College of Computer and Information Sciences<br />King Saud University<br />Kingdom of Saudi Arabia<br />{monmursi, gassassa, humaimeedy, ghathbar}@coeia.edu.sa<br /> <br />Abstract <br />Face detection has applications in many areas, an important example of which is security-related surveillance in confined areas. Once an image is analyzed for face detection, the faces in the image are tallied. In this paper, color segmentation is used as the first step in the face detection process, followed by grouping likely face regions into clusters of connected pixels. Median filtering is then performed to eliminate small clusters, and the resulting blobs are matched against a face pattern (ellipse) subject to constraints for rejecting non-face blobs. The system was implemented and validated via numerical test cases for images with different formats, sizes, numbers of people, and degrees of background complexity. Numerical results suggest that the proposed approach correctly detects and counts non-dark faces against backgrounds of reasonable complexity. <br />Keywords: Face detection, color segmentation, face pattern (ellipse).<br />Application of a Rough Set in Web Log Mining<br />Abdel-Badeeh M. Salem, Wael H. 
Khalifa<br />Faculty of Computer and Information Sciences, Ain Shams University, <br />Abbassia, Cairo, Egypt<br />absalem@asunet.shams.edu.eg, wael@yalla.com<br />Abstract <br />Web Usage Mining is a branch of web mining concerned with the extraction of interesting patterns from logs generated by users' navigation across web sites. A rough set is a formal approximation of a crisp set that gives the lower and upper approximations of the original set. In this paper we present an experiment to extract association rules from web usage data using rough sets. The proposed methodology is applied to two data sets: one from the website of the Environmental Protection Agency (EPA) and the other from the San Diego Supercomputer Center (SDSC). The data sets are first cleaned of automatic requests, entries generated by bots, and entries that result from errors. The second step is session and transaction detection. The third step is decision table generation, followed by applying the rough set algorithm to the decision tables to generate association rules. Finally, the association rules are analyzed.<br />Keywords: Machine Learning, Rough Sets, Web Usage Mining, Web Log Mining, Association Rules, Knowledge<br />A Decision Support System for the Egyptian Cabinet Using a Hybrid of Mining Algorithms<br />Madeeh El-Gedawy<br />Senior System Analyst, <br />The Egyptian Cabinet, Information and Decision Support Center (IDSC)<br />madeehnayer@idsc.net.eg <br />Abd El-Fatah Hegazy<br />Information Systems Department, <br />Arab Academy of Science and Technology and Maritime Transport <br />Post-Graduate Division<br />ahegazy@aast.edu<br />Amr Badr <br />Department of Computer Science, <br />Faculty of Computers and Information Systems, Cairo University<br />ruaab@rusys.eg.net<br />Abstract <br />This paper presents a decision support system of hybrid mining algorithms for the Egyptian Cabinet. 
An Autoregression Tree was used to predict the values of the customer trust KPI (key performance indicator). Microsoft association rules were used to correlate the KPI's variables to each other. K-means clustering was used to divide the Egyptian population into 5 natural groups using hard clustering. TF-IDF weights were used to create a logical thesaurus out of Egyptian blogs. This system proved helpful to the decision maker in visualizing the trend of the KPI, anticipating deviations in its values, making historical predictions, analyzing the dependency network generated for these variables, distinguishing both weak and strong relationships among the variables' itemsets, navigating each cluster profile, and making observations concerning some social and political topics. <br />Keywords: Data Mining, Text Mining, Time Series Analysis, Clustering, Association Rules, Decision Support System.<br />Financial Prediction Using Soft Computing Techniques<br />Dina Fawzy (1), Abdel Fattah Hegazy (1), Amr Badr (2)<br />(1) Arab Academy for Science and Technology, College of Information Technology<br />(2) Cairo University, Faculty of Computers and Information, <br />Department of Computer Science<br />engdinafawzy@hotmail.com, ruaab@rusys.eg.net<br /> <br />Abstract <br />A system was implemented using a Probabilistic Neural Network model for stock market prediction, to examine and analyze the use of neural networks as a forecasting tool able to predict future trends of the stock market. Its accuracy was compared against traditional forecasting methods and against other models such as the Multilayer Feed Forward Neural Network, the Multilayer Feed Forward Neural Network/Genetic Algorithm hybrid, and an Ant Colony Optimization model. 
This system is an automated system for trading in financial markets, built on neural network models that can serve as decision-support or decision-making systems; it is thus capable of finding optimal decisions for forecasting stock market status. The system was constructed to predict market direction for both small and large moves. The network output is the predicted price, based on several weeks of past data presented to the network as input: if the predicted price is up, the trading strategy buys contracts; if the predicted price is down, the trading strategy sells contracts. Stock market behavior is non-linear and difficult to forecast, so some assertions are made to model the stock market status as a non-linear dynamical system; neural network models are then used because they are well suited to non-linear problems and volatility forecasting. <br />Keywords: Multilayer Feed Forward Neural Network, Probabilistic Neural Networks, Genetic Algorithm, Ant Colony, Prediction, Stock Market.<br />Analysis of RNA/Interferon Structures for Virus C Type 4<br />Attalah Hashad, Khaled Kamal, Ahmed Fahmy<br />Arab Academy for Science and Technology, <br />Faculty of Engineering, Computer Engineering Department<br />hashad@cairo.aast.edu, khaledkm@hotmail.com, afahmy1610@yahoo.com<br />Amr Badr<br />Cairo University, Faculty of Computers & Information, Computer Science Department<br />ruaab@rusys.eg.net<br />Abstract <br />Hepatitis C genotype 4 is the predominant genotype throughout the Middle East and parts of Africa, with high population prevalence in Egypt. Owing to the world's constant effort to find a treatment for this fatal disease, many studies and trials have been conducted. It has become evident that virus C itself carries a self-destructive gene [1] which, if activated by a specific order to the mRNA aboard the virus, forms interferon. 
Interferon is an anti-viral agent which, if produced from virus C itself, becomes specific to it. Using a variety of bioinformatics tools, we construct algorithms to enhance the chance of finding the gene that orders the mRNA to produce the virus C-specific interferon. Tools such as RNA-to-protein synthesis, gene prediction, protein classification, and gene classification have been constructed and applied toward this goal. As a result of these trials, several matches were obtained with varying percentages, at least confirming the feasibility of this interferon/RNA analysis. <br />Keywords: Virus C type 4, Interferon<br />An Empirical Study of a Conversion Methodology from OO-based Systems to Component-based Systems<br />Hassan Mathkour, Ameur Touir, Hind Hakami, Ghazy Assassa<br />Department of Computer Science, King Saud University, Riyadh, Saudi Arabia<br />mathkour@ccis.ksu.edu.sa, binmathkour@yahoo.com, touir@ccis.ksu.edu.sa <br /> <br />Abstract <br />This paper presents a conversion methodology that generates component-based software systems from object-oriented software systems, together with an experiment demonstrating the methodology at work. The generation process proceeds in several steps: the input software code is first transformed to its corresponding UML design; then the corresponding graphs are created, whose nodes are elements such as classes and interfaces and whose edges are the relationships between those elements. A clustering technique is then used to create a component for each cluster and to regenerate the code accordingly. The framework is platform-independent, and the intermediate outputs are XML-based files. It allows the use of different thresholds to obtain the best solution. <br />Keywords: Co<br />An Automated Arabic Graphology System: A Theoretical and Empirical Study<br />Hassan I. 
Mathkour<br />Department of Computer Science, College of Computer and Information Sciences<br />King Saud University, Riyadh, Saudi Arabia<br />mathkour@ccis.ksu.edu.sa<br /> <br />Abstract <br />In this paper, we present an approach to computerize the novel application of Arabic graphology, the art of predicting personality traits through handwriting analysis. Graphology is used to assess the writer's physical and mental traits. Compared to other psychological tests, graphology provides a simple, thorough, and quick test because it merely needs a handwriting sample to assess the writer's character and capabilities, such as energy level, health condition, sincerity, honesty, trustworthiness, generosity, cunning, intelligence, and selfishness. <br />Very little research has been done to build a reliable computer-aided graphology system for the English language. For Arabic, to the best of our knowledge, no work on this subject is available; this research is an attempt in that direction. We explore the possibility of developing an automated system for discovering personality-related knowledge from Arabic handwriting samples. We collected 1876 handwriting samples written by people belonging to disparate social strata and conducted a personality test to assess their personalities. The system reported here predicted the personality traits with 100% accuracy on the training set of samples from 1000 writers, 67.1% on the 876-sample test set when using equal covariance matrices, and 75.91% on the same test set when using different covariance matrices. <br />Keywords: Graphology, handwriting analysis, personality assessment, personality traits, Arabic optical character<br />Evolutionary Face Recognition using Principal Components Analysis<br />Hatim A. 
Aboalsamh<br />Center of Excellence in Information Assurance (CoEIA),<br />Department of Computer Science<br />College of Computer and Information Sciences<br />King Saud University<br />Riyadh, Kingdom of Saudi Arabia<br />hatim@ksu.edu.sa<br />Abstract<br />Face recognition plays a significant role in physical security applications for access control and real-time video surveillance systems. Popular appearance-based (holistic) approaches to face recognition, such as principal components analysis (PCA), depend on the pre-existence of image datasets on which training is carried out in batch mode. Real-world applications with continuously growing face image datasets suffer under the batch approach, whereas an evolutionary approach allows new training elements to be added without repeating an entire batch training that includes the new elements. In this paper, various incremental PCA (IPCA) training and relearning strategies are proposed and applied to the candid covariance-free incremental principal component algorithm. The effects of the number of increments and of the size of the eigenvectors on the correct recognition rate are studied. Training time for the various training strategies is computed and compared. The results suggest that batch PCA is inferior to all considered IPCAs in recognition rate, and that increment-level relearning yields the best correct recognition rate. On the other hand, batch PCA was found to be faster than all IPCAs. <br />Keywords: Principal Components Analysis, Face Recognition, Neural Networks, Computer Vision. <br />
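The incremental training idea in the last abstract can be illustrated with a minimal pure-Python sketch of the candid covariance-free IPCA (CCIPCA) recurrence for the leading principal component, with the amnesic parameter omitted. This is a generic illustration under those assumptions, not the paper's implementation; the function and variable names are the author's own.

```python
# Minimal sketch: CCIPCA update for the first principal component.
# The eigenvector estimate v is refined one sample at a time, so the
# covariance matrix is never formed and new samples can arrive forever.
from math import sqrt

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    return sqrt(dot(a, a))

def ccipca_first_component(samples):
    """Incrementally estimate the leading eigenvector of the sample
    covariance; returns a unit-length direction estimate."""
    v = None
    for n, x in enumerate(samples, start=1):
        if v is None or norm(v) == 0.0:
            v = list(x)                # initialize with the first sample
            continue
        proj = dot(x, v) / norm(v)     # projection of x onto current estimate
        # Blend the old estimate with the new evidence, weighted by 1/n:
        v = [((n - 1) / n) * vi + (1 / n) * proj * xi
             for vi, xi in zip(v, x)]
    nv = norm(v)
    return [vi / nv for vi in v]

# Toy zero-mean data whose variance is dominated by the first axis.
data = [(3.0, 0.1), (-3.0, -0.1), (2.9, 0.05), (-3.1, -0.02),
        (3.05, 0.08), (-2.95, -0.12)]
v1 = ccipca_first_component(data)
print(v1)  # a unit vector close to (1.0, 0.0), the dominant direction
```

Batch PCA would diagonalize the full covariance of `data` in one pass; the incremental version trades that for a cheap per-sample update, which is the property the abstract exploits for ever-growing face datasets.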