This document discusses the use of fuzzy logic to evaluate biometric traits and determine their security level. It proposes a model that uses biometric characteristics categorized by previous researchers as its knowledge base. The model was simulated using a fuzzy logic library, and fingerprint, hand geometry, hand veins, iris, ear canal and palm print were found to have a medium security level based on metrics such as universality, uniqueness, permanence, collectability, performance and circumvention. The results show that fuzzy logic provides a simple way to draw conclusions from vague information.
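The evaluation described above can be illustrated with a minimal fuzzy sketch. This is not the paper's actual rule base: the metric names are real, but the triangular membership functions and the 0-100 score scale are assumptions chosen only for illustration.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def security_memberships(score):
    """Fuzzify a hypothetical 0-100 metric score into Low/Medium/High
    security-level memberships (illustrative sets, not the paper's)."""
    return {
        "low": tri(score, -1, 0, 50),
        "medium": tri(score, 25, 50, 75),
        "high": tri(score, 50, 100, 101),
    }

m = security_memberships(50)
# A mid-range score belongs fully to "medium" and not at all to the extremes.
```

A full model would combine such memberships across universality, uniqueness, permanence, collectability, performance and circumvention with fuzzy rules, then defuzzify into one security level.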
Advances in wireless communications have led to more and more mobile wireless networks, e.g., mobile ad hoc networks (MANETs), wireless sensor networks, etc. Challenges in MANETs include dynamic network topology, speed, bandwidth, computation capability, scalability, quality of service, and secure and reliable routing. Secure and reliable routing is one of the most important challenges in mobile wireless networks, and the main characteristic of MANETs with respect to security is the lack of a clear line of defence. The shortest-path (SP) routing problem in MANETs therefore becomes a dynamic optimization problem. In this paper, a path detection algorithm and a model to detect intruders, i.e. misbehaving nodes, in the alternative paths are proposed.
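The classical baseline behind the SP routing problem mentioned above can be sketched with Dijkstra's algorithm on a static weighted graph. This is only the textbook building block, not the paper's dynamic path detection algorithm; the toy topology is invented for the example.

```python
import heapq

def dijkstra(graph, src):
    """Single-source shortest-path costs over a weighted adjacency dict."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; a shorter path was already found
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Toy topology: four nodes with symmetric link costs.
g = {"A": {"B": 1, "C": 4}, "B": {"A": 1, "C": 2, "D": 5},
     "C": {"A": 4, "B": 2, "D": 1}, "D": {"B": 5, "C": 1}}
# dijkstra(g, "A") -> {'A': 0, 'B': 1, 'C': 3, 'D': 4}
```

In a MANET the link costs and even the node set change over time, which is why the paper treats SP routing as a dynamic optimization problem rather than a one-shot computation like this.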
Determining costs of construction errors, based on fuzzy logic systems (Mohammad Lemar ZALMAİ)
In construction projects, construction errors negatively affect production, which influences the overall project in both time and budget. Generally, construction companies cannot estimate this kind of error during the bidding process. As a result, they do not account for these costs in the contract budget, and during the contracting period, project participants assume that the project will be executed as scheduled and designed. During the project, the costs of different construction processes then turn out higher than the estimated values due to construction errors.
Errors recognized during the construction process cause time and financial losses, while errors noticed after the project's completion cause repair and correction costs. Moreover, the company may gain a bad reputation in the sector.
The key aims of this study are to analyze project costs by considering construction errors and re-construction costs due to labor errors, using a fuzzy interpretation mechanism. The methodology is applied to a residential construction project. Using this methodology, forthcoming extra costs related to construction errors can be estimated, and precautions can be taken against future legal conflicts between the parties.
Bayesian system reliability and availability analysis under the vague environment (ijsc)
Reliability modeling is the most important discipline of reliability engineering. The main purpose of this paper is to provide a methodology for Bayesian system reliability and availability analysis in a vague environment, based on the Exponential distribution under squared-error symmetric and precautionary asymmetric loss functions. In order to apply the Bayesian approach, model parameters are assumed to be vague random variables with vague prior distributions. This approach is used to create the vague Bayes estimate of system reliability and availability by introducing and applying a theorem called "Resolution Identity" for vague sets. For this purpose, the original problem is transformed into a nonlinear programming problem, which is then divided into eight subproblems to simplify computations. The results obtained for the subproblems can be used to determine the membership functions of the vague Bayes estimate of system reliability, and the subproblems can be solved using any commercial optimizer, e.g. GAMS or LINGO.
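The crisp special case underlying the analysis above can be sketched briefly. The paper works with vague priors and loss functions; the sketch below instead assumes an ordinary conjugate Gamma prior on the Exponential failure rate, under which the squared-error Bayes estimate of the reliability R(t) = exp(-λt) is the posterior mean and has a closed form. The prior hyperparameters are illustrative assumptions.

```python
def bayes_exponential_reliability(failure_times, t, a0=1.0, b0=1.0):
    """Crisp Bayes estimate of R(t) for an Exponential(lambda) lifetime with a
    conjugate Gamma(a0, b0) prior on the failure rate (rate parameterization).
    The posterior is Gamma(a0 + n, b0 + sum(times)), and the posterior mean of
    exp(-lambda * t), i.e. the squared-error Bayes estimate of R(t), is the
    closed form (b / (b + t)) ** a."""
    n = len(failure_times)
    a, b = a0 + n, b0 + sum(failure_times)
    return (b / (b + t)) ** a
```

The paper's contribution is to extend exactly this kind of estimate to vague priors, producing a membership function for the estimate via the Resolution Identity rather than a single number.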
New Watermarking/Encryption Method for Medical Images Full Protection in m-Health (IJECEIAES)
In this paper, we present a new method for medical image security dedicated to m-Health, based on a combination of a novel semi-reversible watermarking approach robust to JPEG compression, a newly proposed fragile watermarking scheme and a newly proposed encryption algorithm. The purpose of combining these three algorithms (encryption, robust watermarking and fragile watermarking) is to ensure the full protection of the medical image, its information and its report in terms of confidentiality and reliability (authentication and integrity). A hardware implementation to evaluate our system is realized on the Texas Instruments C6416 DSK card by converting m-files to C/C++ using MATLAB Coder. Our m-health security system is then run on the Android platform. Experimental results show that the proposed algorithm can achieve high security with good performance.
SYMMETRICAL WEIGHTED SUBSPACE HOLISTIC APPROACH FOR EXPRESSION RECOGNITION (ijcsit)
Human facial expression is a cognitive activity used to deliver opinions to others. This paper mainly evaluates the performance of appearance-based holistic subspace methods built on Principal Component Analysis (PCA). In this work, texture features are extracted from face images using Gabor filters. It was observed that the extracted texture feature vector space is high-dimensional and contains a large amount of redundant content; hence training, testing and classification times increase, and the expression recognition accuracy rate is reduced. To overcome this problem, the Symmetrical Weighted 2DPCA (SW2DPCA) subspace method is introduced. The extracted feature vector space is projected into a subspace using SW2DPCA. The proposed method is formed by applying weighting principles to the odd and even symmetrical decomposition spaces of the training sample sets. Conventional PCA and 2DPCA yield lower recognition rates under larger variations in expression and lighting, due to the many redundant variants in the feature space. The proposed SW2DPCA method addresses this problem by reducing redundant content and discarding unequal variants. In this work, the well-known JAFFE database is used for experiments with the proposed SW2DPCA algorithm. The experimental results show that the facial recognition accuracy rate of the GF+SW2DPCA feature-fusion subspace method increases to 95.24%, compared with the 2DPCA method.
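The dimensionality-reduction step that SW2DPCA builds on can be sketched with plain PCA. This is not SW2DPCA itself (which adds 2D matrix structure and symmetry weighting); it only shows the underlying idea of projecting a high-dimensional feature space onto a few principal directions. The random data stands in for Gabor feature vectors.

```python
import numpy as np

def pca_project(X, k):
    """Project the rows of X onto the top-k principal components
    (plain PCA; SW2DPCA extends this idea with symmetry weighting)."""
    Xc = X - X.mean(axis=0)
    # Eigen-decomposition of the sample covariance matrix.
    cov = Xc.T @ Xc / (len(X) - 1)
    vals, vecs = np.linalg.eigh(cov)
    top = vecs[:, np.argsort(vals)[::-1][:k]]  # k largest-variance directions
    return Xc @ top

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 5))   # stand-in for 10 texture feature vectors
Y = pca_project(X, 2)
# Y has shape (10, 2): each 5-dimensional sample reduced to 2 coordinates,
# ordered so the first coordinate carries the most variance.
```

Reducing the feature space like this is what shortens training, testing and classification time; the paper's weighting scheme additionally targets the redundant and unequal variants that plain PCA keeps.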
MAGNETIC RESONANCE BRAIN IMAGE SEGMENTATION (VLSICS Design)
Segmentation of tissues and structures from medical images is the first step in many image analysis applications developed for medical diagnosis. With the growing research on medical image segmentation, it is essential to categorize the research outcomes and provide researchers with an overview of the existing segmentation techniques for medical images. In this paper, different image segmentation methods applied to magnetic resonance brain images are reviewed. The selection of methods includes sources from image processing journals, conferences, books, dissertations and theses. The conceptual details of the methods are explained, and mathematical details are avoided for simplicity. Both broad and detailed categorizations of the reviewed segmentation techniques are provided. The state-of-the-art research is presented with emphasis on the developed techniques and the image properties they use. The methods described are not always mutually independent; hence, their interrelationships are also stated. Finally, conclusions are drawn summarizing commonly used techniques and their complexities in application.
Design and implementation of network security using genetic algorithm (eSAT Publishing House)
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Effectiveness of MPEG-7 Color Features in Clothing Retrieval (journalBEEI)
Clothing is used by humans to cover the body and includes dresses, pants, skirts and other garments. Clothing usually consists of one color or a combination of several colors, and color is one of the important references used by humans when determining or looking for clothing according to their wishes; it is also one of the features best suited to human vision. Content-Based Image Retrieval (CBIR) is a technique in image retrieval that indexes an image based on characteristics contained in the image, such as color, shape and texture. CBIR can make searching easier because it supports grouping images by their characteristics. In this case, CBIR is used to search for Muslim fashion based on color features. The color features used in this research are the MPEG-7 color descriptors: the Scalable Color Descriptor (SCD) and the Dominant Color Descriptor (DCD). The SCD captures the overall color proportions of the image, while the DCD captures the most dominant colors in the image. For each image of Muslim women's clothing, the extraction process uses SCD and DCD. This study used 150 images of Muslim women's clothing as a dataset, consisting of red, blue, yellow, green and brown classes with 30 images each. The similarity between image features is measured using the Euclidean distance. This study used human perception of clothing color: the effectiveness of the SCD and DCD color features is calculated against human subjective similarity. Based on the effectiveness simulation, the DCD-based system gives higher values than SCD.
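The retrieval step described above reduces to ranking feature vectors by Euclidean distance. The sketch below uses plain three-bin color histograms as a stand-in for the MPEG-7 SCD/DCD descriptors; the image names and vectors are invented for illustration.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def rank_by_color(query_hist, database):
    """Rank database entries (name, feature-vector) by distance to the query;
    the closest color profile comes first."""
    return sorted(database, key=lambda item: euclidean(query_hist, item[1]))

# Hypothetical database of clothing images with (red, green, blue) proportions.
db = [("red_dress", [0.9, 0.05, 0.05]),
      ("blue_dress", [0.1, 0.1, 0.8]),
      ("brown_skirt", [0.5, 0.3, 0.2])]
ranked = rank_by_color([0.85, 0.1, 0.05], db)
# ranked[0][0] == "red_dress"
```

The real SCD and DCD vectors are higher-dimensional and defined by the MPEG-7 standard, but the distance-and-sort retrieval loop is the same.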
An effective approach to offline Arabic handwriting recognition (ijaia)
Segmentation is the most challenging part of Arabic handwriting recognition, due to the unique characteristics of Arabic writing, which allow the same shape to denote different characters. In this paper, an off-line Arabic handwriting recognition system is proposed. The processing is presented in three main stages. Firstly, the image is skeletonized to a one-pixel-thin skeleton. Secondly, each diagonally connected foreground pixel is transferred to the closest horizontal or vertical line. Finally, these orthogonal lines are coded as vectors of unique integers; each vector represents one letter of the word. In order to evaluate the proposed techniques, the system has been tested on the IFN/ENIT database, and the experimental results show that our method is superior to currently available methods.
Abstract:
A combination of the exponential and Lindley failure rate models is considered and named the exponential-Lindley additive failure rate model. In this paper, we study the distributional properties, central and non-central moments, parameter estimation, hypothesis testing and the power of the likelihood ratio criterion for the proposed model.
Key words: Exponential distribution, Lindley distribution, ML estimation, Likelihood ratio type criterion.
Comparative analysis of dynamic programming algorithms to find similarity in gene sequences (eSAT Journals)
Abstract: Many computational methods exist for finding similarity in gene sequences, and finding a method that gives optimal similarity is a difficult task. The objective of this project is to find an appropriate method to compute similarity in gene/protein sequences, both within families and across families. Dynamic programming algorithms such as Levenshtein edit distance, Longest Common Subsequence and Smith-Waterman have been used to find similarities between two sequences, but none of these methods have been evaluated on real benchmark datasets; they have only been applied to synthetic data. We propose a new method to compute similarity. The performance of the proposed algorithm is evaluated using a number of datasets from various families, and the similarity value is calculated both within a family and across families. A comparative analysis and time-complexity study reveal that the Smith-Waterman approach is appropriate when the gene/protein sequences belong to the same family, and Longest Common Subsequence is best suited when the sequences belong to two different families. Keywords - Bioinformatics, Gene, Gene Sequencing, Edit distance, String Similarity.
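Two of the dynamic programming algorithms named above can be sketched directly. These are the textbook formulations of Levenshtein edit distance and LCS length (Smith-Waterman follows the same table-filling pattern with local-alignment scoring), not the paper's proposed method; the short DNA strings are illustrative.

```python
def levenshtein(a, b):
    """Edit distance between two strings via dynamic programming,
    keeping only the previous row of the DP table."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # match / substitution
        prev = cur
    return prev[-1]

def lcs_length(a, b):
    """Length of the longest common subsequence via dynamic programming."""
    prev = [0] * (len(b) + 1)
    for ca in a:
        cur = [0]
        for j, cb in enumerate(b, 1):
            cur.append(prev[j - 1] + 1 if ca == cb
                       else max(prev[j], cur[j - 1]))
        prev = cur
    return prev[-1]

# levenshtein("ACGT", "ACT") == 1; lcs_length("ACGT", "ACT") == 3
```

Both run in O(len(a) * len(b)) time, which is why the comparative time-complexity analysis matters when sequences get long.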
A bidirectional text transcription of Braille for Odia, Hindi, Telugu and English (eSAT Journals)
Abstract
A communication gap commonly leaves the visually challenged unable to share information with the sighted. In order to bridge this gap, this paper emphasizes a proper learning platform, based on the Braille pattern, for both the general public and the visually challenged in any native language. Braille, the code well known to the visually challenged as their mode of communication, is converted into normal text in the Odia, Hindi, Telugu and English languages, using image segmentation as the base technique and MATLAB as the simulation environment, so that anyone can easily decode the information being conveyed. The algorithm makes use of segmentation, histogram analysis, pattern recognition, letter arrays and database generation, with testing in software and deployment on a Spartan-3E FPGA kit. The paper also elaborates on the reverse conversion of the native languages and English to Braille, making the system bidirectional. The processing speed, efficiency and accuracy demonstrate the effectiveness of this approach in both software and hardware.
Keywords: visually challenged, image segmentation, MATLAB, pattern recognition, Spartan-3E FPGA kit, compatibility.
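The final decoding step of such a pipeline, after segmentation and pattern recognition have located the raised dots, is a table lookup. The sketch below is a hypothetical fragment covering only the first five English letters, using the standard 1-6 numbering of dots in a Braille cell; the real system's tables cover four languages.

```python
# Braille cells as tuples of raised-dot numbers (standard 1-6 numbering).
BRAILLE_TO_ENGLISH = {
    (1,): "a", (1, 2): "b", (1, 4): "c", (1, 4, 5): "d", (1, 5): "e",
}

def decode_cells(cells):
    """Map each recognized dot pattern to its letter; '?' for unknown cells."""
    return "".join(BRAILLE_TO_ENGLISH.get(tuple(sorted(c)), "?")
                   for c in cells)

# decode_cells([(1, 2), (1,), (1, 4)]) == "bac"
```

The reverse (text-to-Braille) direction described in the paper is the inverted dictionary, which is what makes the transcription bidirectional.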
BrailleOCR: An Open Source Document to Braille Converter Application (pijush15)
This presentation is about an open-source application, BrailleOCR, that converts scanned documents to Braille and thus helps the visually impaired.
What is the use of this application in real life? BrailleOCR is currently the only app that integrates optical character recognition and Braille translation together, and it can help convert many important documents to Braille. The project site is linked below.
IJCA Paper: http://www.ijcaonline.org/archives/volume68/number16/11664-7254
Project site: https://code.google.com/p/brailleocr/
The app uses a four-step process. Initially, we have a scanned image, which is an RGB image. The first step, pre-processing, converts the RGB image to grayscale. The second step performs character recognition using the Tesseract engine. Since the recognition step may produce errors, the third step, post-processing, corrects them. The final and most important step is Braille conversion.
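The pre-processing step above, converting RGB to grayscale, is commonly done with a luminance-weighted sum; the sketch below uses the ITU-R BT.601 weights as one standard choice (the presentation does not specify which formula BrailleOCR uses).

```python
def rgb_to_grayscale(pixel):
    """Luminance-weighted grayscale conversion (ITU-R BT.601 weights):
    green contributes most because human vision is most sensitive to it."""
    r, g, b = pixel
    return round(0.299 * r + 0.587 * g + 0.114 * b)

# rgb_to_grayscale((255, 255, 255)) == 255  (white stays full intensity)
# rgb_to_grayscale((0, 0, 0)) == 0          (black stays zero)
```

Applying this per pixel yields the single-channel image that the Tesseract recognition step expects.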
International Journal of Engineering and Science Invention (IJESI) (inventionjournals)
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews across the whole field of Engineering, Science and Technology, including new teaching methods, assessment, validation and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance and readability. The articles published in our journal can be accessed online.
Using Grid Puzzle to Solve Constraint-Based Scheduling Problem (csandit)
Constraint programming (CP) is one of the most effective techniques for solving practical operational problems. The outstanding feature of the method is that a set of constraints affecting the solution of a problem can be imposed without the need to explicitly define a linear relation among variables, i.e. an equation. Nevertheless, the challenge of paramount importance in using this technique is how to present the operational problem as a solvable Constraint Satisfaction Problem (CSP) model. The problem modelling is problem dependent and can be an exhaustive task at the beginning stage of problem solving, particularly when the problem is a real-world practical problem. This paper investigates the application of a simple grid puzzle game in which a player attempts to solve a practical scheduling problem. Examination scheduling is presented as an operational game whose rules are set up based on operational practice. CP is then applied to solve the defined puzzle, and the results show the success of the proposed method. The benefit of using a grid puzzle as the model is that the method can amplify the simplicity of CP in solving practical problems.
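The CSP modelling idea behind the grid puzzle can be sketched with a minimal backtracking solver. This is a generic illustration, not the paper's CP system or examination model; the three-exam "timetable" and its single clash constraint are invented for the example.

```python
def solve_csp(variables, domains, constraints, assignment=None):
    """Minimal backtracking CSP solver. Constraints are predicates over a
    (possibly partial) assignment dict and must accept partial assignments."""
    assignment = assignment or {}
    if len(assignment) == len(variables):
        return dict(assignment)
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if all(c(assignment) for c in constraints):
            result = solve_csp(variables, domains, constraints, assignment)
            if result:
                return result
        del assignment[var]  # backtrack
    return None

# Toy exam-timetable grid: three exams, two timeslots; exams that share a
# student (math and physics here) may not share a slot.
exams = ["math", "physics", "art"]
slots = {e: [1, 2] for e in exams}

def no_clash(a):
    return not ("math" in a and "physics" in a and a["math"] == a["physics"])

schedule = solve_csp(exams, slots, [no_clash])
# One valid schedule assigns math and physics to different slots.
```

Note that, as the abstract says, only the constraint (`no_clash`) is declared; no equation relating the variables is needed, and the solver finds a consistent grid on its own.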
Assessment of the wear rate is an inseparable part of the sawability of dimension stone, and an essential task in optimizing diamond wire saw performance. This research aims to provide an accurate, practical and applicable model for predicting the wear rate of diamond beads based on rock properties, using intelligent systems. To this end, 38 cutting test results on 38 different rocks were used, from andesite, limestone and real marble quarries located in eleven areas of Turkey. The wear rate is predicted using optimization techniques: Multilayer Perceptron (MLP) and hybrid Genetic Algorithm-Artificial Neural Network (GA-ANN) models were used to build two estimation models in MATLAB. In this study, 80% of the total samples were chosen randomly as the training dataset, and the remaining 20% were used as testing data for the GA-ANN model. Further, the accuracy and performance of the established models were investigated using the root mean square error (RMSE), the coefficient of determination (R2) and the standard deviation (STD). Finally, the performances of these soft computing techniques were compared, and the results indicate that the hybrid GA-ANN model, with a coefficient of determination (R2) of 0.95 in training and 0.991 in testing, gives more accurate predictions than the MLP models.
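The evaluation metrics named above, RMSE and R2, are standard and can be sketched directly; the toy actual/predicted values are invented to illustrate the computation, not taken from the study.

```python
import math

def rmse(actual, predicted):
    """Root mean square error between paired observations."""
    n = len(actual)
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)

def r_squared(actual, predicted):
    """Coefficient of determination: 1 - (residual SS / total SS)."""
    mean = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean) ** 2 for a in actual)
    return 1 - ss_res / ss_tot

y = [1.0, 2.0, 3.0, 4.0]        # hypothetical measured wear rates
yhat = [1.1, 1.9, 3.2, 3.8]     # hypothetical model predictions
# Perfect predictions give rmse == 0 and r_squared == 1; the closer R2 is
# to 1 (e.g. the study's 0.991 for GA-ANN testing), the better the fit.
```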
The purpose of this research is to develop the Multi-Criteria Group Decision Making (MCGDM) model into Interval-Valued Fuzzy Multi-Criteria Group Decision Making (IV-FMCGDM); the specific purpose is to construct an Adaptive Interval-Valued Fuzzy Analytic Hierarchy Process (AIV-FAHP) decision-making model that uses Triangular Fuzzy Numbers (TFNs) and a group decision aggregation function based on Interval-Valued Geometric Mean Aggregation (IV-GMA). The novelty of this research lies in studying group decision making by improving the middle point of the Interval-Valued Triangular Fuzzy Number (IV-TFN), which provides more accurate modeling, better rating performance and more effective linguistic representation. This research produced a new decision-making model and algorithm based on AIV-FAHP, used to measure the quality of e-learning.
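The aggregation idea behind IV-GMA can be sketched in its simplest, non-interval form: combining several expert judgments given as ordinary triangular fuzzy numbers with a component-wise geometric mean. This is a common group-aggregation step in fuzzy AHP; the paper's IV-GMA operates on interval-valued TFNs, which this sketch does not cover.

```python
def tfn_geometric_mean(tfns):
    """Aggregate expert judgments given as triangular fuzzy numbers (l, m, u)
    with the component-wise geometric mean: each of the lower, middle and
    upper values is multiplied across experts, then the n-th root is taken."""
    n = len(tfns)
    prod = [1.0, 1.0, 1.0]
    for l, m, u in tfns:
        prod[0] *= l
        prod[1] *= m
        prod[2] *= u
    return tuple(p ** (1.0 / n) for p in prod)

# Two identical judgments aggregate to themselves:
# tfn_geometric_mean([(1, 2, 3), (1, 2, 3)]) == (1.0, 2.0, 3.0)
```

The paper's adaptive middle-point improvement modifies exactly the `m` component of such numbers before aggregation.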
Design and implementation of network security using genetic algorithmeSAT Publishing House
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Effectiveness of MPEG-7 Color Features in Clothing RetrievaljournalBEEI
Clothing is a human used to cover the body. Clothing consist of dress, pants, skirts, and others. Clothing usually consists of various colors or a combination of several colors. Colors become one of the important reference used by humans in determining or looking for clothing according to their wishes. Color is one of the features that fit the human vision. Content Based Image Retrieval (CBIR) is a technique in Image Retrieval that give index to an image based on the characteristics contained in image such as color, shape, and texture. CBIR can make it easier to find something because it helps the grouping process on image based on its characteristic. In this case CBIR is used for the searching process of Muslim fashion based on the color features. The color used in this research is the color descriptor MPEG-7 which is Scalable Color Descriptor (SCD) and Dominant Color Descriptor (DCD). The SCD color feature displays the overall color proportion of the image, while the DCD displays the most dominant color in the image. For each image of Muslim women's clothing, the extraction process utilize SCD and DCD. This study used 150 images of Muslim women's clothing as a dataset consistingclass of red, blue, yellow, green and brown. Each class consists of 30 images. The similarity between the image features is measured using the eucludian distance. This study used human perception in viewing the color of clothing.The effectiveness is calculated for the color features of SCD and DCD adjusted to the human subjective similarity. Based on the simulation of effectiveness DCD result system gives higher value than SCD
An effective approach to offline arabic handwriting recognitionijaia
Segmentation is the most challenging part of the Arabic handwriting recognition, due to the unique
characteristics of Arabic writing that allows the same shape to denote different characters. In this paper,
an off-line Arabic handwriting recognition system is proposed. The processing details are presented in
three main stages. Firstly, the image is skeletonized to one pixel thin. Secondly, transfer each diagonally
connected foreground pixel to the closest horizontal or vertical line. Finally, these orthogonal lines are
coded as vectors of unique integer numbers; each vector represents one letter of the word. In order to
evaluate the proposed techniques, the system has been tested on the IFN/ENIT database, and the
experimental results show that our method is superior to those methods currently available.
Abstract:
A combination of exponential and Lindley failure rate model is considered and named it as exponential-Lindley
additive failure rate model. In this paper, we studied the distributional properties, central and non-central moments,
estimation of parameters, testing of hypothesis and the power of likelihood ratio criterion about the proposed model.
Key words: Exponential distribution, Lindley distribution, ML estimation, Likelihood ratio type criterion.
Comparative analysis of dynamic programming algorithms to find similarity in ...eSAT Journals
Abstract There exist many computational methods for finding similarity in gene sequence, finding suitable methods that gives optimal similarity is difficult task. Objective of this project is to find an appropriate method to compute similarity in gene/protein sequence, both within the families and across the families. Many dynamic programming algorithms like Levenshtein edit distance; Longest Common Subsequence and Smith-waterman have used dynamic programming approach to find similarities between two sequences. But none of the method mentioned above have used real benchmark data sets. They have only used dynamic programming algorithms for synthetic data. We proposed a new method to compute similarity. The performance of the proposed algorithm is evaluated using number of data sets from various families, and similarity value is calculated both within the family and across the families. A comparative analysis and time complexity of the proposed method reveal that Smith-waterman approach is appropriate method when gene/protein sequence belongs to same family and Longest Common Subsequence is best suited when sequence belong to two different families. Keywords - Bioinformatics, Gene, Gene Sequencing, Edit distance, String Similarity.
IJRET : International Journal of Research in Engineering and Technology is an international peer-reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together scientists, academicians, field engineers, scholars and students of related fields of Engineering and Technology.
A bidirectional text transcription of braille for odia, hindi, telugu and eng...eSAT Journals
Abstract
A communication gap often leaves the intent behind a message misunderstood, and this is a common problem for visually challenged people. In order to bridge this gap, this paper emphasizes a proper learning platform, for both the general public and the visually challenged, for any native language through Braille patterns. Braille, the code well known to the visually challenged as their mode of communication, is converted into normal text in the Odia, Hindi, Telugu and English languages using image segmentation as the base technique, with MATLAB as the simulation environment, so that anyone can easily decode the information being conveyed. The algorithm makes use of segmentation, histogram analysis, pattern recognition, letter arrays and database generation, with testing in software and deployment on a Spartan 3E FPGA kit. This paper also elaborates on the reverse conversion, from the native languages and English to Braille, making the approach more complete. Processing speed, efficiency and accuracy define the effectiveness of this work as a successful approach in both software and hardware.
Keywords: visually challenged, image segmentation, MATLAB, pattern recognition, Spartan 3e FPGA kit, compatible.
BrailleOCR: An Open Source Document to Braille Converter Applicationpijush15
This presentation is about an open source application, BrailleOCR, that helps convert scanned documents to Braille and thus helps the visually impaired.
What is the use of this application in real life? BrailleOCR is currently the only app that integrates Optical Character Recognition and Braille translation together. It will eventually help convert many important documents to Braille. The project site is given here:
IJCA Paper: http://www.ijcaonline.org/archives/volume68/number16/11664-7254
Project site: https://code.google.com/p/brailleocr/
The app uses a four-step process. Initially, we have a scanned RGB image. The first, pre-processing step converts the RGB image to grayscale. The second step performs character recognition using the Tesseract engine. Since recognition may introduce errors, the third, post-processing step corrects the errors from the previous step. The final and most important step is the Braille conversion step.
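The final conversion step maps recognized characters onto six-dot Braille cells. As a rough sketch (not BrailleOCR's actual translation tables), the Unicode Braille Patterns block can encode Grade-1 English letters directly; the dot assignments below cover only the letters a-j for brevity:

```python
# Standard six-dot assignments for Grade-1 English braille, letters a-j.
# (k-t add dot 3, u-z add dots 3 and 6; omitted here for brevity.)
LETTER_DOTS = {
    "a": (1,), "b": (1, 2), "c": (1, 4), "d": (1, 4, 5), "e": (1, 5),
    "f": (1, 2, 4), "g": (1, 2, 4, 5), "h": (1, 2, 5), "i": (2, 4),
    "j": (2, 4, 5),
}

def to_braille(text: str) -> str:
    """Map lowercase text (a-j and spaces only) to Unicode braille cells.

    In the Unicode block starting at U+2800, dot n sets bit (n-1) of the
    code point offset, so a cell is just a bitmask over the raised dots.
    """
    out = []
    for ch in text.lower():
        if ch == " ":
            out.append("\u2800")  # blank braille cell
            continue
        mask = sum(1 << (d - 1) for d in LETTER_DOTS[ch])
        out.append(chr(0x2800 + mask))
    return "".join(out)

print(to_braille("bad"))  # ⠃⠁⠙
```

The reverse direction (Braille image to text, as in the papers above) inverts this mapping after segmenting the dot grid.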
International Journal of Engineering and Science Invention (IJESI)inventionjournals
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews within the whole field of Engineering, Science and Technology, covering new teaching methods, assessment, validation and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected for publication through double peer review to ensure originality, relevance and readability. The articles published in our journal can be accessed online.
Using Grid Puzzle to Solve Constraint-Based Scheduling Problemcsandit
Constraint programming (CP) is one of the most effective techniques for solving practical operational problems. The outstanding feature of the method is that a set of constraints affecting a solution of a problem can be imposed without a need to explicitly define a linear relation among variables, i.e. an equation. Nevertheless, the challenge of paramount importance in using this technique is how to present the operational problem in a solvable Constraint Satisfaction Problem (CSP) model. The problem modelling is problem dependent and could be an exhaustive task at the beginning stage of problem solving, particularly when the problem is a real-world practical problem. This paper investigates the application of a simple grid puzzle game in which a player attempts to solve a practical scheduling problem. The examination scheduling is presented as an operational game, and the game's rules are set up based on operational practice. CP is then applied to solve the defined puzzle, and the results show the success of the proposed method. The benefit of using a grid puzzle as the model is that the method can amplify the simplicity of CP in solving practical problems.
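To illustrate the CSP view of examination scheduling, here is a minimal backtracking solver over a toy conflict graph. The exams, slots and clashes are invented for illustration, and real CP solvers add constraint propagation and search heuristics on top of this basic scheme:

```python
def solve_csp(variables, domains, conflicts):
    """Backtracking search: assign a slot to each exam so that no two
    conflicting exams (those sharing students) land in the same slot."""
    assignment = {}

    def consistent(var, value):
        # The only constraint type here: conflicting exams differ in slot.
        return all(assignment.get(other) != value
                   for other in conflicts.get(var, ()))

    def backtrack():
        if len(assignment) == len(variables):
            return dict(assignment)
        var = next(v for v in variables if v not in assignment)
        for value in domains[var]:
            if consistent(var, value):
                assignment[var] = value
                result = backtrack()
                if result is not None:
                    return result
                del assignment[var]  # undo and try the next value
        return None  # dead end: no value fits

    return backtrack()

exams = ["math", "physics", "art"]
slots = {e: ["mon", "tue"] for e in exams}
clash = {"math": ["physics"], "physics": ["math"]}  # shared students
print(solve_csp(exams, slots, clash))
```

Note that nothing here is an equation over the variables; the constraints are stated directly, which is the simplicity the abstract refers to.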
Assessment of the wear rate is an inseparable part of evaluating the sawability of dimension stone, and an essential task in optimizing diamond wire saw performance. This research aims to provide an accurate, practical and applicable model for predicting the wear rate of diamond beads from rock properties using intelligent systems. To this end, 38 cutting test results on 38 different rocks were used from andesite, limestone and real marble quarries located in eleven areas of Turkey. The wear rate was predicted using Multilayer Perceptron (MLP) and hybrid Genetic Algorithm-Artificial Neural Network (GA-ANN) models, which were used to build two estimation models in MATLAB. In this study, 80% of the total samples were randomly used as the training dataset, and the remaining 20% was used as the testing data for the GA-ANN model. Further, the accuracy and performance of the established models were investigated using the root mean square error (RMSE), the coefficient of determination (R2) and the standard deviation (STD). Finally, a comparison among these soft computing techniques indicated that the hybrid GA-ANN model, with a coefficient of determination (R2) of 0.95 for training and 0.991 for testing, gives more accurate predictions than the MLP models.
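The accuracy measures used to compare the models are standard and can be computed directly (the data here is made up; the paper's actual wear-rate measurements are not reproduced):

```python
def rmse_r2(actual, predicted):
    """RMSE and coefficient of determination R^2, the two accuracy
    measures used to compare regression models such as MLP and GA-ANN."""
    n = len(actual)
    mean = sum(actual) / n
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean) ** 2 for a in actual)
    return (ss_res / n) ** 0.5, 1 - ss_res / ss_tot

print(rmse_r2([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # perfect fit: (0.0, 1.0)
```

R2 = 1 means the model explains all variance; the paper's GA-ANN testing value of 0.991 is close to that ideal.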
The purpose of this research is to develop the Multi-Criteria Group Decision Making (MCGDM) decision model into Interval Value Fuzzy Multi-Criteria Group Decision Making (IV-FMCGDM); the specific purpose is to construct a decision-making model, the Adaptive Interval Value Fuzzy Analytic Hierarchy Process (AIV-FAHP), that uses Triangular Fuzzy Numbers (TFN) and a group decision aggregation function, Interval Value Geometric Mean Aggregation (IV-GMA). The novelty of this research is to study the concept of group decision making by improving the middle point of the Interval Value Triangular Fuzzy Number (IV-TFN). This provides more accurate modeling, better rating performance and more effective linguistic representation. The research produced a new decision-making model and algorithm based on AIV-FAHP, used to measure the quality of e-learning.
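As an illustration of the fuzzy arithmetic involved, aggregating several experts' triangular fuzzy judgements by a component-wise geometric mean (a plain, non-interval-valued simplification of IV-GMA; the expert values are invented) looks like this:

```python
import math

def tfn_geometric_mean(tfns):
    """Aggregate triangular fuzzy numbers (l, m, u) from several decision
    makers by taking the geometric mean of each component separately."""
    n = len(tfns)
    return tuple(
        math.prod(t[i] for t in tfns) ** (1.0 / n) for i in range(3)
    )

# Two hypothetical expert judgements on a 1-9 fuzzy comparison scale
experts = [(2, 3, 4), (4, 5, 6)]
l, m, u = tfn_geometric_mean(experts)
print(round(l, 3), round(m, 3), round(u, 3))
```

The interval-valued variant studied in the paper carries a lower and an upper TFN per judgement and, per the abstract, adjusts the middle point; the component-wise aggregation pattern stays the same.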
25% of prospects are ready to buy and another 25% do not qualify and will not buy right now. What do you do with the remaining 50%? The answer: lead management (Gestión de Prospectos).
081343812803 - House Renovation Services SurabayaNuri Ita
house contracting services, house repair costs, minimalist house construction prices, minimalist house magazines, house building contractor, remodelling a house into a minimalist style, minimalist house renovation prices, how to calculate house renovation costs, creating house designs
Construction and Renovation Services for Houses / Shop-houses / Warehouses / Property - Galvalum Roof Installation - Interiors
Serving the areas: Surabaya - Sidoarjo - Pasuruan - Mojokerto - Gresik
CALL : 081343812803
Systematization of the "MOMENTICS" project, a methodological model for incorporating ICT into the social sciences area at the Colegio Eduardo Santos school in Bogotá, implemented since 2002 by the teacher Antonio María Clavijo Rodríguez (D.R.A.). Its use for pedagogical or research purposes is authorized, provided the source is cited and the author is duly informed.
Neural Network based Supervised Self Organizing Maps for Face Recognition ijsc
The word biometrics refers to the use of physiological or biological characteristics of humans to recognize and verify the identity of an individual. The face is one of the human biometrics suitable for passive identification, offering uniqueness and stability. In this manuscript we present a new face-based biometric system built on neural-network supervised self-organizing maps (SOM). We name our method SOM-F. We show that the proposed SOM-F method improves the performance and robustness of recognition, apply it to a variety of datasets, and report the results.
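The supervised SOM-F variant is not detailed in the abstract; the sketch below shows only the basic unsupervised SOM competitive-learning update it builds on, with a deterministic initialization and toy 2-D data chosen here for reproducibility:

```python
def train_som(data, n_units=2, epochs=50, lr0=0.5):
    """Tiny self-organizing map: each sample pulls its best-matching
    unit toward it; the learning rate decays linearly to zero."""
    # Deterministic init: seed units on the first n_units samples.
    weights = [list(data[k]) for k in range(n_units)]
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)
        for x in data:
            # Best-matching unit = weight vector closest to the sample.
            bmu = min(range(n_units),
                      key=lambda k: sum((w - c) ** 2
                                        for w, c in zip(weights[k], x)))
            # Move only the BMU toward the sample (no neighborhood here).
            weights[bmu] = [w + lr * (c - w) for w, c in zip(weights[bmu], x)]
    return weights

clusters = [(0.0, 0.0), (0.2, 0.0), (5.0, 5.0), (5.2, 5.0)]
units = train_som(clusters)
print(units)  # one unit settles near each cluster
```

A full SOM also updates the BMU's grid neighbors, and a supervised variant such as SOM-F would additionally use class labels; both are omitted here.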
DCT AND DFT BASED BIOMETRIC RECOGNITION AND MULTIMODAL BIOMETRIC SECURITYIAEME Publication
This research paper discusses the study and analysis conducted on various techniques in the biometric domain. A close look at biometric enhancement techniques and their limitations is presented. This enables researchers to understand the research contributions in the area of DCT- and DFT-based recognition and security, and to locate some crucial limitations of these notable studies. The paper summarizes the different research papers applicable to our topic of research. Biometric recognition and security is a most important subject of research in the area of image processing.
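As background for DCT-based recognition: features are typically taken from the 2-D DCT-II of image blocks, where the low-frequency coefficients concentrate most of the energy. A naive, illustrative implementation (O(N^4), fine for small blocks, and not any specific paper's pipeline):

```python
import math

def dct2(block):
    """Naive 2-D DCT-II of a square block, the transform underlying
    JPEG-style and DCT-based biometric feature extraction."""
    n = len(block)

    def alpha(k):  # orthonormal scaling factors
        return math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)

    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            out[u][v] = alpha(u) * alpha(v) * s
    return out

flat = [[10.0] * 4 for _ in range(4)]
coeffs = dct2(flat)
print(round(coeffs[0][0], 3))  # a constant block puts all energy in DC: 40.0
```

Recognition systems keep only the first few coefficients as the feature vector; the DFT-based methods surveyed in the paper play the same role with complex exponentials.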
Fusion based multimodal authentication in biometrics using context sensitive ...csandit
Biometrics is one of the primary concepts behind real application domains such as Aadhaar cards, passports, PAN cards, etc. In such applications a user can provide two to three biometric patterns such as face, finger, palm, signature, iris data, and so on. We considered face and finger patterns for encoding and then also for verification. Using this data we propose a novel model for authentication in multimodal biometrics called the Context-Sensitive Exponent Associative Memory Model (CSEAM). It provides different stages of security for biometric patterns. In stage 1, face and finger patterns are fused through Principal Component Analysis (PCA); in stage 2, SVD decomposition is applied to generate keys from the fused data and the preprocessed face pattern; and in stage 3, the generated keys are encoded using the CSEAM model. The final key is stored in a smart card. In the CSEAM model, the exponential Kronecker product plays a critical role in encoding and also in verification, to verify the samples chosen by the users. The paper evaluates the approach on realistic biometric data in terms of time and space.
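The abstract does not define the exponential Kronecker product used by CSEAM; the ordinary Kronecker product it builds on can be sketched as follows, with matrices as plain lists of lists:

```python
def kron(A, B):
    """Kronecker product: each entry a of A is replaced by the block a*B,
    so an m*n by p*q input yields an (m*p) x (n*q) result."""
    return [
        [a * b for a in row_a for b in row_b]  # one block-row of output
        for row_a in A for row_b in B
    ]

F = kron([[1, 2], [3, 4]], [[0, 1], [1, 0]])
for row in F:
    print(row)
```

The rapid growth in output size is what makes Kronecker-based encodings expensive, which is presumably why the paper reports time and space measurements.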
Feature selection using modified particle swarm optimisation for face recogni...eSAT Journals
Abstract
One of the major factors influencing classification accuracy is the selection of the right features. Not all features play a vital role in classification; many features in a dataset may be redundant or irrelevant, which increases the computational cost and may reduce the classification rate. In this paper, we use DCT (Discrete Cosine Transform) coefficients as features for a face recognition application. The coefficients are optimally selected by a modified PSO algorithm, in which the choice of coefficients incorporates the average of the mean normalized standard deviations of the various classes and gives more weight to the lower-indexed DCT coefficients. The algorithm is tested on the ORL database. A recognition rate of 97% is obtained, the average number of features selected is about 40 percent for a 10 × 10 input, and the modified PSO takes about 50 iterations to converge. These performance figures are better than some of the work reported in the literature.
Keywords: Particle swarm optimization, Discrete cosine transform, feature extraction, feature selection, face recognition, classification rate.
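The paper's modified PSO (with class-deviation weighting) is not fully specified in the abstract; a generic binary PSO for feature-subset selection, the baseline such modifications start from, might look like the sketch below. The fitness function, particle count and iteration budget are illustrative, not the paper's:

```python
import math
import random

def binary_pso(n_feats, fitness, n_particles=8, iters=30, seed=1):
    """Minimal binary PSO: each particle is a 0/1 mask over features;
    velocities pass through a sigmoid to give bit-set probabilities."""
    rng = random.Random(seed)
    sig = lambda v: 1 / (1 + math.exp(-v))
    X = [[rng.randint(0, 1) for _ in range(n_feats)]
         for _ in range(n_particles)]
    V = [[0.0] * n_feats for _ in range(n_particles)]
    pbest = [x[:] for x in X]                 # personal bests
    gbest = max(pbest, key=fitness)[:]        # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_feats):
                # Pull toward personal and global best; clamp velocity.
                V[i][d] = max(-4.0, min(4.0, V[i][d]
                              + 2 * rng.random() * (pbest[i][d] - X[i][d])
                              + 2 * rng.random() * (gbest[d] - X[i][d])))
                X[i][d] = 1 if rng.random() < sig(V[i][d]) else 0
            if fitness(X[i]) > fitness(pbest[i]):
                pbest[i] = X[i][:]
        gbest = max(pbest, key=fitness)[:]
    return gbest

# Toy fitness: reward masks close to selecting exactly the first 3 features.
target = [1, 1, 1, 0, 0, 0]
best = binary_pso(6, lambda m: -sum(a != b for a, b in zip(m, target)))
print(best)
```

In the feature-selection setting, the fitness would instead be the classification rate obtained with the masked DCT coefficients, penalized by subset size.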
Analysis of the Iriscode Bioencoding SchemeCSCJournals
Cancelable biometrics is a technique used to enhance security and user privacy. Such schemes generate multiple revocable templates from the original biometric template. In this paper, the security of binary template transformations is evaluated through a new transformation for iris templates, called the bioencoding scheme. This transformation and its security are analyzed using Boolean functions and nonlinear Boolean systems. A general discussion on binary template transformations is finally proposed.
With the surge of modern research focus toward pervasive computing, many techniques and challenges need to be addressed in order to effectively create smart spaces and achieve miniaturization. In the process of scaling down to compact devices, the real things to ponder are the information retrieval challenges. In this work, we discuss the aspects of multimedia that make information access challenging. An example pattern recognition scenario is presented, along with the mathematical techniques that can be used to model uncertainty, toward developing a system that can sense, compute and communicate in a way that makes human life easier, with smart objects assisting from the surroundings.
FEATURE EXTRACTION METHODS FOR IRIS RECOGNITION SYSTEM: A SURVEYijcsit
Protection has been one of the biggest fields of study for several years, and the demand for it is growing exponentially, mostly with the rise in sensitive data. The setting can differ, from a single workstation to the cloud, but protection is critically important everywhere. Throughout the past two decades, substantial focus has been given to authentication and validation in the technology model. Identifying a legitimate person has become an increasingly difficult activity with the progression of time. Some attempts have been introduced in this respect, in particular by utilizing human traits such as fingerprints, facial recognition, palm scanning, retinal identification, DNA checking, breathing, speech, and so on. A number of methods for effective iris detection have been suggested and researched. A general overview of current and state-of-the-art approaches to iris recognition is presented in this paper. In addition, significant advances in techniques, algorithms, classifiers, datasets and methodologies for feature extraction are also discussed.
MOCANAR: A MULTI-OBJECTIVE CUCKOO SEARCH ALGORITHM FOR NUMERIC ASSOCIATION RU...cscpconf
Extracting association rules from numeric features involves searching a very large search space. To deal with this problem, this paper uses a meta-heuristic algorithm that we have called MOCANAR. MOCANAR is a Pareto-based multi-objective cuckoo search algorithm which extracts high-quality association rules from numeric datasets. Support, confidence, interestingness and comprehensibility are the objectives considered in MOCANAR. MOCANAR extracts rules incrementally: in each run of the algorithm, a small number of high-quality rules are produced. In this paper, a comprehensive taxonomy of meta-heuristic algorithms is presented. Using this taxonomy, we decided to use a cuckoo search algorithm because it is one of the most mature algorithms and is also simple to use and easy to comprehend. In addition, to our knowledge this method has not previously been used as a multi-objective algorithm, nor has it been used in the association rule mining area. To demonstrate the merit and associated benefits of the proposed methodology, it has been applied to a number of datasets, and high-quality results in terms of the objectives were extracted.
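Of the four objectives, support and confidence are standard association-rule measures and can be computed directly; the transaction data below is invented for illustration, and the other two objectives (interestingness, comprehensibility) have paper-specific definitions not reproduced here:

```python
def rule_metrics(transactions, antecedent, consequent):
    """Support and confidence of the rule antecedent -> consequent,
    two of the four objectives MOCANAR optimizes."""
    a, c = set(antecedent), set(consequent)
    n = len(transactions)
    n_a = sum(1 for t in transactions if a <= set(t))        # antecedent hits
    n_ac = sum(1 for t in transactions if (a | c) <= set(t)) # rule hits
    support = n_ac / n
    confidence = n_ac / n_a if n_a else 0.0
    return support, confidence

tx = [{"milk", "bread"}, {"milk", "bread", "eggs"}, {"bread"}, {"milk"}]
s, c = rule_metrics(tx, {"milk"}, {"bread"})
print(s, c)  # 0.5 and 2/3
```

A Pareto-based search keeps the set of rules for which no other rule is at least as good on every objective and strictly better on one.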
ESTIMATING PROJECT DEVELOPMENT EFFORT USING CLUSTERED REGRESSION APPROACHcscpconf
Due to the intangible nature of software, accurate and reliable software effort estimation is a challenge in the software industry. Very accurate estimates of software development effort are unlikely because of the inherent uncertainty in software development projects and the complex and dynamic interaction of the factors that impact software development. Heterogeneity exists in software engineering datasets because the data are made available from diverse sources. It can be reduced by defining relationships between the data values and classifying them into different clusters. This study focuses on how the combination of clustering and regression techniques can reduce the loss of predictive efficiency due to heterogeneity of the data. A clustered approach creates subsets of the data with a degree of homogeneity that enhances prediction accuracy. It was also observed in this study that ridge regression performs better than the other regression techniques used in the analysis.
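The clustering-then-regression idea can be illustrated with a deliberately simple sketch: split hypothetical (size, effort) data at a threshold and fit ordinary least squares per cluster. The paper itself uses ridge regression and proper clustering; everything below is an assumption for illustration:

```python
def fit_line(points):
    """Ordinary least squares for y = a*x + b over (x, y) pairs."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n

def clustered_fit(points, split):
    """Partition projects by a size threshold, then fit one regression
    per cluster, so each model sees more homogeneous data."""
    low = [p for p in points if p[0] < split]
    high = [p for p in points if p[0] >= split]
    return fit_line(low), fit_line(high)

# Hypothetical (size, effort) data with two distinct regimes
data = [(1, 2), (2, 4), (3, 6), (10, 35), (12, 41), (14, 47)]
low_fit, high_fit = clustered_fit(data, split=5)
print(low_fit, high_fit)  # (2.0, 0.0) and (3.0, 5.0)
```

A single global line would compromise between the two regimes; fitting per cluster recovers each regime's slope exactly here, which is the accuracy gain the abstract describes.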