Recently, several techniques have been presented for blind source separation (BSS) using linear or non-linear mixture models. The problem is to recover the original source signals without a priori information about the mixture model. Accordingly, several statistics- and information-theory-based objective functions have been used in the literature to estimate the original signals without knowledge of the mixture model. Swarm intelligence has played a major role in estimating the separating matrix. In our work, we consider a recent optimization algorithm, the Artificial Bee Colony (ABC) algorithm, to generate the separating matrix in an optimal way. The employed-bee, onlooker-bee, and scout-bee phases are used to generate the optimal separating matrix in fewer iterations. New solutions are generated according to three major considerations: 1) all elements of the separating matrix should be changed according to the best solution, 2) individual elements of the separating matrix should be changed to converge to the best optimal solution, and 3) random solutions should be added. These three considerations are implemented in the ABC algorithm to improve performance in blind source separation. Experiments were carried out using speech signals and super- and sub-Gaussian signals to validate performance. The proposed technique was compared with a genetic algorithm (GA) for signal separation. The results show that the ABC technique outperformed the existing GA technique, achieving better fitness values and smaller Euclidean distance.
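The three update rules above can be illustrated with a compact sketch. Everything concrete here is an assumption for illustration: the 2x2 toy mixture, the decorrelation-plus-determinant fitness (the abstract does not specify the paper's actual objective), the parameter values, and the merging of the employed- and onlooker-bee phases into a single greedy loop.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy BSS instance: two non-Gaussian sources and a 2x2 linear mixture.
n = 2000
S = np.vstack([np.sign(rng.standard_normal(n)) * rng.exponential(1.0, n),
               rng.standard_normal(n) ** 3])
A = np.array([[1.0, 0.6], [0.4, 1.0]])
X = A @ S

def fitness(w_flat):
    # Hypothetical objective: decorrelate the separated signals (sum of
    # squared off-diagonal correlations of Y = W X) plus a determinant
    # penalty that keeps W away from degenerate solutions.
    W = w_flat.reshape(2, 2)
    Y = W @ X
    C = np.corrcoef(Y)
    return float(np.sum(C[~np.eye(2, dtype=bool)] ** 2)
                 + abs(abs(np.linalg.det(W)) - 1.0))

pop, dim, limit = 20, 4, 10
food = rng.uniform(-2, 2, (pop, dim))      # candidate separating matrices
fit = np.array([fitness(f) for f in food])
trials = np.zeros(pop, dtype=int)

for _ in range(200):
    best = food[np.argmin(fit)].copy()
    for i in range(pop):
        # rule 1: pull every element toward the best solution ...
        cand = food[i] + rng.uniform(0, 1, dim) * (best - food[i])
        # rule 2: ... then perturb a single element against a random neighbour
        k = int(rng.integers(pop))
        j = int(rng.integers(dim))
        cand[j] = food[i][j] + rng.uniform(-1, 1) * (food[i][j] - food[k][j])
        fc = fitness(cand)
        if fc < fit[i]:
            food[i], fit[i], trials[i] = cand, fc, 0
        else:
            trials[i] += 1
    # rule 3 (scout phase): replace exhausted solutions with random ones
    for i in range(pop):
        if trials[i] > limit:
            food[i] = rng.uniform(-2, 2, dim)
            fit[i] = fitness(food[i])
            trials[i] = 0

W = food[np.argmin(fit)].reshape(2, 2)
print("best fitness:", fit.min())
```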
An Approach to Face Recognition Using Feed Forward Neural Network (Editor IJCATR)
Abstract: Many approaches have been proposed for face recognition, but major constraints such as illumination, lighting, and pose, when taken into consideration, result in a poor recognition rate. We propose a method to improve the recognition rate of a face recognition system that uses measures such as homogeneity, energy, covariance, contrast, asymmetry, correlation, mean, standard deviation, entropy, and kurtosis to extract facial features for a better recognition rate. The extracted features are then used to train a feed-forward back-propagation neural network for classification to render better results.
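Several of the listed features (homogeneity, energy, contrast, correlation, entropy) are classically derived from a gray-level co-occurrence matrix (GLCM); the rest (mean, standard deviation, kurtosis, asymmetry) are ordinary moments of the intensity histogram. A minimal GLCM sketch, assuming 8 gray levels and a horizontal one-pixel offset (the abstract does not state the actual configuration):

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    # Normalized co-occurrence matrix for pixel pairs offset by (dy, dx).
    q = (img.astype(float) / img.max() * (levels - 1)).astype(int)
    M = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            M[q[y, x], q[y + dy, x + dx]] += 1
    return M / M.sum()

def texture_features(P):
    i, j = np.indices(P.shape)
    mu_i, mu_j = (i * P).sum(), (j * P).sum()
    sd_i = np.sqrt(((i - mu_i) ** 2 * P).sum())
    sd_j = np.sqrt(((j - mu_j) ** 2 * P).sum())
    return {
        "contrast":    ((i - j) ** 2 * P).sum(),
        "energy":      (P ** 2).sum(),
        "homogeneity": (P / (1.0 + np.abs(i - j))).sum(),
        "correlation": ((i - mu_i) * (j - mu_j) * P).sum() / (sd_i * sd_j),
        "entropy":     -(P[P > 0] * np.log2(P[P > 0])).sum(),
    }

rng = np.random.default_rng(1)
img = rng.integers(0, 256, (32, 32))   # stand-in for a face image patch
feats = texture_features(glcm(img))
print(feats)
```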
COMPARATIVE ANALYSIS OF MINUTIAE BASED FINGERPRINT MATCHING ALGORITHMS (ijcsit)
Biometric matching involves finding the similarity between fingerprint images. The accuracy and speed of the matching algorithm determine its effectiveness. This research aims at comparing two types of matching algorithms, namely (a) matching using global orientation features and (b) matching using minutia triangulation. The comparison is done using accuracy, time, and number of similar features. The experiment is conducted on a dataset of 100 candidates using four (4) fingerprints from each candidate. The data is sampled from a mass registration conducted by a reputable organization in Kenya. The research reveals that fingerprint matching based on algorithm (b) performs better in speed, with an average of 38.32 milliseconds, compared to matching based on algorithm (a), with an average of 563.76 milliseconds. On accuracy, algorithm (a) performs better, with an average accuracy of 0.142433, compared to algorithm (b), with an average accuracy score of 0.004202.
K-Medoids Clustering Using Partitioning Around Medoids for Performing Face Re... (ijscmcj)
Face recognition is one of the most unobtrusive biometric techniques and can be used for access control as well as surveillance purposes. Various methods for implementing face recognition have been proposed, with varying degrees of performance in different scenarios. The most common issue with effective facial biometric systems is high susceptibility to variations in the face owing to factors like changes in pose, varying illumination, different expressions, presence of outliers, noise, etc. This paper explores a novel technique for face recognition that classifies face images using an unsupervised learning approach, K-Medoids clustering. The Partitioning Around Medoids (PAM) algorithm has been used for performing K-Medoids clustering of the data. The results suggest increased robustness to noise and outliers in comparison to other clustering methods. The technique can therefore be used to increase the overall robustness of a face recognition system, increase its invariance, and make it a reliably usable biometric modality.
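The clustering step can be sketched as follows. Note this is the simpler Voronoi-iteration variant of k-medoids rather than the full PAM build-and-swap procedure the paper uses, and the two-blob data is synthetic.

```python
import numpy as np

def k_medoids(X, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)   # pairwise distances
    medoids = rng.choice(len(X), k, replace=False)
    for _ in range(iters):
        labels = np.argmin(D[:, medoids], axis=1)         # assign to nearest medoid
        new = medoids.copy()
        for c in range(k):
            members = np.where(labels == c)[0]
            # move the medoid to the member minimizing total intra-cluster distance
            new[c] = members[np.argmin(D[np.ix_(members, members)].sum(axis=1))]
        if np.array_equal(np.sort(new), np.sort(medoids)):
            break
        medoids = new
    labels = np.argmin(D[:, medoids], axis=1)
    return medoids, labels

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(3, 0.3, (30, 2))])
medoids, labels = k_medoids(X, 2)
print(X[medoids].round(2), np.bincount(labels, minlength=2))
```

Because medoids are actual data points rather than means, a single far-away outlier cannot drag a cluster center toward itself, which is the robustness property the abstract refers to.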
Comparison on PCA ICA and LDA in Face Recognition (ijdmtaiir)
Face recognition is used in a wide range of applications. In recent years, face recognition has become one of the most successful applications of image analysis and understanding. Using different statistical methods, research groups have reported contradictory results when comparing the principal component analysis (PCA), independent component analysis (ICA), and linear discriminant analysis (LDA) algorithms proposed in recent years. The goal of this paper is to compare and analyze the three algorithms and conclude which is best. The FERET dataset is used for consistency.
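The essential difference between the compared subspace methods can be shown on synthetic data: PCA keeps directions of maximum variance, while LDA keeps directions of maximum class separation (ICA, not shown, instead seeks statistically independent components, typically via FastICA). A hedged numpy sketch:

```python
import numpy as np

def pca_fit(X, n_components):
    # PCA via eigen-decomposition of the covariance matrix.
    mu = X.mean(axis=0)
    C = np.cov(X - mu, rowvar=False)
    vals, vecs = np.linalg.eigh(C)
    order = np.argsort(vals)[::-1][:n_components]
    return mu, vecs[:, order]

def lda_direction(X0, X1):
    # Two-class Fisher LDA: w = Sw^{-1} (mu1 - mu0), normalized.
    mu0, mu1 = X0.mean(0), X1.mean(0)
    Sw = (np.cov(X0, rowvar=False) * (len(X0) - 1)
          + np.cov(X1, rowvar=False) * (len(X1) - 1))
    w = np.linalg.solve(Sw, mu1 - mu0)
    return w / np.linalg.norm(w)

rng = np.random.default_rng(3)
X0 = rng.normal([0, 0], [1.0, 0.2], (100, 2))     # class 0
X1 = rng.normal([2, 0.5], [1.0, 0.2], (100, 2))   # class 1
X = np.vstack([X0, X1])

mu, W = pca_fit(X, 1)          # direction of maximum variance
w = lda_direction(X0, X1)      # direction of maximum class separation
proj0, proj1 = X0 @ w, X1 @ w
print("PCA axis:", W.ravel(), "LDA axis:", w)
```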
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
A NOVEL DATA DICTIONARY LEARNING FOR LEAF RECOGNITION (sipij)
Automatic leaf recognition via image processing has become greatly important for a number of professionals, such as botanical taxonomists, environmental protectors, and foresters. Learning an over-complete leaf dictionary is an essential step in leaf image recognition, but large leaf-image dimensions and the number of training images stand in the way of building a fast, complete leaf data dictionary. In this work an efficient approach is applied to construct an over-complete leaf data dictionary for a set of large-dimension images based on sparse representation. In the proposed method a new cropped-contour method is used to crop the training images. The experiments test the correlation between the sparse representation and the data dictionary, with a focus on computing time.
This paper compares two different techniques of iris recognition and explains the steps of extracting the iris from an eye image, palette formation, and conditioning of the palette for matching. The focus of the paper is on finding the suitable method for iris recognition on the basis of recognition time, recognition rate, false detection rate, conditioning time, algorithm complexity, bulk detection, and database handling. In this paper a comparison has been performed between two methods: feature-based and phase-based recognition.
FACE RECOGNITION USING DIFFERENT LOCAL FEATURES WITH DIFFERENT DISTANCE TECHN... (IJCSEIT Journal)
A face recognition system using different local features with different distance measures is proposed in this paper. The proposed method is fast and gives accurate detection. The feature vector is based on eigenvalues, eigenvectors, and diagonal vectors of sub-images. Images are partitioned into sub-images to detect local features. Sub-partitions are rearranged into vertical and horizontal matrices. Eigenvalues, eigenvectors, and diagonal vectors are computed for these matrices. A global feature vector is generated for face recognition. Experiments are performed on the benchmark YALE face database. Results indicate that the proposed method gives better recognition performance in terms of average recognition rate and retrieval time compared to existing methods.
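The local-feature construction described above can be sketched roughly as follows. The 8x8 block size, the random stand-in images, and keeping only the real parts of the eigenvalues are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def local_eigen_features(img, block=8):
    # Split the image into block x block sub-images and collect the
    # eigenvalues (real parts, sorted) and main diagonal of each sub-image.
    h, w = img.shape
    feats = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            sub = img[y:y + block, x:x + block].astype(float)
            vals = np.sort(np.linalg.eigvals(sub).real)[::-1]
            feats.extend(vals)
            feats.extend(np.diag(sub))
    return np.asarray(feats)

rng = np.random.default_rng(4)
face_a = rng.integers(0, 256, (32, 32))   # stand-ins for face images
face_b = rng.integers(0, 256, (32, 32))
fa = local_eigen_features(face_a)
fb = local_eigen_features(face_b)
dist = np.linalg.norm(fa - fb)            # Euclidean matching distance
print(fa.shape, dist)
```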
Data Analytics on Solar Energy Using Hadoop (IJMERJOURNAL)
ABSTRACT: Missing data is one of the major issues in data mining and pattern recognition. The knowledge contained in attributes with missing data values is important in improving the regression-correlation process of an organization. Learning from each instance is necessary, as it may contain some exceptional knowledge. There are various methods to handle missing data in regression correlation. We analyze photovoltaic cells and the sunlight striking different geographical locations to identify defective or disconnected photovoltaic panels. We mainly aim to showcase the energy produced at different geographical locations and to find the defective panels. We also analyze the datasets of energy produced, along with the current weather in a particular area, to know the status of the photovoltaic panels. In this project, we used the Hadoop MapReduce framework to analyze the solar energy datasets.
A Binary Bat Inspired Algorithm for the Classification of Breast Cancer Data (ijscai)
Advancements in information technology have made a major impact on medical science, where researchers come up with new ideas for improving the classification rate of various diseases. Breast cancer is one such disease, killing a large number of people around the world. Diagnosing the disease at its earliest instance makes a huge impact on its treatment. The authors propose a Binary Bat Algorithm (BBA) based Feedforward Neural Network (FNN) hybrid model, where the advantages of BBA and the efficiency of FNN are exploited for the classification of three benchmark breast cancer datasets into malignant and benign cases. Here BBA is used to generate a V-shaped hyperbolic tangent function for training the network, and a fitness function is used for error minimization. FNN-BBA based classification produces 92.61% accuracy for training data and 89.95% for testing data.
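The V-shaped hyperbolic-tangent transfer function mentioned above is commonly used in binary metaheuristics to map a real-valued velocity to a bit-flip probability. A sketch of that mechanism alone; the wiring into BBA's full frequency/velocity update and the FNN training loop are omitted:

```python
import numpy as np

rng = np.random.default_rng(5)

def v_transfer(v):
    # V-shaped transfer function |tanh(v)|: larger |velocity| means a
    # higher probability of flipping the corresponding bit.
    return np.abs(np.tanh(v))

def binary_update(position, velocity):
    # Flip each bit independently with probability given by the transfer function.
    flip = rng.random(position.shape) < v_transfer(velocity)
    return np.where(flip, 1 - position, position)

pos = rng.integers(0, 2, 10)     # e.g. a binary encoding of network weights/features
vel = rng.normal(0, 1, 10)
new_pos = binary_update(pos, vel)
print(pos, new_pos)
```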
META-HEURISTICS BASED ARF OPTIMIZATION FOR IMAGE RETRIEVAL (IJCSEIT Journal)
The proposed approach avoids the semantic gap in image retrieval by combining automatic relevance feedback and a modified stochastic algorithm. A visual feature database is constructed from the image database using a combined feature vector. Very few fast-computable features are included in this step. The user selects the query image, and based on that, the system ranks the whole dataset. The nearest images are retrieved and the first automatic relevance feedback is generated. The combined similarity of the textual and visual feature spaces is evaluated using Latent Semantic Indexing, and the images are labelled as relevant or irrelevant. The feedback drives a feature re-weighting process and is routed to the particle swarm optimizer. Instead of the classical swarm-update approach, the swarm is split so that each sub-swarm performs the search in parallel, thereby increasing the performance of the system. It provides a powerful optimization tool and an effective space-exploration mechanism. The proposed approach aims to achieve the following goals without any human interaction: to cluster relevant images using meta-heuristics and to dynamically modify the feature space by feeding back automatic relevance feedback.
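The split-swarm idea can be sketched with a minimal PSO in which independent sub-swarms search separately (a real implementation would run them in parallel); the sphere objective and all coefficients here are illustrative assumptions, not the paper's feature-re-weighting objective.

```python
import numpy as np

def pso(objective, bounds, n_swarms=2, swarm_size=10, iters=100, seed=6):
    # Minimal particle swarm optimizer; the population is split into
    # n_swarms sub-swarms, each with its own global best.
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(bounds)
    best_x, best_val = None, np.inf
    for _ in range(n_swarms):
        x = rng.uniform(lo, hi, (swarm_size, dim))
        v = np.zeros_like(x)
        pbest = x.copy()
        pval = np.apply_along_axis(objective, 1, x)
        g = pbest[np.argmin(pval)].copy()
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)
            val = np.apply_along_axis(objective, 1, x)
            better = val < pval
            pbest[better], pval[better] = x[better], val[better]
            g = pbest[np.argmin(pval)].copy()
        if pval.min() < best_val:
            best_val, best_x = float(pval.min()), g
    return best_x, best_val

sphere = lambda p: float(np.sum(p ** 2))   # stand-in objective
best, val = pso(sphere, [(-5.0, 5.0), (-5.0, 5.0)])
print(best, val)
```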
Sparse representation based classification of mr images of brain for alzheime... (eSAT Publishing House)
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology
A Genetic Algorithm on Optimization Test Functions (IJMERJOURNAL)
ABSTRACT: Genetic Algorithms (GAs) have become increasingly useful over the years for solving combinatorial problems. Though they are generally accepted to be good performers among metaheuristic algorithms, most works have concentrated on applications of GAs rather than their theoretical justification. In this paper, we examine and justify the suitability of Genetic Algorithms for solving complex, multi-variable and multi-modal optimization problems. To achieve this, a simple Genetic Algorithm was used to solve four standard, complicated optimization test functions, namely the Rosenbrock, Schwefel, Rastrigin and Shubert functions. These functions are benchmarks that test the quality of an optimization procedure in reaching a global optimum. We show that the method converges quickly to the global optima and that the optimal values for the Rosenbrock, Rastrigin, Schwefel and Shubert functions are zero (0), zero (0), -418.9829 and -14.5080 respectively.
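One of the four benchmarks, the Rastrigin function, together with a simple GA of the general kind described, can be sketched as follows. The operators (tournament selection, arithmetic crossover, Gaussian mutation, elitism) and all parameters are assumptions; the abstract does not specify the GA's configuration.

```python
import numpy as np

def rastrigin(x):
    # Rastrigin test function; global minimum 0 at the origin.
    return float(10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))

def simple_ga(f, dim=2, pop=50, gens=200, bound=5.12, seed=7):
    rng = np.random.default_rng(seed)
    P = rng.uniform(-bound, bound, (pop, dim))
    for _ in range(gens):
        fit = np.apply_along_axis(f, 1, P)
        # binary tournament selection
        a, b = rng.integers(pop, size=(2, pop))
        parents = np.where((fit[a] < fit[b])[:, None], P[a], P[b])
        # arithmetic crossover with a random partner
        mates = parents[rng.permutation(pop)]
        alpha = rng.random((pop, 1))
        children = alpha * parents + (1 - alpha) * mates
        # Gaussian mutation on ~20% of genes
        children += rng.normal(0, 0.1, children.shape) * (rng.random(children.shape) < 0.2)
        children = np.clip(children, -bound, bound)
        # elitism: carry over the best individual unchanged
        children[0] = P[np.argmin(fit)]
        P = children
    fit = np.apply_along_axis(f, 1, P)
    return P[np.argmin(fit)], float(fit.min())

best, val = simple_ga(rastrigin)
print(best, val)
```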
NEURAL NETWORK BASED SUPERVISED SELF ORGANIZING MAPS FOR FACE RECOGNITION (ijsc)
The word biometrics refers to the use of physiological or biological characteristics of humans to recognize and verify the identity of an individual. The face is one of the human biometrics used for passive identification, with uniqueness and stability. In this manuscript we present a new face-based biometric system based on neural-network supervised self-organizing maps (SOM). We name our method SOM-F. We show that the proposed SOM-F method improves the performance and robustness of recognition. We apply the proposed method to a variety of datasets and show the results.
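The core SOM training loop, find the best-matching unit for a sample and pull its grid neighbourhood toward that sample, can be sketched as follows. The supervised SOM-F labeling layer is not reproduced here, and the grid size, learning-rate and neighbourhood schedules are illustrative assumptions.

```python
import numpy as np

def train_som(X, grid=(5, 5), iters=500, lr0=0.5, sigma0=2.0, seed=8):
    rng = np.random.default_rng(seed)
    W = rng.random((grid[0], grid[1], X.shape[1]))   # codebook vectors on a 2D grid
    gy, gx = np.indices(grid)
    for t in range(iters):
        lr = lr0 * (1 - t / iters)                   # decaying learning rate
        sigma = sigma0 * (1 - t / iters) + 0.5       # shrinking neighbourhood
        x = X[rng.integers(len(X))]
        d = np.linalg.norm(W - x, axis=2)
        by, bx = np.unravel_index(np.argmin(d), grid)  # best-matching unit
        h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sigma ** 2))
        W += lr * h[..., None] * (x - W)             # pull neighbourhood toward x
    return W

rng = np.random.default_rng(9)
X = rng.random((200, 3))   # stand-in for flattened face feature vectors
W = train_som(X)
# after training, every sample should lie close to its best-matching unit
errs = [float(np.min(np.linalg.norm(W - x, axis=2))) for x in X]
print(np.mean(errs))
```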
Estimating project development effort using clustered regression approachcsandit
Due to the intangible nature of “software”, accurate and reliable software effort estimation is a
challenge in the software Industry. It is unlikely to expect very accurate estimates of software
development effort because of the inherent uncertainty in software development projects and the
complex and dynamic interaction of factors that impact software development. Heterogeneity
exists in the software engineering datasets because data is made available from diverse sources.
This can be reduced by defining certain relationship between the data values by classifying
them into different clusters. This study focuses on how the combination of clustering and
regression techniques can reduce the potential problems in effectiveness of predictive efficiency
due to heterogeneity of the data. Using a clustered approach creates the subsets of data having
a degree of homogeneity that enhances prediction accuracy. It was also observed in this study
that ridge regression performs better than other regression techniques used in the analysis.
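The cluster-then-regress pipeline can be sketched as follows: partition the data, fit a closed-form ridge model per cluster, and route new points through their nearest cluster. The two synthetic "project populations", the k=2 farthest-point seeding, and all parameters are illustrative assumptions.

```python
import numpy as np

def ridge_fit(X, y, alpha=1.0):
    # Closed-form ridge: w = (Xb^T Xb + alpha I)^{-1} Xb^T y, bias column appended.
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.linalg.solve(Xb.T @ Xb + alpha * np.eye(Xb.shape[1]), Xb.T @ y)

def two_means(X, iters=50, seed=10):
    # k-means with k=2, seeded with a farthest-point pair so neither
    # cluster starts empty on well-separated data.
    rng = np.random.default_rng(seed)
    i0 = int(rng.integers(len(X)))
    i1 = int(np.argmax(np.linalg.norm(X - X[i0], axis=1)))
    C = X[[i0, i1]].astype(float)
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(X[:, None] - C[None], axis=2), axis=1)
        C = np.array([X[labels == c].mean(axis=0) for c in range(2)])
    labels = np.argmin(np.linalg.norm(X[:, None] - C[None], axis=2), axis=1)
    return labels, C

# Heterogeneous data: two project populations with different effort slopes.
rng = np.random.default_rng(11)
small = rng.uniform(0, 1, (50, 1))
large = rng.uniform(3, 4, (50, 1))
X = np.vstack([small, large])
y = np.concatenate([2 * small.ravel() + 1, 8 * large.ravel() - 10])

labels, C = two_means(X)
models = {c: ridge_fit(X[labels == c], y[labels == c], alpha=0.1) for c in (0, 1)}

def predict(x):
    # route the new project to its nearest cluster, then apply that cluster's model
    c = int(np.argmin(np.linalg.norm(C - x, axis=1)))
    return float(np.append(x, 1.0) @ models[c])

print(predict(np.array([0.5])), predict(np.array([3.5])))
```

A single global linear model would have to average the two slopes; the per-cluster models recover each population's own effort relationship, which is the homogeneity gain the study describes.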
Hierarchical Gaussian Scale-Space on Androgenic Hair Pattern Recognition (TELKOMNIKA JOURNAL)
The androgenic hair pattern has been considered a new biometric trait since 2014. Research to improve the performance of androgenic hair pattern recognition systems has developed in response to the problems that occur when more apparent biometric traits, such as the face, are hidden from sight. The recognition system was built with a hierarchical Gaussian scale-space using 4 octaves and 3 levels in each octave. The system also implemented an equalization process to adjust image intensity using histogram equalization. We analyzed 400 images of androgenic hair in the database using 2-fold and 10-fold cross-validation, with Euclidean distance for classification. The experimental results showed that our proposed method gave better performance compared to previous work that used the Haar wavelet transform and principal component analysis as the main methods. The best recognition precision was 94.23%, obtained from the base octave with the third level using histogram equalization and 10-fold cross-validation.
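The histogram-equalization step used to adjust image intensity can be sketched as follows (the image here is a synthetic low-contrast stand-in, not an androgenic hair sample):

```python
import numpy as np

def equalize(img, levels=256):
    # Classic histogram equalization: map intensities through the
    # normalized cumulative histogram to spread them over the full range.
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * (levels - 1)),
                  0, levels - 1).astype(np.uint8)
    return lut[img]

rng = np.random.default_rng(12)
# low-contrast image: intensities squeezed into [100, 140]
img = rng.integers(100, 141, (64, 64)).astype(np.uint8)
eq = equalize(img)
print(img.min(), img.max(), "->", eq.min(), eq.max())
```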
Geometric Correction for Braille Document Images (csandit)
Image processing is an important research area in computer vision. Clustering is an unsupervised approach and can also be used for image segmentation. Many methods exist for image segmentation, which plays an important role in image analysis: it is one of the first and most important tasks in image analysis and computer vision. The proposed system presents a variation of the fuzzy c-means algorithm that provides image clustering. The kernel fuzzy c-means clustering algorithm (KFCM) is derived from the fuzzy c-means clustering algorithm (FCM). The KFCM algorithm provides image clustering and improves accuracy significantly compared with the classical fuzzy c-means algorithm. The new algorithm, called the Gaussian-kernel-based fuzzy c-means clustering algorithm (GKFCM), is characterized by the use of a fuzzy clustering approach that aims to guarantee noise insensitivity and image-detail preservation. The objective of the work is to cluster the low-intensity inhomogeneity areas of noisy images using the clustering method, segmenting those portions separately using a content-level set approach. The purpose of designing this system is to produce better segmentation results for images corrupted by noise, so that it can be useful in fields like medical image analysis, for tasks such as tumor detection, the study of anatomical structure, and treatment planning.
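The kernelized membership update can be sketched roughly as below, using the kernel-induced distance d2(x, v) = 2 - 2K(x, v) with a Gaussian kernel K. This is a simplified sketch, not GKFCM as published: in particular, the center update here is the plain FCM one rather than the kernel-weighted version, and the data is synthetic rather than image pixels.

```python
import numpy as np

def kernel_fcm(X, k=2, m=2.0, sigma=1.0, iters=50, seed=13):
    rng = np.random.default_rng(seed)
    n = len(X)
    V = X[rng.choice(n, k, replace=False)].astype(float)  # centers start at data points
    for _ in range(iters):
        K = np.exp(-np.linalg.norm(X[None] - V[:, None], axis=2) ** 2
                   / (2 * sigma ** 2))            # Gaussian kernel values K(x, v)
        d2 = np.maximum(2 - 2 * K, 1e-12)         # kernel-induced squared distance
        U = (1.0 / d2) ** (1.0 / (m - 1))
        U /= U.sum(axis=0)                        # fuzzy memberships, columns sum to 1
        Um = U ** m
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)   # plain FCM center update
    return U, V

rng = np.random.default_rng(14)
X = np.vstack([rng.normal(0, 0.2, (40, 2)), rng.normal(2, 0.2, (40, 2))])
U, V = kernel_fcm(X)
labels = U.argmax(axis=0)
print(V.round(2), np.bincount(labels, minlength=2))
```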
Possibility fuzzy c means clustering for expression invariant face recognition (IJCI JOURNAL)
Face, being the most natural method of identification for humans, is one of the most significant biometric modalities, and various methods to achieve efficient face recognition have been proposed. However, changes in the face owing to different expressions, pose, makeup, illumination, and age bring about marked variations in the facial image. These changes will inevitably occur; they can be controlled only to a certain degree, beyond which they are bound to happen and will affect the face, thereby adversely impacting the performance of any face recognition system. This paper proposes a strategy to improve the classification methodology in face recognition by using Possibilistic Fuzzy C-Means clustering (PFCM). This clustering technique was used for face recognition due to properties like outlier insensitivity, which make it a suitable candidate for designing such robust applications. PFCM is a hybridization of the Possibilistic C-Means (PCM) and Fuzzy C-Means (FCM) clustering algorithms. PFCM is a robust clustering technique and is especially significant for its noise insensitivity. It has also resolved the coincident-clusters problem faced by other clustering techniques. The technique can therefore be used to increase the overall robustness of a face recognition system, increase its invariance, and make it a reliably usable biometric modality.
Efficient Scalar Multiplication for Ate Based Pairing over KSS Curve of Embed... (Md. Al-Amin Khandaker Nipu)
Efficiency of next-generation pairing-based security protocols relies not only on faster pairing calculation but also on efficient scalar multiplication of higher-degree rational points. In this paper we propose a scalar multiplication technique in the context of Ate-based pairing with Kachisa-Schaefer-Scott (KSS) pairing-friendly curves with embedding degree k = 18 at the 192-bit security level. From the systematically obtained characteristic p, order r, and Frobenius trace t of the KSS curve, which are given by a certain integer z, also known as the mother parameter, we exploit the relation #E(Fp) = p + 1 - t mod r by applying the Frobenius mapping to rational points to enhance the scalar multiplication. In addition, we propose a z-adic representation of the scalar s. Combining the Frobenius mapping with a multi-scalar multiplication technique, we efficiently calculate the scalar multiplication by s. Our proposed method achieves 3 times or more faster scalar multiplication compared to the binary, sliding-window, and non-adjacent-form methods.
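For context, the baseline binary double-and-add method that the proposed technique is compared against can be sketched on a toy curve. The parameters below are illustrative, not a pairing-friendly KSS curve, and the Frobenius-mapping and z-adic machinery of the paper is not reproduced.

```python
# Toy short-Weierstrass curve y^2 = x^3 + a*x + b over F_p with the
# baseline binary double-and-add scalar multiplication.
p, a, b = 97, 2, 3
O = None  # point at infinity (group identity)

def inv(x):
    # modular inverse via Fermat's little theorem (p is prime)
    return pow(x, p - 2, p)

def add(P, Q):
    # standard affine point addition/doubling
    if P is O:
        return Q
    if Q is O:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O                                  # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + a) * inv(2 * y1) % p
    else:
        lam = (y2 - y1) * inv((x2 - x1) % p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def scalar_mul(s, P):
    # Binary double-and-add: scan the bits of s from most significant.
    R = O
    for bit in bin(s)[2:]:
        R = add(R, R)            # double
        if bit == "1":
            R = add(R, P)        # add
    return R

P = (3, 6)   # on the curve: 6^2 = 36 and 3^3 + 2*3 + 3 = 36 (mod 97)
assert (P[1] ** 2 - (P[0] ** 3 + a * P[0] + b)) % p == 0
print(scalar_mul(3, P))
```

Double-and-add costs one doubling per bit of s plus one addition per set bit; the paper's Frobenius and z-adic decomposition techniques reduce the effective scalar length further.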
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
A NOVEL DATA DICTIONARY LEARNING FOR LEAF RECOGNITIONsipij
Automatic leaf recognition via image processing has been greatly important for a number of professionals, such as botanical taxonomic, environmental protectors, and foresters. Learn an over-complete leaf dictionary is an essential step for leaf image recognition. Big leaf images dimensions and training images number is facing of fast and complete data leaves dictionary. In this work an efficient approach applies to construct over-complete data leaves dictionary to set of big images diminutions based on sparse representation. In the proposed method a new cropped-contour method has used to crop the training image. The experiments are testing using correlation between the sparse representation and data dictionary and with focus on the computing time.
This paper compares two different techniques of iris recognition and explains the steps of extracting iris from eye image palette formation and conditioning of palette for matching. The focus of the paper is in finding the suitable method for iris recognition on the basis of recognition time, recognition rate, false detection rate, conditioning time, algorithm complexity, bulk detection, database handling. In this paper comparison has been performed two methods feature based and phase based recognition.
FACE RECOGNITION USING DIFFERENT LOCAL FEATURES WITH DIFFERENT DISTANCE TECHN...IJCSEIT Journal
A face recognition system using different local features with different distance measures is proposed in this
paper. Proposed method is fast and gives accurate detection. Feature vector is based on Eigen values,
Eigen vectors, and diagonal vectors of sub images. Images are partitioned into sub images to detect local
features. Sub partitions are rearranged into vertically and horizontally matrices. Eigen values, Eigenvector
and diagonal vectors are computed for these matrices. Global feature vector is generated for face
recognition. Experiments are performed on benchmark face YALE database. Results indicate that the
proposed method gives better recognition performance in terms of average recognized rate and retrieval
time compared to the existing methods.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Data Analytics on Solar Energy Using HadoopIJMERJOURNAL
ABSRACT: Missing data is one of the major issues in data miningand pattern recognition. The knowledge contains in attributeswith missing data values are important in improvingregression correlationprocessofanorganization.Thelearningprocesson each instance is necessary as it may containsome exceptional knowledge. There are various methods tohandle missing data in regression correlation. Analysis of photovoltaic cell, Sunlight striking on different geographical location to know the defective or connectionless photovoltaic cell plate. we mainly aims To showcasing the energy produced at different geographical location and to find the defective plate. And also analysis the data sets of energy produced along with current weather in particular area to know the status of the photovoltaic plates. In this project, we used Hadoop Map-Reduce framework to analyze the solar energy datasets.
A Binary Bat Inspired Algorithm for the Classification of Breast Cancer Data ijscai
Advancement in information and technology has made a major impact on medical science where the
researchers come up with new ideas for improving the classification rate of various diseases. Breast cancer
is one such disease killing large number of people around the world. Diagnosing the disease at its earliest
instance makes a huge impact on its treatment. The authors propose a Binary Bat Algorithm (BBA) based
Feedforward Neural Network (FNN) hybrid model, where the advantages of BBA and efficiency of FNN is
exploited for the classification of three benchmark breast cancer datasets into malignant and benign cases.
Here BBA is used to generate a V-shaped hyperbolic tangent function for training the network and a fitness
function is used for error minimization. FNNBBA based classification produces 92.61% accuracy for
training data and 89.95% for testing
META-HEURISTICS BASED ARF OPTIMIZATION FOR IMAGE RETRIEVALIJCSEIT Journal
The proposed approach avoids the semantic gap in image retrieval by combining automatic relevance
feedback and a modified stochastic algorithm. A visual feature database is constructed from the image
database, using combined feature vector. Very few fast-computable features are included in this step. The
user selects the query image, and based on that, the system ranks the whole dataset. The nearest images are
retrieved and the first automatic relevance feedback is generated. The combined similarity of textual and
visual feature space using Latent Semantic Indexing is evaluated and the images are labelled as relevant or
irrelevant. The feedback drives a feature re-weighting process and is routed to the particle swarm
optimizer. Instead of classical swarm update approach, the swarm is split, for each swarm to perform the
search in parallel, thereby increasing the performance of the system. It provides a powerful optimization
tool and an effective space exploration mechanism. The proposed approach aims to achieve the following
goals without any human interaction - to cluster relevant images using meta-heuristics and to dynamically
modify the feature space by feeding automatic relevance feedback.
Sparse representation based classification of MR images of brain for Alzheime... (eSAT Publishing House)
IJRET : International Journal of Research in Engineering and Technology is an international peer-reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
A Genetic Algorithm on Optimization Test Functions (IJMER JOURNAL)
ABSTRACT: Genetic Algorithms (GAs) have become increasingly useful over the years for solving combinatorial problems. Though they are generally accepted to be good performers among metaheuristic algorithms, most works have concentrated on the application of GAs rather than on their theoretical justification. In this paper, we examine and justify the suitability of Genetic Algorithms for solving complex, multi-variable and multi-modal optimization problems. To achieve this, a simple Genetic Algorithm was used to solve four standard complicated optimization test functions, namely the Rosenbrock, Schwefel, Rastrigin and Shubert functions. These functions are benchmarks to test the quality of an optimization procedure in reaching a global optimum. We show that the method has quick convergence to the global optima and that the optimal values for the Rosenbrock, Rastrigin, Schwefel and Shubert functions are zero (0), zero (0), -418.9829 and -14.5080, respectively.
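The Rastrigin benchmark and a minimal real-coded GA of the kind described can be sketched as follows; the operators here (tournament selection, blend crossover, Gaussian mutation) are common textbook choices and not necessarily the ones used in the paper.

```python
import math
import random

def rastrigin(x):
    """Rastrigin test function; global minimum f(0, ..., 0) = 0."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

def simple_ga(fitness, dim=2, pop_size=40, gens=200, bound=5.12, seed=1):
    """Minimal real-coded GA sketch: tournament selection, blend crossover,
    Gaussian mutation, elitism. Not a tuned implementation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-bound, bound) for _ in range(dim)] for _ in range(pop_size)]
    best = min(pop, key=fitness)
    for _ in range(gens):
        new_pop = [best[:]]                          # elitism: keep the best
        while len(new_pop) < pop_size:
            # two tournament winners become parents
            a, b = (min(rng.sample(pop, 3), key=fitness) for _ in range(2))
            child = [ai + rng.random() * (bi - ai) for ai, bi in zip(a, b)]
            if rng.random() < 0.2:                   # Gaussian mutation
                i = rng.randrange(dim)
                child[i] = max(-bound, min(bound, child[i] + rng.gauss(0, 0.3)))
            new_pop.append(child)
        pop = new_pop
        best = min(pop, key=fitness)
    return best, fitness(best)
```

Running `simple_ga(rastrigin)` drives the population toward the global minimum at the origin.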
NEURAL NETWORK BASED SUPERVISED SELF ORGANIZING MAPS FOR FACE RECOGNITION (ijsc)
The word biometrics refers to the use of physiological or biological characteristics of human to recognize
and verify the identity of an individual. Face is one of the human biometrics for passive identification with
uniqueness and stability. In this manuscript we present a new face-based biometric system built on neural-network
supervised self-organizing maps (SOM). We name our method SOM-F. We show that the
proposed SOM-F method improves the performance and robustness of recognition. We apply the proposed
method to a variety of datasets and show the results.
Estimating project development effort using clustered regression approach (csandit)
Due to the intangible nature of "software", accurate and reliable software effort estimation is a
challenge in the software industry. Very accurate estimates of software development effort are
unlikely because of the inherent uncertainty in software development projects and the
complex and dynamic interaction of factors that impact software development. Heterogeneity
exists in the software engineering datasets because data is made available from diverse sources.
This can be reduced by defining certain relationship between the data values by classifying
them into different clusters. This study focuses on how the combination of clustering and
regression techniques can reduce the potential problems in effectiveness of predictive efficiency
due to heterogeneity of the data. Using a clustered approach creates the subsets of data having
a degree of homogeneity that enhances prediction accuracy. It was also observed in this study
that ridge regression performs better than other regression techniques used in the analysis.
Hierarchical Gaussian Scale-Space on Androgenic Hair Pattern Recognition (TELKOMNIKA JOURNAL)
Androgenic hair patterns have been considered a new biometric trait since 2014. Research on improving
the performance of androgenic hair pattern recognition systems has developed to address cases where a
more apparent biometric trait, such as the face, is hidden from sight. The recognition system was built
with a hierarchical Gaussian scale-space using 4 octaves and 3 levels in each octave. The system also
implemented histogram equalization to adjust image intensity. We analyzed 400 images of androgenic
hair in the database using 2-fold and 10-fold cross validation, with Euclidean distance for
classification. The experimental results showed that our proposed method gave better performance than
previous work that used Haar wavelet transformation and principal component analysis as the main
methods. The best recognition precision was 94.23%, obtained from the base octave with the third level
using histogram equalization and 10-fold cross validation.
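The Euclidean-distance classification step used above can be sketched as nearest-template matching; the template dictionary and feature vectors here are illustrative, not the paper's actual hair-pattern features.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_template(features, templates):
    """Assign the label of the enrolled template that is closest in
    Euclidean distance to the query feature vector."""
    return min(templates, key=lambda label: euclidean(features, templates[label]))
```

In k-fold cross validation, the enrolled templates come from the training folds and the queries from the held-out fold.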
Geometric Correction for Braille Document Images (csandit)
Image processing is an important research area in computer vision. Clustering is an unsupervised
learning technique that can also be used for image segmentation. Many methods exist for image
segmentation, which plays an important role in image analysis; it is one of the first and most
important tasks in image analysis and computer vision. The proposed system presents a variation of
the fuzzy c-means algorithm that provides image clustering. The kernel fuzzy c-means clustering
algorithm (KFCM) is derived from the fuzzy c-means clustering algorithm (FCM) and improves accuracy
significantly compared with the classical fuzzy c-means algorithm. The new algorithm, called the
Gaussian kernel based fuzzy c-means clustering algorithm (GKFCM), is characterized by a fuzzy
clustering approach that aims to guarantee noise insensitivity and preservation of image detail.
The objective of the work is to cluster the low-intensity inhomogeneity areas of noisy images using
the clustering method and to segment those portions separately using a content level set approach.
The purpose of designing this system is to produce better segmentation results for images corrupted
by noise, so that it can be useful in fields such as medical image analysis, including tumor
detection, study of anatomical structure, and treatment planning.
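The Gaussian kernel and a single KFCM/GKFCM-style membership update can be sketched as follows, assuming the standard kernel-space squared distance d² = 2(1 − K) and fuzzifier m; this sketches the membership formula only, not the full iterative algorithm.

```python
import math

def gaussian_kernel(x, v, sigma=1.0):
    """Gaussian kernel K(x, v) = exp(-||x - v||^2 / sigma^2)."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, v))
    return math.exp(-d2 / sigma ** 2)

def kfcm_memberships(point, centers, m=2.0, sigma=1.0):
    """One kernel-FCM membership update for a single point: in kernel space
    the squared distance to a center is 2 * (1 - K), and memberships then
    follow the usual fuzzy c-means form."""
    d2 = [2.0 * (1.0 - gaussian_kernel(point, c, sigma)) for c in centers]
    if any(d == 0.0 for d in d2):                 # point coincides with a center
        return [1.0 if d == 0.0 else 0.0 for d in d2]
    inv = [(1.0 / d) ** (1.0 / (m - 1.0)) for d in d2]
    s = sum(inv)
    return [v / s for v in inv]
```

Memberships always sum to one, and a point near a center receives the larger share.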
Possibility Fuzzy C-Means Clustering for Expression Invariant Face Recognition (IJCI JOURNAL)
Face, being the most natural method of identification for humans, is one of the most significant
biometric modalities, and various methods to achieve efficient face recognition have been proposed.
However, changes in the face owing to different expressions, pose, makeup, illumination and age
bring about marked variations in the facial image. These changes inevitably occur and can be
controlled only to a certain degree, beyond which they are bound to happen and will affect the face,
thereby adversely impacting the performance of any face recognition system. This paper proposes a
strategy to improve the classification methodology in face recognition by using Possibility Fuzzy
C-Means Clustering (PFCM). This clustering technique was chosen for face recognition due to
properties such as outlier insensitivity, which make it a suitable candidate for designing robust
applications. PFCM is a hybridization of the Possibilistic C-Means (PCM) and Fuzzy C-Means (FCM)
clustering algorithms. PFCM is a robust clustering technique and is especially significant for its
noise insensitivity. It has also resolved the coincident-clusters problem faced by other clustering
techniques. The technique can therefore also be used to increase the overall robustness of a face
recognition system, thereby increasing its invariance and making it a reliably usable biometric
modality.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Efficient Scalar Multiplication for Ate Based Pairing over KSS Curve of Embed... (Md. Al-Amin Khandaker Nipu)
Efficiency of next-generation pairing-based security protocols relies not only on faster pairing calculation but also on efficient scalar multiplication of higher-degree rational points. In this paper we propose a scalar multiplication technique in the context of Ate-based pairing with Kachisa-Schaefer-Scott (KSS) pairing-friendly curves with embedding degree k = 18 at the 192-bit security level. From the systematically obtained characteristic p, order r and Frobenius trace t of the KSS curve, which are given by a certain integer z (also known as the mother parameter), we exploit the relation #E(Fp) = p + 1 − t mod r by applying the Frobenius mapping to rational points to enhance scalar multiplication. In addition we propose a z-adic representation of the scalar s. Combining the Frobenius mapping with a multi-scalar multiplication technique, we efficiently calculate scalar multiplication by s. Our proposed method achieves scalar multiplication three times or more faster than the binary, sliding-window and non-adjacent form methods.
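The binary double-and-add method that the proposal is benchmarked against can be sketched on a small textbook curve (y² = x³ + 2x + 2 over F₁₇ with generator (5, 1) of order 19); this toy group merely stands in for the much larger KSS curve.

```python
# Toy short-Weierstrass curve y^2 = x^3 + 2x + 2 over F_17 (textbook example);
# the point at infinity is represented as None.
P_MOD, A = 17, 2

def ec_add(p, q):
    """Affine point addition/doubling on the toy curve."""
    if p is None:
        return q
    if q is None:
        return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                                  # p + (-p) = infinity
    if p == q:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def scalar_mult(s, p):
    """Left-to-right binary double-and-add: O(log s) curve operations,
    the baseline the proposed z-adic/Frobenius method is compared to."""
    result = None
    for bit in bin(s)[2:]:
        result = ec_add(result, result)              # double
        if bit == "1":
            result = ec_add(result, p)               # add
    return result
```

Since the subgroup has order 19, multiplying the generator by 19 returns the point at infinity.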
High speed and energy-efficient carry skip adder operating under a wide range... (LogicMindtech Nologies)
High performance pipelined architecture of elliptic curve scalar multiplicati... (Ieee Xpert)
High performance pipelined architecture of elliptic curve scalar multiplication over GF(2^m).
Ajay Kumar, Ph.D. research scholar at National Institute of Technology. Email: ajaymodaliger@gmail.com
In this presentation I have explained how to reduce the computational time complexity of the Discrete Fourier Transform (DFT) from O(n^2) to O(n log n) through the radix-2 FFT algorithm. In this work I have also introduced how the radix-2 FFT can be used in encrypted signal processing applications by considering the homomorphic properties (RSA) of the Paillier cryptosystem.
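The radix-2 reduction from O(n²) to O(n log n) can be sketched with the classic recursive Cooley-Tukey formulation (plain Python, power-of-two lengths only):

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two.
    Splitting into even/odd halves reduces the O(n^2) DFT to O(n log n)."""
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    twiddle = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return [even[k] + twiddle[k] for k in range(n // 2)] + \
           [even[k] - twiddle[k] for k in range(n // 2)]
```

For a constant input of length 4 the spectrum concentrates all energy in the DC bin; an impulse input yields a flat spectrum.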
Electroencephalography (EEG) based brain-computer interfaces (BCI) need efficient algorithms to extract discriminative features from raw EEG signals. Selecting optimal spatial-spectral features is key to high-performance motor imagery (MI) classification, one of the main topics in EEG-based brain-computer interfaces. Some novel methods first formulate feature selection as maximizing the mutual information between class labels and features, and then use an efficient pattern feature extraction framework to select an effective feature set. The results show the classification accuracy obtained, which is compared with other existing algorithms.
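The mutual-information criterion between class labels and a feature can be sketched directly from its definition; the feature is assumed to have already been discretised (binned), which is an assumption this sketch makes about the preprocessing.

```python
import math
from collections import Counter

def mutual_information(feature, labels):
    """I(F; C) = sum p(f, c) * log2(p(f, c) / (p(f) p(c))) for a discretised
    feature and class labels -- the quantity maximised when selecting
    discriminative features."""
    n = len(feature)
    pf, pc = Counter(feature), Counter(labels)
    pfc = Counter(zip(feature, labels))
    return sum((k / n) * math.log2((k / n) / ((pf[f] / n) * (pc[c] / n)))
               for (f, c), k in pfc.items())
```

A feature that perfectly predicts a binary label carries exactly one bit of information; an independent feature carries none.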
Multilinear Kernel Mapping for Feature Dimension Reduction in Content Based M... (ijma)
In the process of content-based multimedia retrieval, multimedia information is processed in order to
obtain descriptive features. A descriptive representation of features results in a huge feature count,
which in turn causes processing overhead. To reduce this overhead, various approaches have been used
for dimensionality reduction, among which PCA and LDA are the most widely used. However, these
methods do not reflect the significance of feature content in terms of the inter-relations among all
dataset features. To achieve dimension reduction based on histogram transformation, features with low
significance can be eliminated. In this paper, we propose a feature dimension reduction approach
based on multi-linear kernel (MLK) modeling. A benchmark dataset is used for the experimental work,
and the proposed method is observed to improve on the conventional system.
Electroencephalography Signal Classification based on Sub-Band Common Spatial P... (IOSRJVSP)
A brain-computer interface (BCI) is a communication pathway between the brain and an external device; it translates human thought into commands to control external devices. Electroencephalography (EEG) is a cost-effective and easy way to implement a BCI. This paper presents a novel method for classifying EEG during motor imagery by the combination of common spatial patterns (CSP) and linear discriminant analysis (LDA). In the proposed method, the EEG signal is bandpass-filtered into multiple frequency bands, CSP features are extracted from each of these bands, and the LDA classifier is subsequently used to classify the CSP features. In this paper, experimental results are presented on a publicly available BCI competition dataset and the performance is compared with existing approaches. The experimental results show that the proposed method yields superior cross-validation accuracies compared to prevailing methods.
Energy Aware Model for Sensor Network: A Nature Inspired Algorithm Approach (ijdms)
In this paper we propose an energy-aware model for sensor networks. In our approach, we first
use the DBSCAN clustering technique to exploit the spatiotemporal correlation among the sensors;
we then identify a subset of sensors, called representative sensors, which represent the entire
network state. Finally, we use nature-inspired algorithms such as Ant Colony Optimization, Bee
Colony Optimization and Simulated Annealing to find the optimal transmission path for data
transmission. We have conducted our experiment on the publicly available Intel Berkeley Research
Lab dataset, and the experimental results show that energy consumption can be reduced.
Blind Image Separation Using Forward Difference Method (FDM) (sipij)
In this paper, blind image separation is performed by exploiting the property of sparseness to represent images. A new sparse representation called the forward difference method is proposed. It is known that most of the independent component analysis (ICA) basis functions extracted from images are sparse and give an unreliable sparseness measure. In the proposed method, the image mixture is first transformed to sparse images. These images are divided into blocks, and for each block the ℓ0-norm sparseness measure is applied. The block with the highest sparseness is used to determine the separation matrix. The efficiency of the proposed method is compared with other sparse representation functions.
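The forward-difference transform and an ℓ0-style sparseness measure can be sketched as follows; the 1-D row version shown here is a simplification of the 2-D image case.

```python
def forward_difference(row):
    """Forward difference d[i] = x[i+1] - x[i]; flat (constant) regions of
    an image row map to zeros, giving a sparse representation."""
    return [row[i + 1] - row[i] for i in range(len(row) - 1)]

def sparseness(block):
    """Fraction of zero coefficients (an l0-style sparseness measure).
    In the method above, the block scoring highest is the one used to
    estimate the separation matrix."""
    return sum(1 for v in block if v == 0) / len(block)
```

A piecewise-constant signal becomes mostly zeros after differencing, which is exactly the property the method exploits.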
A SIMPLE APPROACH FOR RELATIVELY AUTOMATED HIPPOCAMPUS SEGMENTATION FROM SAGI... (ijbbjournal)
In this paper, we present a relatively automated method to segment the hippocampus in T1-weighted
magnetic resonance images that can be acquired in the routine clinical setting. This paper describes
a simple approach for segmenting the hippocampus automatically from the sagittal view of brain MRI.
Large datasets of structural MR images are collected to quantitatively analyze the relationships
between brain anatomy, disease progression, treatment regimens, and genetic influences upon brain
structure. This method segments the hippocampus without any human intervention for the few slices at
the mid position of the total volume. Experimental results using this method show good agreement
with the manually segmented gold standard. These results may support clinical studies of memory and
neurodegenerative disease.
A Comparative Study of Machine Learning Algorithms for EEG Signal Classification (sipij)
In this paper, different machine learning algorithms such as Linear Discriminant Analysis, Support vector
machine (SVM), Multi-layer perceptron, Random forest, K-nearest neighbour, and Autoencoder with SVM
have been compared. This comparison was conducted to seek a robust method that would produce good
classification accuracy. To this end, a robust method of classifying raw Electroencephalography (EEG)
signals associated with imagined movement of the right hand and relaxation state, namely Autoencoder
with SVM has been proposed. The EEG dataset used in this research was created by the University of
Tubingen, Germany. The best classification accuracy achieved was 70.4% with SVM through feature
engineering. However, our proposed method of autoencoder in combination with SVM produced a similar
accuracy of 65% without using any feature engineering technique. This research shows that this system of
classification of motor movements can be used in a Brain-Computer Interface system (BCI) to mentally
control a robotic device or an exoskeleton.
A comparison of SIFT, PCA-SIFT and SURF (CSCJournals)
This paper compares three robust feature detection methods: Scale Invariant Feature Transform (SIFT), Principal Component Analysis (PCA)-SIFT, and Speeded Up Robust Features (SURF). Lowe presented SIFT [1], which was successfully used in recognition, stitching and many other applications because of its robustness. Yan Ke [2] proposed a variant of SIFT that uses PCA to normalize the gradient patch instead of a histogram. H. Bay [3] presented SURF, a faster method that uses a Fast-Hessian detector. The performance of the three methods is compared for scale changes, rotation, blur, illumination changes and affine transformations, using repeatability as the evaluation measure. Additionally, RANSAC is used to reject inconsistent matches [4]. SIFT is stable in most situations except under rotation and illumination changes. SURF is the fastest, with performance comparable to SIFT, while PCA-SIFT shows its advantages under rotation, blur and illumination changes.
Power System State Estimation - A Review (IDES Editor)
The aim of this article is to provide a comprehensive
survey on power system state estimation techniques. The
algorithms used for finding the system states under both static
and dynamic state estimations are discussed in brief. The
authors are of the opinion that the scope for pursuing research in the
area of state estimation with PMU and SCADA measurements
is state of the art and timely.
Artificial Intelligence Technique based Reactive Power Planning Incorporating... (IDES Editor)
Reactive power planning is a major concern in the operation and control of power systems. This
paper compares the effectiveness of Evolutionary Programming (EP) and New Improved Differential
Evolution (NIMDE) for solving the Reactive Power Planning (RPP) problem incorporating FACTS
controllers such as the Static VAR Compensator (SVC), Thyristor Controlled Series Capacitor (TCSC)
and Unified Power Flow Controller (UPFC), considering voltage stability. With the help of the Fast
Voltage Stability Index (FVSI), the critical lines and buses are identified to install the FACTS
controllers. The optimal settings of the control variables of the generator voltages, transformer
tap settings, and the allocation and parameter settings of the SVC, TCSC and UPFC are considered
for reactive power planning. The test and validation of the proposed algorithm are conducted on
the IEEE 30-bus system and a 72-bus Indian system. Simulation results show that the UPFC gives
better results than the SVC and TCSC, and that the FACTS controllers reduce the system losses.
Design and Performance Analysis of Genetic based PID-PSS with SVC in a Multi-... (IDES Editor)
Damping of power system oscillations with the help
of the proposed optimal Proportional Integral Derivative
Power System Stabilizer (PID-PSS) and Static Var Compensator
(SVC)-based controllers is thoroughly investigated in this
paper. This study presents robust tuning of PID-PSS and
SVC-based controllers using Genetic Algorithms (GA) in
multi machine power systems by considering detailed model
of the generators (model 1.1). The effectiveness of FACTS-based
controllers in general, and SVC-based controllers in
particular, depends upon their proper location. Modal
controllability and observability are used to locate the SVC-based
controller. The performance of the proposed controllers is
compared with conventional lead-lag power system stabilizer
(CPSS) and demonstrated on 10 machines, 39 bus New England
test system. Simulation studies show that the proposed genetic
based PID-PSS with SVC based controller provides better
performance.
Optimal Placement of DG for Loss Reduction and Voltage Sag Mitigation in Radi... (IDES Editor)
The need to operate the power system economically
and with optimum voltage levels has led to an increase
in interest in Distributed
Generation. In order to reduce the power losses and to improve
the voltage in the distribution system, distributed generators
(DGs) are connected to load bus. To reduce the total power
losses in the system, the most important process is to identify
the proper locations and sizes of DGs. This paper presents a
new methodology using a population-based metaheuristic
approach, namely the Artificial Bee Colony (ABC) algorithm, for
the placement of Distributed Generators (DG) in the radial
distribution systems to reduce the real power losses and to
improve the voltage profile and mitigate voltage sags. Power
loss reduction is an important factor for utility companies because
it is directly proportional to company benefits in a
competitive electricity market, while meeting better power
quality standards is also important, as it has a vital effect on
customer orientation. In this paper an ABC algorithm is
developed to attain these goals together. In order to evaluate
sag mitigation capability of the proposed algorithm, voltage
in voltage-sensitive buses is investigated. An existing 20 kV
network has been chosen as the test network and results are
compared with the proposed method in the radial distribution
system.
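A minimal sketch of the ABC metaheuristic used above, with its employed, onlooker and scout bee phases, applied to a generic objective; the parameters (colony size, trial limit) are illustrative, and the fitness transform assumes a non-negative objective.

```python
import random

def abc_minimise(f, dim, bounds, colony=20, limit=10, cycles=100, seed=7):
    """Minimal artificial bee colony sketch: employed bees perturb their
    food source, onlookers pick sources by fitness-proportional roulette,
    and scouts replace sources exhausted past `limit` failed trials."""
    rng = random.Random(seed)
    lo, hi = bounds
    foods = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(colony)]
    trials = [0] * colony
    best = min(foods, key=f)

    def try_neighbour(i):
        k = rng.randrange(colony)
        while k == i:                                # partner source k != i
            k = rng.randrange(colony)
        j = rng.randrange(dim)
        cand = foods[i][:]
        cand[j] += rng.uniform(-1, 1) * (foods[i][j] - foods[k][j])
        cand[j] = max(lo, min(hi, cand[j]))
        if f(cand) < f(foods[i]):                    # greedy selection
            foods[i], trials[i] = cand, 0
        else:
            trials[i] += 1

    for _ in range(cycles):
        for i in range(colony):                      # employed bee phase
            try_neighbour(i)
        fits = [1.0 / (1.0 + f(x)) for x in foods]   # assumes f(x) >= 0
        for _ in range(colony):                      # onlooker bee phase
            i = rng.choices(range(colony), weights=fits)[0]
            try_neighbour(i)
        for i in range(colony):                      # scout bee phase
            if trials[i] > limit:
                foods[i] = [rng.uniform(lo, hi) for _ in range(dim)]
                trials[i] = 0
        best = min(foods + [best], key=f)
    return best, f(best)
```

On a simple sphere objective the colony converges close to the origin within a few hundred evaluations.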
Line Losses in the 14-Bus Power System Network using UPFC (IDES Editor)
Controlling power flow in modern power systems
can be made more flexible by the use of recent developments
in power electronic and computing control technology. The
Unified Power Flow Controller (UPFC) is a Flexible AC
transmission system (FACTS) device that can control all the
three system variables namely line reactance, magnitude and
phase angle difference of voltage across the line. The UPFC
provides a promising means to control power flow in modern
power systems. Essentially the performance depends on proper
control setting achievable through a power flow analysis
program. This paper presents a reliable method to meet the
requirements by developing a Newton-Raphson based load
flow calculation through which control settings of UPFC can
be determined for the pre-specified power flow between the
lines. The proposed method keeps Newton-Raphson Load Flow
(NRLF) algorithm intact and needs little modification in the
Jacobian matrix. A MATLAB program has been developed to
calculate the control settings of UPFC and the power flow
between the lines after the load flow is converged. Case studies
have been performed on IEEE 5-bus system and 14-bus system
to show that the proposed method is effective. These studies
indicate that the method maintains the basic NRLF properties
such as fast computational speed, high degree of accuracy and
good convergence rate.
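The core Newton-Raphson iteration underlying such a load-flow program can be sketched in one dimension; a real NRLF solves the multi-dimensional mismatch equations with a Jacobian matrix, but the update rule is the same in spirit.

```python
def newton_raphson(residual, derivative, x0, tol=1e-8, max_iter=50):
    """Scalar Newton-Raphson: x <- x - f(x)/f'(x), iterated until the
    residual (the 'mismatch' in load-flow terms) is within tolerance."""
    x = x0
    for _ in range(max_iter):
        fx = residual(x)
        if abs(fx) < tol:
            return x
        x -= fx / derivative(x)
    return x
```

For example, solving x² − 2 = 0 from x₀ = 1 converges quadratically to √2 in a handful of iterations, which is the fast convergence property the abstract attributes to NRLF.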
Study of Structural Behaviour of Gravity Dam with Various Features of Gallery... (IDES Editor)
The size and shape of an opening in a dam cause stress
concentration, as well as stress variation in the rest of the
dam cross section. The gravity method of analysis does not
consider the size of the opening or the elastic properties of
the dam material. The objective of this study is therefore a
Finite Element Method analysis that considers the size of the
opening, the elastic properties of the material, and the stress
distribution due to geometric discontinuity in the cross section of the dam.
Stress concentration inside the dam increases with the opening,
which can result in failure of the dam; hence it is necessary
to analyze large openings inside the dam. The analysis is
carried out by keeping the percentage area of the opening
constant while varying its size and shape. For this purpose
a section of the Koyna Dam is considered. The dam is defined
as a plane-strain element in FEM, based on its geometry and
loading conditions, which led us to carry out a 2D plane-strain
analysis. The results obtained are then compared mutually to
find the most efficient way of providing a large opening in
the gravity dam.
Assessing Uncertainty of Pushover Analysis to Geometric Modeling (IDES Editor)
Pushover analysis is a popular tool for seismic performance
evaluation of existing and new structures. It is a nonlinear
static procedure wherein monotonically increasing loads are
applied to the structure until it is unable to resist further
load. The strength of concrete and steel adopted for the
analysis may not be the same in the real structure once
constructed, and pushover analysis results are very sensitive
to the material model, the geometric model, the location of
plastic hinges and, in general, to the procedure followed by
the analyst. In this paper an attempt has been made to assess
uncertainty in pushover analysis results by considering user
defined hinges and frame modeled as bare frame and frame
with slab modeled as rigid diaphragm and results compared
with experimental observations. The uncertain parameters
considered include the strength of concrete, the strength of steel
and the cover to the reinforcement, which are randomly generated
and incorporated into the analysis. The results are then
compared with experimental observations.
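The random generation of the uncertain material parameters can be sketched as follows. This is a minimal illustration, assuming normal distributions; the mean values and coefficients of variation below are placeholders, not values from the paper.

```python
import random

def sample_parameters(n, fck_mean=25.0, fck_cv=0.15,
                      fy_mean=415.0, fy_cv=0.10,
                      cover_mean=25.0, cover_cv=0.20, seed=0):
    """Generate n random sets of the three uncertain parameters
    (concrete strength in MPa, steel strength in MPa, cover in mm),
    one set per pushover run. All numeric defaults are assumptions."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        samples.append({
            "concrete_strength": rng.gauss(fck_mean, fck_mean * fck_cv),
            "steel_strength": rng.gauss(fy_mean, fy_mean * fy_cv),
            "cover": rng.gauss(cover_mean, cover_mean * cover_cv),
        })
    return samples

parameter_sets = sample_parameters(100)
```

Each parameter set would then drive one pushover run, and the spread of the resulting capacity curves quantifies the uncertainty.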
Secure Multi-Party Negotiation: An Analysis for Electronic Payments in Mobile... (IDES Editor)
This paper presents an auction-based framework for
secure multi-party decision protocols. In addition to very
lightweight implementations, the main focus is on
synchronizing security features to avoid manipulation of
agreements and to reduce user traffic. The paper shows how
different auction protocols built on top of this framework can
be run collaboratively from mobile devices. It presents the
negotiation between the auctioneer and the bidders, and this
negotiation shows that the multi-party security offered is
stronger than that of the existing system.
Selfish Node Isolation & Incentivation using Progressive Thresholds (IDES Editor)
The problems associated with selfish nodes in
MANETs are addressed by a collaborative watchdog approach,
which reduces the detection time for selfish nodes and thereby
improves the performance and accuracy of watchdogs [1].
Related works make use of credit-based systems, reputation-based
mechanisms, and pathrater and watchdog mechanisms
to detect such selfish nodes. In this paper we follow a
collaborative watchdog approach that reduces the detection
time for selfish nodes and also removes such nodes based on
progressively assessed thresholds. The thresholds give a node
a chance to stop misbehaving before it is permanently deleted
from the network: the node passes through several isolation
stages before it is permanently removed. A modified version
of the AODV protocol is used, which allows the simulation of
selfish nodes in NS2 by adding or modifying log files in the
protocol.
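The progressive-threshold idea described above can be sketched as a small state machine. The thresholds (3, 6, 9 reported refusals) and the stage names are illustrative assumptions, not values from the paper.

```python
class WatchdogNode:
    """Sketch of progressive-threshold isolation for a selfish node.
    Thresholds and stage names below are assumed for illustration."""

    STAGES = [(3, "warned"), (6, "temporarily_isolated"), (9, "removed")]

    def __init__(self):
        self.missed_forwards = 0
        self.status = "trusted"

    def report_missed_forward(self):
        """A collaborating watchdog reports one refused forward;
        the node advances through the isolation stages."""
        self.missed_forwards += 1
        for threshold, stage in self.STAGES:
            if self.missed_forwards >= threshold:
                self.status = stage
        return self.status

    def report_good_forward(self):
        """Cooperation before permanent removal redeems the node,
        giving it a chance to stop misbehaving."""
        if self.status != "removed" and self.missed_forwards > 0:
            self.missed_forwards -= 1
            self.status = "trusted"
            for threshold, stage in self.STAGES:
                if self.missed_forwards >= threshold:
                    self.status = stage
        return self.status
```

Only a node that keeps misbehaving past the final threshold is permanently deleted from the network.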
Various OSI Layer Attacks and Countermeasures to Enhance the Performance of WS... (IDES Editor)
Wireless sensor networks are networks with no wired
infrastructure and a dynamic topology. In the OSI model each
layer is prone to various attacks, which degrade the performance
of the network. In this paper several attacks on four layers of
the OSI model are discussed, and a security mechanism is
described to prevent the network-layer wormhole attack. In a
wormhole attack, two or more malicious nodes create a covert
channel that attracts traffic towards itself by advertising a
low-latency link, and then begin dropping and replaying packets
on the multi-path route. This paper proposes a promiscuous-mode
method to detect and isolate the malicious node during a
wormhole attack, using the Ad hoc On-demand Distance Vector
(AODV) routing protocol with an omnidirectional antenna. In
the implemented methodology, nodes that are not participating
in multi-path routing generate an alarm message when a delay
is observed, and the malicious node is then detected and isolated
from the network. We also note that not only the same kinds of
attacks but also the same kinds of countermeasures can appear
in multiple layers; for example, misbehavior detection
techniques can be applied to almost all the layers we discuss.
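The delay-triggered alarm can be sketched as a simple threshold check run by the overhearing neighbors. The expected per-hop delay and the threshold factor below are assumed values, not parameters from the paper.

```python
def raise_wormhole_alarms(per_hop_delay_ms, expected_ms=5.0, factor=3.0):
    """Promiscuous-mode sketch: neighbors not on the multi-path route
    overhear forwarded packets and raise an alarm for any hop whose
    observed delay exceeds a multiple of the expected per-hop delay.
    Both expected_ms and factor are illustrative assumptions."""
    return [node for node, delay in sorted(per_hop_delay_ms.items())
            if delay > factor * expected_ms]

# A hop with a 40 ms delay against a 5 ms expectation is flagged.
suspects = raise_wormhole_alarms({"n1": 4.0, "n2": 40.0, "n3": 6.0})
```

Flagged nodes would then be excluded from route discovery in AODV.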
Responsive Parameter based AntiWorm Approach to Prevent Wormhole Attack in... (IDES Editor)
Recent advancements in wireless technology and its
widespread deployment have brought remarkable gains in
efficiency to the corporate, industrial, and military sectors.
The increasing popularity and usage of wireless technology is
creating a need for more secure wireless ad hoc networks. This
paper presents a new protocol that prevents wormhole attacks
on an ad hoc network. A few existing protocols detect wormhole
attacks, but they require highly specialized equipment not found
on most wireless devices. This paper develops a defense against
wormhole attacks, an anti-worm protocol based on responsive
parameters, that requires no significant amount of specialized
equipment, no tight clock synchronization, and no GPS
dependencies.
Cloud Security and Data Integrity with Client Accountability Framework (IDES Editor)
Cloud-based services provide efficient and seamless
ways to share data across the cloud. The fact that data owners
no longer possess their data makes it very difficult to assure
data confidentiality and to enable secure data sharing in the
cloud. Despite all its advantages, this remains a major
limitation that acts as a barrier to wider deployment of
cloud-based services. One possible way of ensuring trust in
this respect is the introduction of accountability into the cloud
computing scenario. The cloud framework requires the
promotion of distributed accountability for such a dynamic
environment [1]. Some works suggest an accountable
framework that ensures distributed accountability for data
sharing by generating only a log of data accesses, without any
embedded feedback mechanism for owner permission towards
data protection [2]. The proposed system is an enhanced client
accountability framework that adds client-side verification for
each access, for enhanced data security. The integrity of the
data residing at the cloud service provider is also maintained
through secure outsourcing. In addition, JAR (Java Archive)
files are authenticated to ensure file protection and to maintain
a safer environment for data sharing. Analysis of the various
functionalities of the framework demonstrates both its
accountability and its security features in an efficient manner.
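A tamper-evident access-log record of the kind such frameworks generate can be sketched as below. The field names and the hash-based integrity tag are assumptions for illustration; the paper's actual JAR-based mechanism is not reproduced here.

```python
import hashlib
import json
import time

def make_log_entry(user, action, payload, key):
    """Sketch of an access-log record with an integrity tag:
    the tag is a hash over the record and a secret key, so any
    later modification of the log entry is detectable."""
    record = {
        "user": user,
        "action": action,
        "time": time.time(),
        "payload_digest": hashlib.sha256(payload).hexdigest(),
    }
    body = json.dumps(record, sort_keys=True).encode()
    record["tag"] = hashlib.sha256(key + body).hexdigest()
    return record

def verify_log_entry(record, key):
    """Recompute the tag and compare; False means tampering."""
    rec = {k: v for k, v in record.items() if k != "tag"}
    body = json.dumps(rec, sort_keys=True).encode()
    return record["tag"] == hashlib.sha256(key + body).hexdigest()
```

A real deployment would use a proper HMAC and an append-only store; this only illustrates the accountability idea.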
Genetic Algorithm based Layered Detection and Defense of HTTP Botnet (IDES Editor)
An HTTP botnet uses the HTTP protocol to create
chains of bots, thereby compromising other systems. By using
the HTTP protocol on port 80, attacks can not only be hidden
but can also pass through the firewall undetected. DPR-based
detection leads to better analysis of botnet attacks [3]; however,
it provides only probabilistic detection of the attacker and is
also time-consuming and error-prone. This paper proposes a
genetic-algorithm-based layered approach for detecting as well
as preventing botnet attacks. The paper reviews a P2P firewall
implementation, which forms the basis of the filtering.
Performance is evaluated in terms of precision, F-value, and
probability. The layered approach reduces the computation
and overall time requirement [7], and the genetic algorithm
promises a low false positive rate.
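The precision and F-value used for the evaluation above are standard detection metrics and can be computed as follows (a generic sketch, not the paper's evaluation code):

```python
def precision_recall_f(tp, fp, fn):
    """Precision, recall, and F-value (harmonic mean) from counts of
    true positives, false positives, and false negatives."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_value = (2 * precision * recall / (precision + recall)
               if precision + recall else 0.0)
    return precision, recall, f_value

# e.g. 8 botnet flows caught, 2 benign flows misflagged, 2 missed
p, r, f = precision_recall_f(tp=8, fp=2, fn=2)  # 0.8, 0.8, 0.8
```

A low false positive rate shows up directly as high precision.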
Enhancing Data Storage Security in Cloud Computing Through Steganography (IDES Editor)
In cloud computing, data storage is a significant issue
because all data reside in a set of interconnected resource pools
and are accessed through virtual machines. Cloud computing
moves application software and databases to large data centers,
where the data are actually managed. As the resource pools
are situated in various corners of the world, the management
of data and services may not be fully trustworthy, so various
issues need to be addressed concerning the management,
service, privacy, and security of data. Privacy and security of
data are especially challenging. To ensure the privacy and
security of data at rest in cloud computing, we propose an
effective and novel approach that hides data within images,
following the concept of steganography. The main objective of
this paper is to prevent unauthorized users from accessing
data in cloud storage centers. The scheme stores data at cloud
data storage centers and retrieves it when needed.
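The basic idea of hiding data within images can be sketched with least-significant-bit (LSB) embedding, a common steganographic baseline. This is an assumed illustration; the paper's actual embedding scheme may differ.

```python
def embed(pixels, message):
    """Hide message bytes in the least significant bits of 8-bit
    pixel values, one bit per pixel (LSB steganography sketch)."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("cover image too small for message")
    stego = list(pixels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit  # overwrite only the LSB
    return stego

def extract(stego, n_bytes):
    """Recover n_bytes of hidden data from the pixel LSBs."""
    out = bytearray()
    for b in range(n_bytes):
        byte = 0
        for i in range(8):
            byte = (byte << 1) | (stego[b * 8 + i] & 1)
        out.append(byte)
    return bytes(out)
```

Because only the lowest bit of each pixel changes, the stego image is visually indistinguishable from the cover image.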
The main tasks of a Wireless Sensor Network
(WSN) are collecting data from its nodes and communicating
this data to the base station (BS). The protocols used for
communication among WSN nodes and between the WSN
and the BS must consider the resource constraints of the nodes:
battery energy, computational capability, and memory. WSN
applications involve unattended operation of the network over
an extended period of time, so efficient routing protocols need
to be adopted to extend the lifetime of a WSN. The proposed
low-power routing protocol, based on a tree-based network
structure, reliably forwards the measured data towards the
BS using TDMA. An energy consumption analysis of a WSN
using this protocol is also carried out; the network is found to
be energy efficient, with an average duty cycle of 0.7% for the
WSN nodes. The OMNeT++ simulation platform with the
MiXiM framework is used.
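The effect of a 0.7% duty cycle on node lifetime can be sketched with a back-of-the-envelope energy model. All current and capacity figures below are illustrative assumptions, not numbers from the paper.

```python
def average_duty_cycle(active_ms_per_frame, frame_ms):
    """Duty cycle = fraction of each TDMA frame the radio is active."""
    return active_ms_per_frame / frame_ms

def battery_lifetime_days(capacity_mah, active_ma, sleep_ma, duty):
    """Rough lifetime estimate: average current is the duty-cycle
    weighted mix of active and sleep currents."""
    avg_ma = duty * active_ma + (1 - duty) * sleep_ma
    return capacity_mah / avg_ma / 24.0

# A node active 7 ms per 1000 ms TDMA frame has a 0.7% duty cycle.
duty = average_duty_cycle(7.0, 1000.0)
# Assumed: 2400 mAh battery, 20 mA active draw, 5 uA sleep draw.
days = battery_lifetime_days(2400.0, 20.0, 0.005, duty)
```

Under these assumed figures the node runs for well over a year, which is why low duty cycles are the key lever for WSN lifetime.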
Permutation of Pixels within the Shares of Visual Cryptography using KBRP for... (IDES Editor)
The security of authentication for internet-based
co-banking services should not be exposed to high risk.
Passwords are highly vulnerable to virus attacks due to the
lack of high-end embedded security methods. To make
passwords more secure, people are generally compelled to
select jumbled character-based passwords, which are not only
less memorable but equally prone to insecurity. The use of
multiple distributed shares has been studied as a solution to
the authentication problem, using algorithms based on pixel
thresholding from image processing and visual cryptography,
where a subset of shares is used to recover the original image
for authentication via a correlation function [1][2]. The main
disadvantage of that approach is the plain storage of the shares;
moreover, one of the shares is supplied to the customer, which
can lead to misuse by a third party. This paper proposes a
technique for scrambling the pixels within the shares by key
based random permutation (KBRP) before authentication is
attempted. The total number of shares to be created depends
on the multiplicity of ownership of the account. This method
minimizes the customers' uncertainty regarding the security,
storage, and retrieval of their half of the shares.
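Key-driven pixel scrambling can be sketched by deriving a repeatable permutation from the key. Seeding a shuffle with a key hash is an assumed stand-in here, not the exact KBRP construction from the paper.

```python
import hashlib
import random

def key_permutation(key, n):
    """Derive a repeatable permutation of n pixel indices from a
    text key (hash-seeded Fisher-Yates shuffle; a sketch, not the
    exact KBRP algorithm)."""
    seed = int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    perm = list(range(n))
    rng.shuffle(perm)
    return perm

def scramble(pixels, perm):
    """Reorder share pixels according to the key permutation."""
    return [pixels[p] for p in perm]

def unscramble(scrambled, perm):
    """Invert the permutation to restore the original share."""
    out = [0] * len(perm)
    for i, p in enumerate(perm):
        out[p] = scrambled[i]
    return out
```

Anyone holding a stolen share but not the key sees only scrambled pixels, which addresses the plain-storage weakness noted above.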
This paper presents a trifocal Rotman lens design
approach. The effects of focal ratio and element spacing on
the performance of the Rotman lens are described. A three-beam
prototype feeding a 4-element antenna array working in L-band
has been simulated using RLD v1.7 software. Simulation shows
that the lens has a return loss of -12.4 dB at 1.8 GHz. The
variation of beam-to-array-port phase error with focal ratio
and element spacing has also been investigated.
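The quoted return loss relates to the reflection-coefficient magnitude by the standard formula S11(dB) = 20 log10 |Γ|, so -12.4 dB corresponds to |Γ| of roughly 0.24 (a generic conversion, not tied to the RLD simulation itself):

```python
import math

def s11_db(gamma_magnitude):
    """Reflection coefficient magnitude -> S11 in dB (negative for a
    matched port; -12.4 dB corresponds to |Gamma| of about 0.24)."""
    return 20.0 * math.log10(gamma_magnitude)
```

Only about 6% of the incident power (|Γ|² ≈ 0.058) is reflected at the beam port in this case.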
Band Clustering for the Lossless Compression of AVIRIS Hyperspectral Images (IDES Editor)
Hyperspectral images can be efficiently compressed
with a linear predictive model, for example the one used in
the SLSQ algorithm. In this paper we exploit this predictive
model on AVIRIS images by identifying, through an off-line
approach, a common subset of bands that are not spectrally
related to any other bands. These bands are not useful as
prediction references for the SLSQ 3-D predictive model, and
they need to be encoded with other prediction strategies that
consider only spatial correlation. We obtain this subset by
clustering the AVIRIS bands via the clustering-by-compression
approach. The main result of this paper is the list of bands,
unrelated to the others, for AVIRIS images. The clustering
trees obtained for AVIRIS, and the relationships among bands
they depict, are also an interesting starting point for future
research.
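Clustering by compression is typically driven by the normalized compression distance (NCD) between band data; a minimal sketch using zlib as the compressor (the paper's choice of compressor is not specified here, so zlib is an assumption):

```python
import zlib

def ncd(x, y):
    """Normalized compression distance between two byte strings:
    near 0 when one compresses well given the other, near 1 when
    they share little structure. Bands with high NCD to all other
    bands are candidates for spatial-only prediction."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)
```

Building a distance matrix of pairwise NCD values over the bands and running hierarchical clustering on it yields the clustering trees mentioned above.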
Microelectronic Circuit Analogous to Hydrogen Bonding Network in Active Site ... (IDES Editor)
A microelectronic circuit of block elements
functionally analogous to two hydrogen bonding networks is
investigated. The hydrogen bonding networks are extracted
from the β-lactamase protein and are formed in its active site.
Each hydrogen bond of the network is described in the
equivalent electrical circuit by a three- or four-terminal block
element. Each block element is coded in Matlab, and static
and dynamic analyses are performed. The resultant
microelectronic circuit, analogous to the hydrogen bonding
network, operates as a current mirror, a sine pulse source, a
triangular pulse source, and a signal modulator.
Texture Unit based Monocular Real-world Scene Classification using SOM and KN... (IDES Editor)
In this paper a method is proposed to discriminate
real-world scenes into natural and man-made scenes of similar
depth. The global roughness of a scene image varies as a
function of image depth: an increase in image depth leads to
increased roughness in man-made scenes, whereas natural
scenes exhibit smooth behavior at greater image depth. This
arrangement of pixels in the scene structure is well explained
by the local texture information at a pixel and its
neighborhood. Our proposed method analyzes the local texture
information of a scene image using a texture unit matrix. For
the final classification we use both supervised and
unsupervised learning, with a K-Nearest Neighbor (KNN)
classifier and a Self-Organizing Map (SOM) respectively. The
technique's very low computational complexity makes it
suitable for online classification.
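The texture unit of a 3x3 window can be sketched following the classic Wang-He texture spectrum construction, which is the usual basis for texture unit matrices (the paper's exact neighbor ordering is an assumption here):

```python
def texture_unit_number(window):
    """Texture unit of a 3x3 window: each of the 8 neighbors is
    coded 0 (less than), 1 (equal to), or 2 (greater than) the
    center pixel, giving a base-3 number in [0, 6560]."""
    center = window[1][1]
    # Clockwise neighbor ordering starting at the top-left corner
    # (the specific ordering is an assumed convention).
    neighbors = [window[0][0], window[0][1], window[0][2],
                 window[1][2], window[2][2], window[2][1],
                 window[2][0], window[1][0]]
    tu = 0
    for i, v in enumerate(neighbors):
        e = 0 if v < center else (1 if v == center else 2)
        tu += e * (3 ** i)
    return tu
```

Sliding this over the image and histogramming the texture unit numbers yields the texture unit matrix that feeds the KNN and SOM classifiers.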