This document presents a novel approach for motif discovery using data mining algorithms. It proposes the MDWB algorithm which uses a word-based approach and MapReduce functions to identify motifs from large ChIP-seq datasets. The algorithm labels control and test datasets to determine thresholds and extracts frequently occurring substrings. MapReduce is used to reduce memory and time costs during the mining stage. Experimental results on ChIP-seq datasets show the MDWB algorithm achieves higher accuracy and runs faster than previous methods for identifying transcription factor binding sites.
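The MDWB algorithm itself is not reproduced in this summary; the following is a minimal sketch of the word-based idea it builds on: count fixed-length substrings (k-mers) in a test set and keep those over-represented relative to a control set. The MapReduce distribution is omitted, and all function names and the fold threshold are illustrative, not from the paper.

```python
from collections import Counter

def kmer_counts(sequences, k):
    """Map step: emit every k-length substring; reduce step: sum counts."""
    counts = Counter()
    for seq in sequences:
        for i in range(len(seq) - k + 1):
            counts[seq[i:i + k]] += 1
    return counts

def candidate_motifs(test_seqs, control_seqs, k=3, fold=2.0):
    """Keep k-mers whose test-set frequency exceeds a control-derived threshold."""
    test = kmer_counts(test_seqs, k)
    control = kmer_counts(control_seqs, k)
    return {m: c for m, c in test.items()
            if c >= fold * (control.get(m, 0) + 1)}  # +1 smooths zero counts

motifs = candidate_motifs(["ATGATGATG", "CCATGAA"], ["GGGCCC"], k=3)
```

In the real setting each ChIP-seq peak set would be mapped and reduced across workers; here both stages collapse into one Counter.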
Using Cisco Network Components to Improve NIDPS Performance
Network Intrusion Detection and Prevention Systems (NIDPSs) are used to detect, prevent and
report evidence of attacks and malicious traffic. Our paper presents a study in which we used
open-source NIDPS software. We show that NIDPS detection performance can be weak in the face
of high-speed, high-load traffic, as measured by missed alerts and missed logs. To counteract
this problem, we propose and evaluate a solution that uses QoS, queueing and parallel
technologies in a multi-layer Cisco Catalyst switch to increase NIDPS detection performance.
Our approach designs a novel QoS architecture that organises and improves forwarding-plane
throughput in a layer 3 switch in order to improve NIDPS performance.
MotifGP is a motif discovery tool that uses multi-objective evolutionary algorithms to find DNA motifs. It evolves candidate motif solutions as network expressions using strongly typed genetic programming. MotifGP scores candidates based on how well they discriminate between input and control sequences while maintaining coverage of input sequences. When tested on 13 datasets, MotifGP identified known motifs and fully overlapped reference motifs in the databases, outperforming the existing DREME tool. Future work will explore different fitness functions and evolutionary methods to further improve motif discovery.
Improving the effectiveness of information retrieval systems using an adaptive genetic algorithm
The document describes research into improving the effectiveness of information retrieval systems using an adaptive genetic algorithm. A genetic algorithm with variable crossover and mutation probabilities (adaptive GA) is investigated. The adaptive GA is tested on 242 Arabic abstracts using three information retrieval models: vector space model, extended Boolean model, and language model. Results show the adaptive GA approach improves retrieval effectiveness over traditional genetic algorithms and baseline information retrieval systems, as measured by average recall and precision. Key aspects of the adaptive GA used include variable crossover and mutation probabilities tuned during the search process, and fitness functions based on document retrieval order.
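The paper's exact adaptation rule is not given in this summary; a common scheme for variable crossover and mutation probabilities is Srinivas and Patnaik's adaptive GA, sketched below. The constants k1..k4 are conventional illustrative defaults.

```python
def adaptive_rates(fitness, f_max, f_avg, k1=1.0, k2=0.5, k3=1.0, k4=0.5):
    """Adapt crossover (pc) and mutation (pm) rates to an individual's fitness:
    above-average individuals are disturbed less, below-average ones more."""
    if f_max == f_avg:                 # degenerate population: fall back to defaults
        return k3, k4
    if fitness >= f_avg:
        scale = (f_max - fitness) / (f_max - f_avg)
        return k1 * scale, k2 * scale  # best individuals get pc = pm = 0
    return k3, k4                      # weak individuals get full rates
```

Under this rule the fittest individual is preserved unchanged (both rates drop to zero), which keeps convergence while the rest of the population keeps exploring.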
The document discusses feature selection and classification methods for detecting denial-of-service (DoS) attacks in intrusion detection systems. It proposes using Random Forests for feature selection and k-Nearest Neighbors for classification on the KDD99 dataset. Experimental results show that the proposed approach of using Random Forests to select important features before classifying with k-Nearest Neighbors increases detection accuracy while decreasing false positives compared to other algorithms.
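The paper's KDD99 pipeline is not reproduced here; the following scikit-learn sketch shows the two-stage idea (Random Forest ranks features, k-NN classifies on the selected subset) on synthetic data. All dataset sizes and parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for KDD99: 20 features, only 5 of them informative.
X, y = make_classification(n_samples=600, n_features=20, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stage 1: Random Forest ranks features by impurity-based importance.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
top = np.argsort(rf.feature_importances_)[::-1][:5]   # keep the 5 best

# Stage 2: k-NN classifies using only the selected features.
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr[:, top], y_tr)
acc = knn.score(X_te[:, top], y_te)
```

Dropping uninformative features helps k-NN in particular, since its distance computations are otherwise dominated by noise dimensions.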
AUTOMATED INFORMATION RETRIEVAL MODEL USING FP GROWTH BASED FUZZY PARTICLE SWARM OPTIMIZATION
Mining relevant facts from the web at the time of need is a difficult task. Research across
diverse fields is fine-tuning methodologies toward this goal: extracting the information most
relevant to the user's search query. The methodology proposed in this paper finds ways to ease
search complexity, tackling the issues that hinder the performance of traditional approaches.
It first finds all possible semantically relatable frequent sets with the FP-Growth algorithm.
The outcome then fuels a bio-inspired fuzzy PSO that finds the optimal attractor points around
which web documents are clustered, meeting the requirements of the search query without losing
relevance. On the whole, the proposed system optimizes an objective function that minimizes
intra-cluster differences and maximizes inter-cluster distances, while retaining all possible
relationships with the search context. The major contribution is that the system finds all
possible combinations matching the user's search transaction, thereby making the results more
meaningful. These relatable sets form the particles for both fuzzy clustering and PSO, so the
system is unbiased and behaves consistently as new documents join the herd. Evaluations reveal
that the proposed methodology fares well as an optimized and effective enhancement over
conventional approaches.
Data mining is knowledge discovery in databases; the goal is to extract patterns and knowledge
from large amounts of data. An important branch of data mining is text mining, which extracts
high-quality information from text, typically via statistical pattern learning. "High quality"
in text mining refers to some combination of relevance, novelty and interestingness. Tasks in
text mining include text categorization, text clustering, entity extraction and sentiment
analysis. Natural language processing and analytical methods are widely applied to these ends.
Text mining attempts to discover new, previously unknown or hidden information by automatically
extracting it from various written resources. Applying knowledge-discovery methods to
unstructured text is known as Knowledge Discovery in Text or text data mining, and also simply
as text mining. Most text-mining techniques are founded on the statistical study of a term,
either a word or a phrase. Several algorithms have been used in earlier work. For example, the
single-link algorithm and the Self-Organizing Map (SOM) offer an approach for visualizing
high-dimensional data, and SOM is a very useful projection-based tool for processing textual
data. Genetic and sequential algorithms provide multiscale representations of datasets and are
fast to compute, with low CPU time, on reduced Isolet subsets in unsupervised feature
selection. We propose a vector space model with a concept-based analysis algorithm that will
improve text-clustering quality, so a better text-clustering result may be achieved. We expect
the proposed algorithm to behave well in terms of robustness and stability with respect to the
formation of the neural network.
Mining frequent patterns is an NP-hard problem and has become a hot topic in recent research.
Moreover, protein datasets contain distinct patterns that can be used in many areas such as
drug discovery and disease prediction. In earlier decades, pattern discovery and protein fold
recognition were carried out with biophysical and biochemical approaches; X-ray and NMR
methods have been used for protein structure prediction, but they are very expensive and
time-consuming, whereas a mathematical approach can reduce the cost of such laboratory
experiments. Many computational tests have been applied to protein fold detection, such as
graph-based algorithms and data-mining approaches like classification and clustering, each
with its own advantages and drawbacks. Pattern matching in sequential protein datasets for
fold recognition plays a meaningful role in bioinformatics, since it enables prediction of
unknown protein function. There are many pattern-recognition algorithms, but in this work we
use PrefixSpan; the reason for selecting this algorithm is discussed in Section 2. For
evaluating the experimental results we use the SCOPE dataset, a classified protein dataset,
and ASTRAL, a discriminative sequential subset of SCOPE.
Particle Swarm Optimization based K-Prototype Clustering Algorithm
This document summarizes a research paper that proposes a new Particle Swarm Optimization (PSO) based K-Prototype clustering algorithm to cluster mixed numeric and categorical data. It begins with background information on clustering algorithms like K-Means, K-Modes, and K-Prototype. It then describes the K-Prototype algorithm, PSO, and discrete binary PSO. Related work integrating PSO with other clustering algorithms is also reviewed. The proposed approach uses binary PSO to select improved initial prototypes for K-Prototype clustering in order to obtain better clustering results than traditional K-Prototype and avoid local optima.
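The proposal's prototype-selection details are not given in this summary; the core mechanic, a discrete binary PSO step (Kennedy and Eberhart's formulation), can be sketched as follows. Particle dimensions, swarm size, and inertia/acceleration constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def bpso_step(positions, velocities, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One iteration of discrete binary PSO: velocities are pulled toward the
    personal and global bests, and a sigmoid of each velocity component gives
    the probability that the corresponding bit is set to 1."""
    r1 = rng.random(positions.shape)
    r2 = rng.random(positions.shape)
    velocities = (w * velocities
                  + c1 * r1 * (pbest - positions)
                  + c2 * r2 * (gbest - positions))
    prob_one = 1.0 / (1.0 + np.exp(-velocities))        # sigmoid squashing
    positions = (rng.random(positions.shape) < prob_one).astype(int)
    return positions, velocities

# 4 particles, 6-bit positions (e.g. which data points seed K-Prototype).
pos = rng.integers(0, 2, (4, 6))
vel = np.zeros((4, 6))
pos, vel = bpso_step(pos, vel, pbest=pos.copy(), gbest=pos[0])
```

In the paper's setting, each bit vector would encode a choice of initial prototypes, and a clustering-cost function would update `pbest`/`gbest` between steps.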
Answer extraction and passage retrieval for Arabic question answering systems
Question Answering Systems (QASs) retrieve, from a collection of documents, the text portions
that contain the answer to a user's question. These QASs use a variety of linguistic tools that
can only deal with small fragments of text. Therefore, to retrieve the documents containing
the answer from a large document collection, QASs employ Information Retrieval (IR) techniques
to reduce the collection to a tractable amount of relevant text. In this paper, we propose a
passage retrieval model that performs this task with better performance for the purpose of
Arabic QASs. We first segment each of the top five ranked documents returned by the IR module
into passages. Then we compute a similarity score between the user's question terms and each
passage, and retrieve the top five passages with the highest similarity scores. Finally,
answer extraction techniques are applied to extract the final answer. Our method achieved an
average precision of 87.25%, recall of 86.2% and F1-measure of 87%.
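The paper's similarity measure is not specified in this summary; a minimal sketch of passage scoring using bag-of-words cosine similarity, a common default, looks like this (tokenization and the example texts are illustrative):

```python
import math
from collections import Counter

def tokens(text):
    """Crude tokenizer: lowercase and strip trailing punctuation."""
    return [t.strip(".,?!").lower() for t in text.split()]

def cosine(a, b):
    """Cosine similarity between two bags of words."""
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_passages(question, passages, n=5):
    """Rank candidate passages by similarity to the question terms; keep top n."""
    q = tokens(question)
    return sorted(passages, key=lambda p: -cosine(q, tokens(p)))[:n]

best = top_passages("What is the capital of France?",
                    ["Bananas are rich in potassium.",
                     "Paris is the capital of France."], n=1)
```

A production Arabic QAS would add stemming and stopword removal before scoring, but the ranking skeleton is the same.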
Proceedings of the 50th Hawaii International Conference on System Sciences | 2017
Discovering Malware with Time Series Shapelets
Om P. Patri
University of Southern California
Los Angeles, CA 90089
patri@usc.edu
Novel Class Detection Using RBF SVM Kernel from Feature Evolving Data Streams
In data mining, the classification of data streams creates many problems. The challenges faced
in data streams are infinite length, concept drift, concept evolution and feature evolution.
Most existing systems focus only on the first two challenges. We propose a framework in which
each classifier is equipped with a novel class detector to address concept drift and concept
evolution, and a feature-set homogenization technique to address feature evolution. We improve
the novel class detection module by making it more adaptive to the evolving stream. An
SVM-based feature extraction method with an RBF kernel is also proposed for detecting novel
classes in the streaming data. Using permutations and combinations, the RBF kernel extracts
features and finds the relations between them. This improves the novel class detection
technique and provides more accuracy when classifying the data.
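The framework's SVM machinery is not detailed in this summary, but the RBF kernel it relies on is standard and easy to state: K(x, y) = exp(-gamma * ||x - y||^2). A small vectorized sketch (the gamma value and sample points are illustrative):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    """Gaussian RBF kernel matrix: K[i, j] = exp(-gamma * ||x_i - y_j||^2)."""
    sq = (np.sum(X**2, axis=1)[:, None]
          + np.sum(Y**2, axis=1)[None, :]
          - 2.0 * X @ Y.T)
    return np.exp(-gamma * np.maximum(sq, 0.0))   # clip tiny negative round-off

X = np.array([[0.0, 0.0], [1.0, 0.0]])
K = rbf_kernel(X, X, gamma=1.0)
```

Each diagonal entry is 1 (a point compared with itself), and off-diagonal entries decay with squared distance, which is what lets an RBF-kernel SVM separate classes that are not linearly separable.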
Computational Biology Methods for Drug Discovery_Phase 1-5_November 2015
This document describes a computational biology methods program launched by Thomson Reuters to implement network and pathway analysis approaches for analyzing omics data. The program provides tools through an R package to enable analysis of diverse data sets within biological networks. It focuses on selecting state-of-the-art algorithms and implementing them in a robust, easy-to-use software package. The program aims to maximize the utility of internal and external biological network and pathway databases for drug discovery applications such as target identification and patient stratification.
Identifying the Coding and Non Coding Regions of DNA Using Spectral Analysis
This paper presents a new method for exon detection in DNA sequences based on multi-scale parametric spectral analysis. Identification and analysis of hidden features of the coding and non-coding regions of a DNA sequence is a challenging problem in genomics. The objective of this paper is to estimate and compare the spectral content of coding and non-coding segments of DNA sequences by both parametric and non-parametric methods. In this context, protein-coding region (exon) identification in DNA sequences has attracted great interest over recent decades. Coding regions can be identified by exploiting the period-3 property present in them. The discrete Fourier transform is commonly used as a spectral estimation technique to extract the period-3 patterns present in a DNA sequence. An attempt has thus been made to bring some hidden internal properties of the DNA sequence to light in order to distinguish coding regions from non-coding ones. In this approach, DNA sequences from various Homo sapiens genes were selected as test samples and assigned numerical values based on weak-strong hydrogen bonding (WSHB) before applying digital signal analysis techniques.
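The period-3 measure itself is standard and can be sketched directly: map the sequence to four binary indicator signals (one per base), take the DFT of each, and sum the power at frequency N/3. The example sequences are illustrative, not from the paper.

```python
import numpy as np

def period3_power(dna):
    """Sum, over the four base-indicator sequences, of the DFT power at
    frequency N/3 -- the classic period-3 signature of coding DNA."""
    N = len(dna)
    total = 0.0
    for base in "ACGT":
        x = np.array([1.0 if b == base else 0.0 for b in dna])
        X = np.fft.fft(x)
        total += abs(X[N // 3]) ** 2
    return total

coding_like = "ATG" * 30                            # strong period-3 repetition
random_like = "ATGCGTACGTAAGCTTGCACAGTCAGGTCA"      # 30 bases, no clear period 3
```

A perfectly periodic sequence of length 90 concentrates all its indicator energy at k = 30, so the contrast with aperiodic DNA is large; real exon finders slide this measure along the genome in windows.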
IRJET- Facial Expression Recognition System using Neural Network based on...
This document describes a facial expression recognition system using a neural network approach. It uses the Japanese Female Facial Expressions (JAFFE) database to classify 7 facial expressions. The system extracts features using 2D discrete cosine transform (DCT), local binary patterns (LBP), and histogram of oriented gradients (HOG). These features are used to create a hybrid feature vector for each image. A single hidden layer feedforward neural network is trained on the feature vectors using different learning algorithms to classify the expressions. Experimental results show that a neural network trained with gradient descent and adaptive learning rate achieves the highest average accuracy of 97.2% for classifying expressions in the JAFFE database.
Performance analysis on secured data method in natural language steganography
The rapid growth of information exchange that drove the expansion of the internet during the last decade has motivated research in this field. Recently, steganography approaches have received unexpected attention. The aim of this paper is therefore to review different performance metrics, covering decoding, decrypting and extracting performance. Data decoding interprets the received hidden message into a code word. Data encryption is the best way to provide secure communication, and decrypting takes an encrypted text and converts it back into the original text. Data extraction is the reverse of the data embedding process. Effectiveness evaluation is mainly determined by the chosen performance metrics, and the intention of researchers is to improve these metric characteristics. The objective of this paper is to present a review of the study of steganography in natural language according to the criteria of performance analysis. The findings of the review clarify the preferred performance metrics in use. This review is intended to help future research evaluate the performance of natural language steganography in general, and of the proposed secured-data approaches in particular.
Research Inventy : International Journal of Engineering and Science is published...
This document summarizes a research paper that proposes using principal component analysis and sequential hypothesis testing in a game theory framework to detect intrusions in a mobile ad hoc network. Specifically, it detects replica node attacks by tracking features like source/destination addresses and routing requests/replies to build profiles for each node. It then applies sequential probability ratio testing on routing requests and replies to test hypotheses about whether nodes are normal or abnormal. A two-player game model is also used to identify the optimal attack and defense strategies between an attacker and defender. Simulation results show that the approach can decrease the number of claims needed for detection while minimizing false positives and negatives.
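The summary names sequential probability ratio testing (SPRT) as the per-node decision mechanism; its core is compact enough to sketch. Wald's SPRT accumulates a log-likelihood ratio over observations and stops as soon as it crosses either error-rate threshold. The Bernoulli probabilities and error rates below are illustrative, not from the paper.

```python
import math

def sprt(observations, p0=0.1, p1=0.7, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test on Bernoulli observations
    (e.g. per-claim evidence that a node is a replica). Returns the decision
    and the number of samples consumed before stopping."""
    upper = math.log((1 - beta) / alpha)    # cross this: accept H1 (abnormal)
    lower = math.log(beta / (1 - alpha))    # cross this: accept H0 (normal)
    llr = 0.0
    for n, x in enumerate(observations, 1):
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "abnormal", n
        if llr <= lower:
            return "normal", n
    return "undecided", len(observations)

decision, n_used = sprt([1, 1, 1, 1, 1])
```

The appeal for intrusion detection, as the summary notes, is exactly this early stopping: strong evidence ends the test after very few claims, keeping detection latency low.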
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology
Elevating forensic investigation system for file clustering
Abstract: In computer forensic investigation, thousands of files are usually examined. Much of the data in those files consists of unstructured text, whose investigation by computer examiners is very hard to accomplish. Clustering is the unsupervised organization of patterns (data items, observations, or feature vectors) into groups (clusters), and automated methods that find a good clustering are of great interest. In particular, algorithms such as K-means, K-medoids, single link, complete link and average link can simplify the discovery of new and valuable information in the documents under investigation. This paper presents an approach that applies text clustering algorithms, using a multithreading technique, to the forensic examination of computers seized in police investigations. Keywords: clustering, forensic computing, text mining, multithreading.
The complexity of Arabic morphology is a core challenge of Arabic information retrieval. It
makes Arabic a difficult environment for information retrieval specialists because the
language is highly inflected. This paper explains the importance of stemming in information
retrieval by developing a method to search for information needs in Arabic text. The new
method is based on a light stemmer, which increases keyword matching between the query and
the related documents in the test collection.
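The paper's stemmer rules are not given in this summary; light stemming conventionally means stripping a fixed set of common Arabic prefixes and suffixes without full morphological analysis (in the spirit of Larkey's light10). The affix lists below are a representative subset, not the paper's exact rules.

```python
PREFIXES = ("وال", "بال", "كال", "فال", "ال", "لل")
SUFFIXES = ("ها", "ان", "ات", "ون", "ين", "ية", "ه", "ة", "ي")

def light_stem(word):
    """Light stemming: strip one common prefix and any trailing suffixes,
    keeping at least three letters so short stems are not destroyed."""
    for p in PREFIXES:
        if word.startswith(p) and len(word) - len(p) >= 3:
            word = word[len(p):]
            break                       # at most one prefix is removed
    changed = True
    while changed:                      # suffixes may stack, so loop
        changed = False
        for s in SUFFIXES:
            if word.endswith(s) and len(word) - len(s) >= 3:
                word = word[:-len(s)]
                changed = True
                break
    return word
```

Matching stems rather than surface forms lets a query term like الكتاب ("the book") reach documents containing كتاب ("book"), which is exactly the recall gain the abstract describes.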
Prediction of Answer Keywords using Char-RNN
Generating sequences of characters using a Recurrent Neural Network (RNN) is a tried and tested method for creating unique and context-aware words, and is fundamental in natural language processing tasks. These types of neural networks can also be used as question-answering systems. The main drawback of most such systems is that they work from a factoid database of information, so when queried about new and current information the responses are usually bleak. In this paper, the author proposes a novel approach to finding answer keywords from a given body of news text or a headline, based on the query that was asked, where the query concerns current affairs or recent news, using the Gated Recurrent Unit (GRU) variant of RNNs. This ensures that the answers provided are relevant to the content of the query that was put forth.
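The GRU variant named in the abstract has a compact recurrence worth stating. A minimal NumPy forward pass of one GRU cell (Cho et al.'s formulation) is sketched below; the dimensions, random weights, and toy character sequence are illustrative, and a real Char-RNN would stack this with an output softmax and train it.

```python
import numpy as np

def gru_step(x, h, params):
    """One GRU step: update gate z, reset gate r, candidate state h_tilde,
    then a convex combination of the old state and the candidate."""
    Wz, Uz, Wr, Ur, Wh, Uh = params
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sig(Wz @ x + Uz @ h)                    # update gate
    r = sig(Wr @ x + Ur @ h)                    # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))    # candidate state
    return (1 - z) * h + z * h_tilde

rng = np.random.default_rng(0)
dim_x, dim_h = 4, 8          # e.g. one-hot character in, hidden state out
params = [rng.normal(0, 0.1, (dim_h, dim_x)) if i % 2 == 0
          else rng.normal(0, 0.1, (dim_h, dim_h)) for i in range(6)]

h = np.zeros(dim_h)
for ch in [0, 1, 2]:         # feed a toy three-character sequence
    x = np.eye(dim_x)[ch]
    h = gru_step(x, h, params)
```

The gating is what lets the model remember context across a headline: z near 0 copies the old state forward, z near 1 overwrites it with the new candidate.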
Text documents clustering using modified multi-verse optimizer
In this study, a multi-verse optimizer (MVO) is utilised for the text document clustering (TDC) problem. TDC is treated as a discrete optimization problem, and an objective function based on the Euclidean distance is applied as the similarity measure. TDC is tackled by dividing the documents into clusters; documents belonging to the same cluster are similar, whereas those belonging to different clusters are dissimilar. MVO, a recent metaheuristic optimization algorithm established for continuous optimization problems, can intelligently navigate different areas of the search space and search deeply within each area using a particular learning mechanism. The proposed algorithm is called MVOTDC, and it adapts the convergence behaviour of MVO operators to deal with discrete, rather than continuous, optimization problems. For evaluating MVOTDC, a comprehensive comparative study is conducted on six text document datasets with various numbers of documents and clusters. The quality of the final results is assessed using precision, recall, F-measure, entropy, accuracy, and purity measures. Experimental results reveal that the proposed method performs competitively with state-of-the-art algorithms. Statistical analysis also shows that MVOTDC produces significant results in comparison with three well-established methods.
Feature selection in high-dimensional datasets is a complex and time-consuming problem. To
enhance classification accuracy and reduce execution time, Parallel Evolutionary Algorithms
(PEAs) can be used. In this paper, we review the most recent works on the use of PEAs for
feature selection in large datasets. We classify the algorithms in these papers into four
main classes: Genetic Algorithms (GA), Particle Swarm Optimization (PSO), Scatter Search
(SS), and Ant Colony Optimization (ACO). Accuracy is adopted as the measure for comparing
the efficiency of these PEAs. Notably, Parallel Genetic Algorithms (PGAs) are the most
suitable algorithms for feature selection in large datasets, since they achieve the highest
accuracy. On the other hand, we found parallel ACO to be time-consuming and less accurate
compared with the other PEAs.
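None of the reviewed algorithms is specified in this summary; the following sketch shows the shape of a GA for feature selection with the fitness evaluations farmed out to parallel workers, the step a PEA parallelizes. The fitness function, mask length, and all GA parameters are illustrative stand-ins for a real classifier evaluation.

```python
import random
from concurrent.futures import ThreadPoolExecutor

random.seed(0)
TRUE_MASK = [1, 1, 0, 0, 1, 0, 0, 0]   # pretend only these 3 features matter

def fitness(mask):
    """Toy stand-in for a classifier evaluation: reward selecting the
    informative features, lightly penalise subset size. In a real PEA this
    evaluation is the expensive step that runs on parallel workers."""
    hits = sum(m == t for m, t in zip(mask, TRUE_MASK))
    return hits - 0.1 * sum(mask)

def evolve(pop_size=20, n_gen=30, n_feat=8):
    pop = [[random.randint(0, 1) for _ in range(n_feat)]
           for _ in range(pop_size)]
    with ThreadPoolExecutor() as pool:
        for _ in range(n_gen):
            scores = list(pool.map(fitness, pop))       # parallel evaluation
            ranked = [m for _, m in sorted(zip(scores, pop), reverse=True)]
            pop = ranked[:pop_size // 2]                # keep the best half
            children = []
            for _ in range(pop_size - len(pop)):
                a, b = random.sample(pop, 2)
                cut = random.randrange(1, n_feat)
                child = a[:cut] + b[cut:]               # one-point crossover
                if random.random() < 0.2:
                    j = random.randrange(n_feat)
                    child[j] ^= 1                       # bit-flip mutation
                children.append(child)
            pop += children
    return max(pop, key=fitness)

best = evolve()
```

Threads are used here only for brevity; a PGA would distribute the population (island model) or the evaluations (master-worker) across processes or machines.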
Syntactic search relies on keywords contained in a query to find suitable documents, so
documents that do not contain the keywords but do contain information related to the query
are not retrieved. Spreading activation is an algorithm for finding latent information in a
query by exploiting relations between nodes in an associative or semantic network. However,
the classical spreading activation algorithm uses all relations of a node in the network,
which adds unsuitable information to the query. In this paper, we propose a novel approach
for semantic text search, called query-oriented constrained spreading activation, that uses
only the relations relevant to the content of the query to find truly related information.
Experiments on a benchmark dataset show that, in terms of the MAP measure, our search engine
is 18.9% and 43.8% better, respectively, than syntactic search and search using classical
constrained spreading activation.
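The paper's constraint rules are not given in this summary; the mechanism can be sketched as breadth of activation limited to a set of allowed relation types, with decay and a firing threshold. The toy graph, relation names, and constants are illustrative.

```python
def spread(graph, seeds, allowed_relations, decay=0.5, threshold=0.1):
    """Constrained spreading activation: activation flows from seed nodes
    only along relation types relevant to the query, attenuated by `decay`
    at each hop. `graph[node]` is a list of (relation, neighbour) pairs."""
    activation = dict(seeds)                 # node -> activation level
    frontier = list(seeds)
    while frontier:
        node, act = frontier.pop()
        for relation, nbr in graph.get(node, []):
            if relation not in allowed_relations:
                continue                     # the "constrained" part
            passed = act * decay
            if passed > threshold and passed > activation.get(nbr, 0.0):
                activation[nbr] = passed
                frontier.append((nbr, passed))
    return activation

graph = {
    "jaguar": [("is_a", "cat"), ("maker_of", "car")],
    "cat":    [("is_a", "animal")],
}
act = spread(graph, [("jaguar", 1.0)], allowed_relations={"is_a"})
```

With only `is_a` allowed, activation reaches "cat" and "animal" but never "car", which is precisely how query-oriented constraints keep unrelated senses out of the expanded query.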
This document discusses speaker recognition techniques and applications. It provides an overview of common feature extraction methods for speaker recognition systems such as Linear Predictive Coding (LPC), Perceptual Linear Predictive Coefficients (PLPC), Mel Frequency Cepstral Coefficients (MFCC), and Gamma Tone Frequency Cepstral Coefficients (GFCC). It compares these techniques and finds that MFCC and PLPC are most often used due to being derived from models of human auditory processing. GFCC has benefits of higher frequency resolution and noise robustness compared to MFCC. Speaker recognition has applications in security systems for access control.
An efficient algorithm for sequence generation in data mining
Data mining is the process of analyzing data from different perspectives and summarizing it into useful information. Several major data mining techniques have been developed and are used in data mining projects, including association, classification, clustering, sequential patterns, prediction and decision trees. Among the different tasks in data mining, sequential pattern mining is one of the most important. Sequential pattern mining involves mining the subsequences that appear frequently in a set of sequences. It has a variety of applications in several domains, such as the analysis of customer purchase patterns, protein sequence analysis, DNA analysis, gene sequence analysis, web access patterns, seismologic data and weather observations. Various models and algorithms have been developed for the efficient mining of sequential patterns in large amounts of data. This research paper analyzes the efficiency of three sequence generation algorithms, namely GSP, SPADE and PrefixSpan, on a retail dataset by applying various performance factors. From the experimental results, it is observed that the PrefixSpan algorithm is more efficient than the other two.
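The pattern-growth strategy behind PrefixSpan can be sketched in a few lines. This is a minimal illustration over sequences of single items (the full algorithm handles itemsets); the toy database and support threshold are invented for illustration:

```python
def prefixspan(db, min_support):
    """Mine frequent sequential patterns from a list of item sequences."""
    results = []

    def mine(prefix, projected):
        # Support of each item = number of projected suffixes containing it.
        counts = {}
        for suffix in projected:
            for item in set(suffix):
                counts[item] = counts.get(item, 0) + 1
        for item in sorted(counts):
            if counts[item] < min_support:
                continue
            pattern = prefix + [item]
            results.append((pattern, counts[item]))
            # Project each suffix onto what follows the first occurrence of item.
            new_proj = []
            for suffix in projected:
                if item in suffix:
                    rest = suffix[suffix.index(item) + 1:]
                    if rest:
                        new_proj.append(rest)
            mine(pattern, new_proj)

    mine([], db)
    return results

# Toy sequence database: three customers' purchase sequences.
patterns = prefixspan([["a", "b", "c"], ["a", "c"], ["a", "b"]], min_support=2)
```

Each frequent item extends the current prefix, and the database is projected onto only the suffixes that follow it, which is why PrefixSpan avoids the repeated full-database scans of candidate-generation approaches like GSP.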
Genome structure prediction: a review over soft computing techniques (eSAT Journals)
Abstract: Techniques such as spectrometry and crystallography exist for the determination of DNA, RNA or protein structures. These processes provide very accurate results for structure estimation, but they are very slow and can be applied to only a few special cases. Soft computing techniques promise near-appropriate results in much less time, have very wide applicability, and are much easier to apply. Different soft computing approaches, including nature-inspired computing, have been used for the estimation of genome structures with considerable accuracy. This paper provides a review of the different soft computing techniques that have been applied, along with their methods of application, for the determination of genome structure. Keywords: DNA, RNA, proteins, structure, soft computing, techniques.
Ijricit 01-002: Enhanced replica detection in short time for large data sets (Ijripublishers Ijri)
Checking real-world entities for similarity, known as data replica detection, is a necessary task these days. Time is a critical factor in data replica detection for large data sets, which must be tracked without impacting the quality of the dataset. We introduce two data replica detection algorithms that deliver enhanced procedural standards for finding data replication within limited execution periods, improving on the running time of conventional techniques. The two algorithms are the progressive sorted neighborhood method (PSNM), which performs best on small and almost clean datasets, and progressive blocking (PB), which performs best on large and very dirty datasets. Both enhance the efficiency of duplicate detection even on very large datasets.
A Novel Framework for Short Tandem Repeats (STRs) Using Parallel String Matching (IJERA Editor)
This document presents a novel framework for identifying Short Tandem Repeats (STRs) using parallel string matching. It begins with background on STRs and challenges with existing sequential algorithms. It then describes a two-phase methodology - first applying a basic improved right prefix algorithm sequentially, then applying it in parallel using multi-threading on multicore processors. Results show the basic algorithm outperforms Boyer-Moore, Knuth-Morris-Pratt and brute force algorithms sequentially. When applied in parallel, processing time is reduced from 80ms sequentially to 40ms in parallel on multicore systems. The parallel STR identification framework allows efficient searching of repeats in large genomes.
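The multi-threaded phase of such a framework can be sketched as follows. This is not the paper's improved right-prefix algorithm; it is a generic illustration that splits the text into chunks overlapping by len(pattern) - 1 characters so matches straddling a chunk boundary are not lost:

```python
import threading

def parallel_find(text, pattern, n_threads=4):
    """All start offsets of pattern in text, searched chunk-wise in threads."""
    matches, lock = [], threading.Lock()
    chunk = max(1, len(text) // n_threads)
    overlap = len(pattern) - 1   # so boundary-straddling matches are seen

    def search(start, end):
        found = []
        i = text.find(pattern, start, end)
        while i != -1:
            found.append(i)
            i = text.find(pattern, i + 1, end)
        with lock:
            matches.extend(found)

    threads = []
    for t in range(n_threads):
        start = t * chunk
        end = len(text) if t == n_threads - 1 else (t + 1) * chunk + overlap
        th = threading.Thread(target=search, args=(start, end))
        th.start()
        threads.append(th)
    for th in threads:
        th.join()
    return sorted(set(matches))  # dedupe matches found in two overlapping chunks
```

Note that in CPython, threads mainly help when the per-chunk search releases the GIL or is I/O-bound; the framework in the paper targets multicore processors, where true parallel speedup is achievable.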
Particle Swarm Optimization based K-Prototype Clustering Algorithm (iosrjce)
This document summarizes a research paper that proposes a new Particle Swarm Optimization (PSO) based K-Prototype clustering algorithm to cluster mixed numeric and categorical data. It begins with background information on clustering algorithms like K-Means, K-Modes, and K-Prototype. It then describes the K-Prototype algorithm, PSO, and discrete binary PSO. Related work integrating PSO with other clustering algorithms is also reviewed. The proposed approach uses binary PSO to select improved initial prototypes for K-Prototype clustering in order to obtain better clustering results than traditional K-Prototype and avoid local optima.
Answer extraction and passage retrieval for... (Waheeb Ahmed)
Question Answering Systems (QASs) perform the task of retrieving text portions, from a collection of documents, that contain the answer to the user's question. These QASs use a variety of linguistic tools that are able to deal with small fragments of text. Therefore, to retrieve the documents that contain the answer from a large document collection, QASs employ Information Retrieval (IR) techniques to reduce the document collection to a tractable amount of relevant text. In this paper, we propose a passage retrieval model that performs this task with better performance for the purpose of Arabic QASs. We first segment each of the top five ranked documents returned by the IR module into passages. Then, we compute a similarity score between the user's question terms and each passage, and the top five passages (those with the highest similarity scores) are retrieved. Finally, answer extraction techniques are applied to extract the final answer. Our method achieved an average precision of 87.25%, recall of 86.2% and F1-measure of 87%.
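The passage-scoring step described above can be sketched with a simple term-overlap similarity. The paper's exact similarity measure is not given here, so Jaccard overlap between the question's and passage's vocabularies is used as a stand-in:

```python
def rank_passages(question, passages, top_k=5):
    """Rank passages by Jaccard overlap with the question's terms."""
    q_terms = set(question.lower().split())
    scored = []
    for idx, passage in enumerate(passages):
        p_terms = set(passage.lower().split())
        score = len(q_terms & p_terms) / (len(q_terms | p_terms) or 1)
        scored.append((score, idx))
    scored.sort(key=lambda pair: (-pair[0], pair[1]))  # best score first
    return [idx for _, idx in scored[:top_k]]

ranking = rank_passages("capital of France",
                        ["Paris is the capital of France",
                         "Bananas are yellow"])
```

In an Arabic QAS, the terms would first pass through normalization and stemming so that inflected forms of the same word still overlap.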
Discovering Malware with Time Series Shapelets (Om P. Patri, University of Southern California, Los Angeles, CA 90089, patri@usc.edu), Proceedings of the 50th Hawaii International Conference on System Sciences, 2017
Novel Class Detection Using RBF SVM Kernel from Feature Evolving Data Streams (irjes)
In the data mining field, classification of data streams creates many problems. The challenges faced in data streams are infinite length, concept drift, concept evolution and feature evolution. Most existing systems focus only on the first two challenges. We propose a framework in which each classifier is equipped with a novel class detector to address concept drift and concept evolution, and a feature set homogenization technique is proposed to address feature evolution. We improve the novel class detection module by making it more adaptive to the evolving stream. An SVM-based feature extraction method for the RBF kernel is also proposed for detecting novel classes in the streaming data. Using the concept of permutations and combinations, the RBF kernel extracts features and finds the relations between them. This improves the novel class detection technique and provides higher accuracy in classifying the data.
Computational Biology Methods for Drug Discovery, Phase 1-5, November 2015 (Mathew Varghese)
This document describes a computational biology methods program launched by Thomson Reuters to implement network and pathway analysis approaches for analyzing omics data. The program provides tools through an R package to enable analysis of diverse data sets within biological networks. It focuses on selecting state-of-the-art algorithms and implementing them in a robust, easy-to-use software package. The program aims to maximize the utility of internal and external biological network and pathway databases for drug discovery applications such as target identification and patient stratification.
Identifying the Coding and Non Coding Regions of DNA Using Spectral Analysis (IJMER)
This paper presents a new method for exon detection in DNA sequences based on multi-scale parametric spectral analysis. Identification and analysis of hidden features of coding and non-coding regions of a DNA sequence is a challenging problem in genomics. The objective of this paper is to estimate and compare the spectral content of coding and non-coding segments of DNA sequences by both parametric and non-parametric methods. In this context, protein coding region (exon) identification in DNA sequences has been attracting great interest for a few decades. These coding regions can be identified by exploiting the period-3 property present in them. The discrete Fourier transform has commonly been used as a spectral estimation technique to extract the period-3 patterns present in a DNA sequence. Consequently, an attempt has been made to bring some hidden internal properties of the DNA sequence to light in order to distinguish coding regions from non-coding ones. In this approach, DNA sequences from various Homo sapiens genes were selected as test samples and assigned numerical values based on weak-strong hydrogen bonding (WSHB) before application of digital signal analysis techniques.
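The period-3 property mentioned above can be measured directly with the discrete Fourier transform: map the sequence to four binary indicator sequences and evaluate the spectral power at frequency N/3. A minimal sketch (plain indicator mapping rather than the paper's WSHB encoding):

```python
import cmath

def period3_power(seq):
    """DFT power at frequency N/3, summed over the four base indicators."""
    n = len(seq)
    k = n / 3.0
    total = 0.0
    for base in "ACGT":
        # Binary indicator sequence for this base.
        x = [1.0 if c == base else 0.0 for c in seq]
        X = sum(x[m] * cmath.exp(-2j * cmath.pi * k * m / n) for m in range(n))
        total += abs(X) ** 2
    return total

coding_like = period3_power("ATG" * 20)   # strong period-3 structure
flat = period3_power("A" * 60)            # no period-3 structure
```

A strongly periodic codon-like sequence concentrates power at N/3, while a homopolymer leaves that bin essentially empty; sliding this measure along a genome highlights likely exons.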
IRJET: Facial Expression Recognition System using Neural Network based on... (IRJET Journal)
This document describes a facial expression recognition system using a neural network approach. It uses the Japanese Female Facial Expressions (JAFFE) database to classify 7 facial expressions. The system extracts features using 2D discrete cosine transform (DCT), local binary patterns (LBP), and histogram of oriented gradients (HOG). These features are used to create a hybrid feature vector for each image. A single hidden layer feedforward neural network is trained on the feature vectors using different learning algorithms to classify the expressions. Experimental results show that a neural network trained with gradient descent and adaptive learning rate achieves the highest average accuracy of 97.2% for classifying expressions in the JAFFE database.
Performance analysis on secured data method in natural language steganography (journal BEEI)
The rapid growth in the amount of exchanged information, which drove the expansion of the internet during the last decade, has motivated research in this field. Recently, steganography approaches have received unexpected attention. The aim of this paper is to review different performance metrics, covering the decoding, decrypting and extracting performance metrics. The process of data decoding interprets the received hidden message into a code word. Data encryption is the best way to provide secure communication, and decrypting takes an encrypted text and converts it back into the original text. Data extracting is the reverse of the data embedding process. The effectiveness evaluation is mainly determined by the performance metric aspect, and the intention of researchers is to improve performance metric characteristics. The objective of this paper is to present a review of the study of steganography in natural language based on the criteria of performance analysis. The findings of the review clarify the preferred performance metric aspects used. This review is hoped to help future research in evaluating the performance analysis of natural language in general, and the proposed secured data revealed in natural language steganography in specific.
Research Inventy: International Journal of Engineering and Science is publis... (researchinventy)
This document summarizes a research paper that proposes using principal component analysis and sequential hypothesis testing in a game theory framework to detect intrusions in a mobile ad hoc network. Specifically, it detects replica node attacks by tracking features like source/destination addresses and routing requests/replies to build profiles for each node. It then applies sequential probability ratio testing on routing requests and replies to test hypotheses about whether nodes are normal or abnormal. A two-player game model is also used to identify the optimal attack and defense strategies between an attacker and defender. Simulation results show that the approach can decrease the number of claims needed for detection while minimizing false positives and negatives.
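The sequential probability ratio testing step can be sketched with Wald's SPRT on Bernoulli observations. Here the tested feature (e.g. an abnormal routing reply) is modeled as a 0/1 outcome, and the rates p0 and p1 are illustrative, not taken from the paper:

```python
import math

def sprt(observations, p0, p1, alpha=0.05, beta=0.05):
    """Wald's SPRT over Bernoulli observations; H0: rate p0, H1: rate p1."""
    upper = math.log((1 - beta) / alpha)   # crossing it accepts H1 ("abnormal")
    lower = math.log(beta / (1 - alpha))   # crossing it accepts H0 ("normal")
    llr = 0.0
    for x in observations:
        # Accumulate the log-likelihood ratio of each observation.
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "abnormal"
        if llr <= lower:
            return "normal"
    return "continue"   # not enough evidence yet; keep observing
```

The appeal of the SPRT for node monitoring is that it reaches a decision with as few observations (claims) as the evidence allows, which matches the paper's goal of decreasing the number of claims needed for detection.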
Elevating forensic investigation system for file clustering (eSAT Journals)
Abstract: In a computer forensic investigation, thousands of files are usually examined. Much of the data in those files consists of unstructured text, whose analysis by computer examiners is very hard to accomplish. Clustering is the unsupervised organization of patterns (data items, observations, or feature vectors) into groups (clusters). Finding a good solution for this automated method of analysis is of great interest. In particular, algorithms such as K-means, K-medoids, Single Link, Complete Link and Average Link can simplify the detection of new and valuable information in the documents under investigation. This paper presents an approach that applies text clustering algorithms to the forensic examination of computers seized in police investigations, using a multithreading technique for data clustering. Keywords: clustering, forensic computing, text mining, multithreading.
The complexity of Arabic morphology is a core challenge of Arabic information retrieval. It makes the Arabic language a difficult environment for information retrieval specialists, owing to how highly inflected Arabic is. This paper explains the importance of stemming in information retrieval by developing a method to search for information needs in Arabic text. The newly developed method is based on a light stemmer to increase keyword matching between the query and the related documents in the test collection.
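A light stemmer of the kind this method is based on can be sketched as a single pass of prefix and suffix stripping. The affix lists below are a small illustrative subset, not the curated sets a production Arabic light stemmer (such as Light10) would use:

```python
# Illustrative affix subsets; a real light stemmer uses carefully curated lists.
PREFIXES = ["وال", "بال", "ال", "و"]   # longest prefixes listed first
SUFFIXES = ["ات", "ون", "ين", "ها", "ية", "ة"]

def light_stem(word, min_len=3):
    """Strip at most one known prefix and one known suffix from an Arabic word."""
    for p in PREFIXES:
        if word.startswith(p) and len(word) - len(p) >= min_len:
            word = word[len(p):]
            break
    for s in SUFFIXES:
        if word.endswith(s) and len(word) - len(s) >= min_len:
            word = word[:-len(s)]
            break
    return word
```

Applying the same stemmer to both query terms and document terms lets inflected surface forms match on their shared stem, which is the matching gain the paper relies on.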
Prediction of Answer Keywords using Char-RNN (IJECEIAES)
Generating sequences of characters using a Recurrent Neural Network (RNN) is a tried and tested method for creating unique and context-aware words, and is fundamental in Natural Language Processing tasks. These types of neural networks can also be used in a question-answering system. The main drawback of most such systems is that they work from a factoid database of information, and when queried about new and current information, the responses are usually bleak. In this paper, the author proposes a novel approach to finding answer keywords from a given body of news text or a headline, based on the query that was asked, where the query concerns current affairs or recent news, using the Gated Recurrent Unit (GRU) variant of RNNs. This ensures that the answers provided are relevant to the content of the query that was put forth.
Text documents clustering using modified multi-verse optimizer (IJECEIAES)
In this study, a multi-verse optimizer (MVO) is utilised for the text document clustering (TDC) problem. TDC is treated as a discrete optimization problem, and an objective function based on the Euclidean distance is applied as the similarity measure. TDC is tackled by dividing the documents into clusters; documents belonging to the same cluster are similar, whereas those belonging to different clusters are dissimilar. MVO, a recent metaheuristic optimization algorithm established for continuous optimization problems, can intelligently navigate different areas of the search space and search deeply in each area using a particular learning mechanism. The proposed algorithm is called MVOTDC, and it adapts the convergence behaviour of MVO operators to deal with discrete, rather than continuous, optimization problems. For evaluating MVOTDC, a comprehensive comparative study is conducted on six text document datasets with various numbers of documents and clusters. The quality of the final results is assessed using precision, recall, F-measure, entropy, accuracy, and purity measures. Experimental results reveal that the proposed method performs competitively in comparison with state-of-the-art algorithms. Statistical analysis also shows that MVOTDC produces significant results in comparison with three well-established methods.
Feature selection in high-dimensional datasets is considered a complex and time-consuming problem. To enhance classification accuracy and reduce execution time, Parallel Evolutionary Algorithms (PEAs) can be used. In this paper, we review the most recent works that apply PEAs to feature selection in large datasets. We have classified the algorithms in these papers into four main classes: Genetic Algorithms (GA), Particle Swarm Optimization (PSO), Scatter Search (SS), and Ant Colony Optimization (ACO). Accuracy is adopted as the measure for comparing the efficiency of these PEAs. Notably, Parallel Genetic Algorithms (PGAs) are the most suitable algorithms for feature selection in large datasets, since they achieve the highest accuracy. On the other hand, we found that parallel ACO is time-consuming and less accurate compared with the other PEAs.
Syntactic search relies on keywords contained in a query to find suitable documents, so documents that do not contain the keywords but contain information related to the query are not retrieved. Spreading activation is an algorithm for finding latent information in a query by exploiting relations between nodes in an associative or semantic network. However, the classical spreading activation algorithm uses all relations of a node in the network, which adds unsuitable information to the query. In this paper, we propose a novel approach for semantic text search, called query-oriented constrained spreading activation, that uses only relations relating to the content of the query to find truly related information. Experiments on a benchmark dataset show that, in terms of the MAP measure, our search engine is 18.9% and 43.8% better, respectively, than syntactic search and search using classical constrained spreading activation.
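The core constraint of the proposed approach, propagating activation only along relations relevant to the query, can be sketched as follows (the toy graph, decay and threshold values are illustrative):

```python
def spread_activation(graph, seeds, allowed_relations, decay=0.5, threshold=0.1):
    """Propagate activation from seed nodes along allowed relations only."""
    activation = dict(seeds)          # node -> current activation level
    frontier = list(seeds)
    while frontier:
        node = frontier.pop()
        for relation, neighbor, weight in graph.get(node, []):
            if relation not in allowed_relations:
                continue              # the constraint: ignore off-topic relations
            a = activation[node] * decay * weight
            if a < threshold or a <= activation.get(neighbor, 0.0):
                continue
            activation[neighbor] = a
            frontier.append(neighbor)
    return activation

# Tiny associative network: edges are (relation, neighbor, weight).
graph = {"paris": [("capital_of", "france", 1.0), ("likes", "cheese", 1.0)]}
result = spread_activation(graph, {"paris": 1.0}, allowed_relations={"capital_of"})
```

With the relation filter in place, activation flows to semantically related nodes the query's wording never mentions, while unrelated neighbors stay inactive, which is what keeps the expanded query on topic.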
International Journal of Computer Science, Engineering and Information Techno... (IJCSEIT Journal)
In the field of proteomics, as more data is added, computational methods need to become more efficient. The parts of molecular sequences that are functionally most important to the molecule are the most resistant to change. To ensure the reliability of sequence alignment, comparative approaches are used. A multiple sequence alignment is a proposition about evolutionary history: for each column in the alignment, the explicit homologous correspondence of each individual sequence position is established. The different pairwise sequence alignment methods are elaborated in the present work, but these methods are only suitable for aligning a limited number of sequences of small length. A new method is introduced for aligning sequences based on local alignment with consensus sequences. Triticum (wheat) varieties are loaded from the NCBI databank. Phylogenetic trees are constructed for divided parts of the dataset, and a single new tree is constructed from the previously generated trees using an advanced pruning technique. Then, closely related sequences are extracted by applying threshold conditions, and by using shift operations in both directions an optimal sequence alignment is obtained.
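As background for the pairwise alignment methods discussed above, the classic Needleman-Wunsch global alignment score can be computed by dynamic programming. A minimal sketch (score only, no traceback; the scoring parameters are textbook defaults, not the paper's):

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score of sequences a and b by dynamic programming."""
    rows, cols = len(a) + 1, len(b) + 1
    dp = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):          # aligning a prefix against nothing
        dp[i][0] = dp[i - 1][0] + gap
    for j in range(1, cols):
        dp[0][j] = dp[0][j - 1] + gap
    for i in range(1, rows):
        for j in range(1, cols):
            diag = dp[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            dp[i][j] = max(diag, dp[i - 1][j] + gap, dp[i][j - 1] + gap)
    return dp[-1][-1]
```

The quadratic cost of this table is exactly why pairwise methods stop scaling beyond a limited number of short sequences, motivating the consensus-based approach the abstract proposes.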
Performance Efficient DNA Sequence Detection Algorithm (Rahul Shirude)
The document proposes a parallel string matching algorithm using CUDA to perform DNA sequence detection on a GPU. It aims to optimize GPU core utilization and achieve high performance gains over sequential approaches. The proposed algorithm finds correct pattern matches in DNA database sequences. Experimental results show the parallel GPU approach has much higher performance than sequential CPU methods.
This document summarizes a research paper that proposes a hybrid genetic algorithm-particle swarm optimization (GA-PSO) approach for feature selection to design effective small interfering RNAs (siRNAs). The study uses a plant siRNA dataset containing 1,100 sequences and considers five properties (presence/absence of motifs, presence of specific nucleotides, thermodynamic characteristics) represented by 70 original features. A wrapper method evaluates feature subsets selected by the hybrid GA-PSO approach using an artificial neural network classifier. Results show the hybrid model improves predictive accuracy for siRNA design over general PSO alone. The hybrid approach aims to optimize feature selection performance by combining the exploration of GA with the exploitation of PSO.
siRNA has become an indispensable tool for silencing gene expression. It can act as an antiviral agent in the RNAi pathway against plant diseases caused by plant viruses. However, identifying appropriate features for effective siRNA design has become a pressing issue that researchers need to resolve. Feature selection is a vital pre-processing technique for bioinformatics data sets, used to find the most discriminative information, not only for dimensionality reduction and detection of relevant features but also for minimizing the cost associated with features in designing an accurate learning system. In this paper, we propose an ANN-based feature selection approach using hybrid GA-PSO to select a feature subset by discarding irrelevant features and evaluating the cost of model training. The results show that the proposed hybrid GA-PSO model outperforms general PSO.
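A binary PSO of the kind hybridized here can be sketched as follows: each particle is a bit-mask over the features, velocities are squashed through a sigmoid into flip probabilities, and a wrapper fitness scores each mask. The fitness below is a toy stand-in for the paper's ANN classifier:

```python
import math
import random

def binary_pso(fitness, n_features, n_particles=10, iters=30, seed=0):
    """Minimal binary PSO: particles are bit-masks over the features."""
    rng = random.Random(seed)
    pos = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(n_particles)]
    vel = [[0.0] * n_features for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_fit = [fitness(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_fit[i])
    gbest, gbest_fit = pbest[g][:], pbest_fit[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_features):
                r1, r2 = rng.random(), rng.random()
                v = vel[i][d] + 2.0 * r1 * (pbest[i][d] - pos[i][d]) \
                              + 2.0 * r2 * (gbest[d] - pos[i][d])
                vel[i][d] = max(-4.0, min(4.0, v))         # clamp velocity
                prob = 1.0 / (1.0 + math.exp(-vel[i][d]))  # sigmoid -> flip prob
                pos[i][d] = 1 if rng.random() < prob else 0
            f = fitness(pos[i])
            if f > pbest_fit[i]:
                pbest[i], pbest_fit[i] = pos[i][:], f
                if f > gbest_fit:
                    gbest, gbest_fit = pos[i][:], f
    return gbest, gbest_fit

# Toy wrapper fitness: reward the first three features, penalize the rest.
def toy_fitness(mask):
    return sum(mask[:3]) - sum(mask[3:])

best_mask, best_fit = binary_pso(toy_fitness, n_features=8)
```

In the hybrid, GA crossover and mutation would be interleaved with these velocity updates to keep the swarm exploring; the sketch above shows only the PSO half.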
International Journal of Engineering Research and Development (IJERD Editor)
This document discusses a study that uses the ke-REM (ke-Rule Extraction Method) classifier to predict promoter regions in DNA sequences. The study evaluates the performance of ke-REM compared to existing promoter prediction techniques. ke-REM constructs rules based on attribute-value pairs from a dataset of 106 E. coli DNA sequences, each containing 57 nucleotides. The results show that ke-REM competes well with existing methods for identifying promoter regions in DNA.
Data mining project topics for Java and .NET (redpel dot com)
This document discusses several papers related to data mining and machine learning techniques. It begins with a brief summary of each paper, discussing the key contributions and findings. The summaries cover topics such as differential privacy-preserving data anonymization, fault detection in power systems using decision trees, temporal pattern searching in event data, high dimensional indexing for similarity search, landmark-based approximate shortest path computation, feature selection for high dimensional data, temporal pattern mining in data streams, data leakage detection, keyword search in spatial databases, analyzing relationships on Wikipedia, improving recommender systems using user-item subgroups, decision trees for uncertain data, and building confidential query services in the cloud using data perturbation.
This document describes a project submitted by Ashish Singh Tomar to fulfill the requirements for a Bachelor's Degree in Bioinformatics. The project involves developing a next-generation sequencing (NGS) data analysis pipeline using the R statistical package. The document includes sections acknowledging contributions from colleagues and supervisors, as well as chapters outlining the aims, methodology, and results of the project.
An interactive approach to multiobjective clustering of gene expression patterns (Ravi Kumar)
This document describes an interactive genetic algorithm-based multi-objective approach to cluster gene expression patterns. The proposed Interactive Multi-Objective Clustering (IMOC) algorithm simultaneously evolves the set of validity measures to optimize and finds the clustering solution. It takes input from a human decision maker during execution to learn the best validity measures and clustering for the gene expression data. The algorithm is applied to benchmark gene expression datasets and shows more biologically significant clusters than other clustering algorithms.
Delineation of techniques to implement on the enhanced proposed model using d... (ijdms)
In the post-genomic era, with the advent of new technologies, a huge amount of complex molecular data is generated at high throughput. Managing this biological data to discover new knowledge is definitely a challenging task, owing to the complexity and heterogeneity of the data, and issues like noisy and incomplete data need to be dealt with. The use of data mining in the biological domain has had notable success, yet discovering new knowledge from biological data remains a major challenge for data mining techniques. The novelty of the proposed model is its combined use of intelligent techniques to classify protein sequences quickly and efficiently. Using FFT, a fuzzy classifier, a string-weighted algorithm, a gram encoding method, a neural network model and a rough set classifier in a single model, each in its appropriate place, can enhance the quality of the classification system. Thus the primary challenge is to identify and classify large protein sequences in a fast, easy, yet intelligent way that decreases both time complexity and space complexity.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
In this research, a hybrid wrapper model is proposed to identify the featured gene subset from gene expression data. To balance exploration and exploitation, a hybrid model combining a popular meta-heuristic algorithm, the spider monkey optimizer (SMO), with simulated annealing (SA) is applied. In the proposed model, ReliefF is used as a filter to obtain the relevant gene subset from the dataset by removing noise and outliers before feeding the data to the SMO wrapper. To enhance solution quality, simulated annealing is deployed as a local search alongside the SMO in the second phase, which guides the detection of the most optimal feature subset. To evaluate the performance of the proposed model, a support vector machine (SVM) is used as the fitness function to recognize the most informative biomarker genes from cancer datasets along with University of California, Irvine (UCI) datasets. To further evaluate the model, four different classifiers (SVM, naïve Bayes (NB), decision tree (DT), and k-nearest neighbors (KNN)) are used. From the experimental results and analysis, it is clear that ReliefF-SMO-SA-SVM performs better than its state-of-the-art counterparts. For cancer datasets, our model performs better in terms of accuracy, with a maximum of 99.45%.
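The simulated annealing local-search phase can be sketched as single-bit flips over a feature mask with temperature-controlled acceptance of worse moves. The fitness below is a toy stand-in for the paper's SVM wrapper:

```python
import math
import random

def simulated_annealing(fitness, state, t0=1.0, cooling=0.95, steps=300, seed=1):
    """SA local search over a binary mask, flipping one bit per step."""
    rng = random.Random(seed)
    best, cur = state[:], state[:]
    best_f = cur_f = fitness(cur)
    t = t0
    for _ in range(steps):
        cand = cur[:]
        i = rng.randrange(len(cand))
        cand[i] ^= 1                  # flip one feature in or out
        f = fitness(cand)
        # Always accept improvements; accept worse moves with prob exp(df/t).
        if f >= cur_f or rng.random() < math.exp((f - cur_f) / t):
            cur, cur_f = cand, f
            if f > best_f:
                best, best_f = cand[:], f
        t *= cooling
    return best, best_f

# Toy fitness: agreement with a hidden "ideal" mask (stands in for SVM accuracy).
target = [1, 0, 1, 0, 1, 0]
def toy_fitness(mask):
    return sum(1 for a, b in zip(mask, target) if a == b)

best_mask, best_fit = simulated_annealing(toy_fitness, [0] * 6)
```

In the hybrid, each promising SMO candidate would be handed to this routine for refinement, so the temperature schedule starts warm enough to escape the candidate's local neighborhood and cools into pure hill climbing.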
Enhancement of Error Correction in Quantum Cryptography BB84 ...butest
The document discusses several papers related to enhancing error correction in quantum cryptography protocols, measuring performance of blind source separation algorithms, detecting and counting faces in surveillance images, applying rough sets to web log mining, and using data mining techniques for financial prediction and analyzing RNA/interferon structures of hepatitis C virus.
A Threshold fuzzy entropy based feature selection method applied in various b...IJMER
Large amounts of data are stored and manipulated using various database technologies, and processing all attributes for a particular purpose is a difficult task. To avoid such difficulties, feature selection is applied. In this paper, we collect eight benchmark datasets from the UCI repository. Feature selection is carried out using a fuzzy entropy based relevance measure algorithm with three selection strategies: the mean selection strategy, the half selection strategy, and a neural network for threshold selection. After the features are selected, they are evaluated using Radial Basis Function (RBF) network, Stacking, Bagging, AdaBoostM1, and Ant-miner classification methodologies. The test results show that the neural network threshold selection strategy works well for selecting features, and that the Ant-miner methodology yields better accuracy with the selected features than with the original dataset. The experimental results clearly show that Ant-miner is superior to the other classifiers. Thus, the proposed Ant-miner algorithm could be a suitable method for producing good results with fewer features than the original datasets.
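The selection pipeline can be illustrated with a small sketch. Note the assumptions: plain Shannon entropy estimated from a histogram stands in for the paper's fuzzy entropy based relevance measure, lower entropy is treated as higher relevance, and only the mean and half selection strategies are shown (the neural-network threshold strategy is omitted).

```python
import math

def entropy(values, bins=4):
    """Shannon entropy of a numeric feature, estimated from a histogram."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0
    counts = [0] * bins
    for v in values:
        counts[min(int((v - lo) / width), bins - 1)] += 1
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def select_features(columns, strategy="mean"):
    """Rank features by (low) entropy and keep a subset.

    'mean' keeps features scoring better than the average score;
    'half' keeps the better-scoring half of the features.
    """
    scores = {name: entropy(col) for name, col in columns.items()}
    if strategy == "mean":
        thresh = sum(scores.values()) / len(scores)
        return sorted(n for n, s in scores.items() if s <= thresh)
    ranked = sorted(scores, key=scores.get)
    return sorted(ranked[: len(ranked) // 2])
```

A near-constant feature (low entropy) survives both strategies, while a uniformly spread feature is dropped, mirroring the idea of retaining only the most relevant attributes before classification.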
The authors present two novel progressive duplicate detection algorithms called Progressive Sorted Neighborhood Method (PSNM) and Progressive Blocking (PB) that improve the efficiency of duplicate detection over traditional approaches. PSNM works best on small, clean datasets by sorting records and comparing those within a sliding window, prioritizing nearby records. PB works best on large, dirty datasets by progressively combining blocks of records based on likelihood of matching. Experiments show these algorithms can double the efficiency of traditional methods and outperform related work by finding more duplicate pairs earlier within a given time frame.
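The sliding-window idea behind PSNM can be sketched as follows. The `key` and `similar` functions are illustrative placeholders for the algorithm's sorting key and similarity measure; the progressive aspect is captured by emitting the closest-rank comparisons first, so likely duplicates surface early.

```python
def psnm(records, key, similar, window=3):
    """Progressive Sorted Neighborhood sketch: sort records by a key,
    then compare pairs rank-distance by rank-distance, so near
    neighbours after sorting (the likeliest duplicates) come first.
    Returns index pairs in the order they were found.
    """
    order = sorted(range(len(records)), key=lambda i: key(records[i]))
    pairs = []
    for dist in range(1, window):          # closest ranks first
        for i in range(len(order) - dist):
            a, b = records[order[i]], records[order[i + dist]]
            if similar(a, b):
                pairs.append(tuple(sorted((order[i], order[i + dist]))))
    return pairs

# toy run: a crude 2-character-prefix similarity as placeholder
records = ["john smith", "jon smith", "mary jones", "marry jones", "zz top"]
found = psnm(records, key=lambda r: r, similar=lambda a, b: a[:2] == b[:2])
```

Within a fixed time budget, stopping this loop early still yields the highest-likelihood pairs, which is the efficiency argument the experiments above make.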
TUNNELING IN HIMALAYAS WITH NATM METHOD: A SPECIAL REFERENCES TO SUNGAL TUNNE...IRJET Journal
1) The document discusses the Sungal Tunnel project in Jammu and Kashmir, India, which is being constructed using the New Austrian Tunneling Method (NATM).
2) NATM involves continuous monitoring during construction to adapt to changing ground conditions, and makes extensive use of shotcrete for temporary tunnel support.
3) The methodology section outlines the systematic geotechnical design process for tunnels according to Austrian guidelines, and describes the various steps of NATM tunnel construction including initial and secondary tunnel support.
STUDY THE EFFECT OF RESPONSE REDUCTION FACTOR ON RC FRAMED STRUCTUREIRJET Journal
This study examines the effect of response reduction factors (R factors) on reinforced concrete (RC) framed structures through nonlinear dynamic analysis. Three RC frame models with varying heights (4, 8, and 12 stories) were analyzed in ETABS software under different R factors ranging from 1 to 5. The results showed that displacement increased as the R factor decreased, indicating less linear behavior for lower R factors. Drift also decreased proportionally with increasing R factors from 1 to 5. Shear forces in the frames decreased with higher R factors. In general, R factors of 3 to 5 produced more satisfactory performance with less displacement and drift. The displacement variations between different building heights were consistent at different R factors. This study evaluated how R factors influence
A COMPARATIVE ANALYSIS OF RCC ELEMENT OF SLAB WITH STARK STEEL (HYSD STEEL) A...IRJET Journal
This study compares the use of Stark Steel and TMT Steel as reinforcement materials in a two-way reinforced concrete slab. Mechanical testing is conducted to determine the tensile strength, yield strength, and other properties of each material. A two-way slab design adhering to codes and standards is executed with both materials. The performance is analyzed in terms of deflection, stability under loads, and displacement. Cost analyses accounting for material, durability, maintenance, and life cycle costs are also conducted. The findings provide insights into the economic and structural implications of each material for reinforcement selection and recommendations on the most suitable material based on the analysis.
Effect of Camber and Angles of Attack on Airfoil CharacteristicsIRJET Journal
This document discusses a study analyzing the effect of camber, position of camber, and angle of attack on the aerodynamic characteristics of airfoils. Sixteen modified asymmetric NACA airfoils were analyzed using computational fluid dynamics (CFD) by varying the camber, camber position, and angle of attack. The results showed the relationship between these parameters and the lift coefficient, drag coefficient, and lift to drag ratio. This provides insight into how changes in airfoil geometry impact aerodynamic performance.
A Review on the Progress and Challenges of Aluminum-Based Metal Matrix Compos...IRJET Journal
This document reviews the progress and challenges of aluminum-based metal matrix composites (MMCs), focusing on their fabrication processes and applications. It discusses how various aluminum MMCs have been developed using reinforcements like borides, carbides, oxides, and nitrides to improve mechanical and wear properties. These composites have gained prominence for their lightweight, high-strength and corrosion resistance properties. The document also examines recent advancements in fabrication techniques for aluminum MMCs and their growing applications in industries such as aerospace and automotive. However, it notes that challenges remain around issues like improper mixing of reinforcements and reducing reinforcement agglomeration.
Dynamic Urban Transit Optimization: A Graph Neural Network Approach for Real-...IRJET Journal
This document discusses research on using graph neural networks (GNNs) for dynamic optimization of public transportation networks in real-time. GNNs represent transit networks as graphs with nodes as stops and edges as connections. The GNN model aims to optimize networks using real-time data on vehicle locations, arrival times, and passenger loads. This helps increase mobility, decrease traffic, and improve efficiency. The system continuously trains and infers to adapt to changing transit conditions, providing decision support tools. While research has focused on performance, more work is needed on security, socio-economic impacts, contextual generalization of models, continuous learning approaches, and effective real-time visualization.
Structural Analysis and Design of Multi-Storey Symmetric and Asymmetric Shape...IRJET Journal
This document summarizes a research project that aims to compare the structural performance of conventional slab and grid slab systems in multi-story buildings using ETABS software. The study will analyze both symmetric and asymmetric building models under various loading conditions. Parameters like deflections, moments, shears, and stresses will be examined to evaluate the structural effectiveness of each slab type. The results will provide insights into the comparative behavior of conventional and grid slabs to help engineers and architects select appropriate slab systems based on building layouts and design requirements.
A Review of “Seismic Response of RC Structures Having Plan and Vertical Irreg...IRJET Journal
This document summarizes and reviews a research paper on the seismic response of reinforced concrete (RC) structures with plan and vertical irregularities, with and without infill walls. It discusses how infill walls can improve or reduce the seismic performance of RC buildings, depending on factors like wall layout, height distribution, connection to the frame, and relative stiffness of walls and frames. The reviewed research paper analyzes the behavior of infill walls, effects of vertical irregularities, and seismic performance of high-rise structures under linear static and dynamic analysis. It studies response characteristics like story drift, deflection and shear. The document also provides literature on similar research investigating the effects of infill walls, soft stories, plan irregularities, and different
This document provides a review of machine learning techniques used in Advanced Driver Assistance Systems (ADAS). It begins with an abstract that summarizes key applications of machine learning in ADAS, including object detection, recognition, and decision-making. The introduction discusses the integration of machine learning in ADAS and how it is transforming vehicle safety. The literature review then examines several research papers on topics like lightweight deep learning models for object detection and lane detection models using image processing. It concludes by discussing challenges and opportunities in the field, such as improving algorithm robustness and adaptability.
Long Term Trend Analysis of Precipitation and Temperature for Asosa district,...IRJET Journal
The document analyzes temperature and precipitation trends in Asosa District, Benishangul Gumuz Region, Ethiopia from 1993 to 2022 based on data from the local meteorological station. The results show:
1) The average maximum and minimum annual temperatures have generally decreased over time, with trend slopes of -0.0341 for maximum and -0.0152 for minimum temperatures.
2) Mann-Kendall tests found the decreasing temperature trends to be statistically significant for annual maximum temperatures but not for annual minimum temperatures.
3) Annual precipitation in Asosa District showed a statistically significant increasing trend.
The conclusions recommend development planners account for rising summer precipitation and declining temperatures in
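The Mann-Kendall test referenced in point 2 is built on the S statistic, which counts concordant minus discordant pairs in the series; a minimal sketch of that core computation (the variance-based significance step is omitted):

```python
def mann_kendall_s(series):
    """Mann-Kendall S statistic: the number of later-value increases
    minus decreases over all ordered pairs. Positive S indicates an
    increasing trend, negative S a decreasing one; significance testing
    would additionally compare S against its variance under the null.
    """
    s = 0
    n = len(series)
    for i in range(n - 1):
        for j in range(i + 1, n):
            diff = series[j] - series[i]
            s += (diff > 0) - (diff < 0)
    return s
```

A strictly increasing series of length 5 gives the maximum S of 10; the Asosa temperature series would yield a negative S, consistent with the reported decreasing trends.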
P.E.B. Framed Structure Design and Analysis Using STAAD ProIRJET Journal
This document discusses the design and analysis of pre-engineered building (PEB) framed structures using STAAD Pro software. It provides an overview of PEBs, including that they are designed off-site with building trusses and beams produced in a factory. STAAD Pro is identified as a key tool for modeling, analyzing, and designing PEBs to ensure their performance and safety under various load scenarios. The document outlines modeling structural parts in STAAD Pro, evaluating structural reactions, assigning loads, and following international design codes and standards. In summary, STAAD Pro is used to design and analyze PEB framed structures to ensure safety and code compliance.
A Review on Innovative Fiber Integration for Enhanced Reinforcement of Concre...IRJET Journal
This document provides a review of research on innovative fiber integration methods for reinforcing concrete structures. It discusses studies that have explored using carbon fiber reinforced polymer (CFRP) composites with recycled plastic aggregates to develop more sustainable strengthening techniques. It also examines using ultra-high performance fiber reinforced concrete to improve shear strength in beams. Additional topics covered include the dynamic responses of FRP-strengthened beams under static and impact loads, and the performance of preloaded CFRP-strengthened fiber reinforced concrete beams. The review highlights the potential of fiber composites to enable more sustainable and resilient construction practices.
Survey Paper on Cloud-Based Secured Healthcare SystemIRJET Journal
This document summarizes a survey on securing patient healthcare data in cloud-based systems. It discusses using technologies like facial recognition, smart cards, and cloud computing combined with strong encryption to securely store patient data. The survey found that healthcare professionals believe digitizing patient records and storing them in a centralized cloud system would improve access during emergencies and enable more efficient care compared to paper-based systems. However, ensuring privacy and security of patient data is paramount as healthcare incorporates these digital technologies.
Review on studies and research on widening of existing concrete bridgesIRJET Journal
This document summarizes several studies that have been conducted on widening existing concrete bridges. It describes a study from China that examined load distribution factors for a bridge widened with composite steel-concrete girders. It also outlines challenges and solutions for widening a bridge in the UAE, including replacing bearings and stitching the new and existing structures. Additionally, it discusses two bridge widening projects in New Zealand that involved adding precast beams and stitching to connect structures. Finally, safety measures and challenges for strengthening a historic bridge in Switzerland under live traffic are presented.
React based fullstack edtech web applicationIRJET Journal
The document describes the architecture of an educational technology web application built using the MERN stack. It discusses the frontend developed with ReactJS, backend with NodeJS and ExpressJS, and MongoDB database. The frontend provides dynamic user interfaces, while the backend offers APIs for authentication, course management, and other functions. MongoDB enables flexible data storage. The architecture aims to provide a scalable, responsive platform for online learning.
A Comprehensive Review of Integrating IoT and Blockchain Technologies in the ...IRJET Journal
This paper proposes integrating Internet of Things (IoT) and blockchain technologies to help implement objectives of India's National Education Policy (NEP) in the education sector. The paper discusses how blockchain could be used for secure student data management, credential verification, and decentralized learning platforms. IoT devices could create smart classrooms, automate attendance tracking, and enable real-time monitoring. Blockchain would ensure integrity of exam processes and resource allocation, while smart contracts automate agreements. The paper argues this integration has potential to revolutionize education by making it more secure, transparent and efficient, in alignment with NEP goals. However, challenges like infrastructure needs, data privacy, and collaborative efforts are also discussed.
A REVIEW ON THE PERFORMANCE OF COCONUT FIBRE REINFORCED CONCRETE.IRJET Journal
This document provides a review of research on the performance of coconut fibre reinforced concrete. It summarizes several studies that tested different volume fractions and lengths of coconut fibres in concrete mixtures with varying compressive strengths. The studies found that coconut fibre improved properties like tensile strength, toughness, crack resistance, and spalling resistance compared to plain concrete. Volume fractions of 2-5% and fibre lengths of 20-50mm produced the best results. The document concludes that using a 4-5% volume fraction of coconut fibres 30-40mm in length with M30-M60 grade concrete would provide benefits based on previous research.
Optimizing Business Management Process Workflows: The Dynamic Influence of Mi...IRJET Journal
The document discusses optimizing business management processes through automation using Microsoft Power Automate and artificial intelligence. It provides an overview of Power Automate's key components and features for automating workflows across various apps and services. The document then presents several scenarios applying automation solutions to common business processes like data entry, monitoring, HR, finance, customer support, and more. It estimates the potential time and cost savings from implementing automation for each scenario. Finally, the conclusion emphasizes the transformative impact of AI and automation tools on business processes and the need for ongoing optimization.
Multistoried and Multi Bay Steel Building Frame by using Seismic DesignIRJET Journal
The document describes the seismic design of a G+5 steel building frame located in Roorkee, India according to Indian codes IS 1893-2002 and IS 800. The frame was analyzed using the equivalent static load method and response spectrum method, and its response in terms of displacements and shear forces were compared. Based on the analysis, the frame was designed as a seismic-resistant steel structure according to IS 800:2007. The software STAAD Pro was used for the analysis and design.
Cost Optimization of Construction Using Plastic Waste as a Sustainable Constr...IRJET Journal
This research paper explores using plastic waste as a sustainable and cost-effective construction material. The study focuses on manufacturing pavers and bricks using recycled plastic and partially replacing concrete with plastic alternatives. Initial results found that pavers and bricks made from recycled plastic demonstrate comparable strength and durability to traditional materials while providing environmental and cost benefits. Additionally, preliminary research indicates incorporating plastic waste as a partial concrete replacement significantly reduces construction costs without compromising structural integrity. The outcomes suggest adopting plastic waste in construction can address plastic pollution while optimizing costs, promoting more sustainable building practices.
Comparative analysis between traditional aquaponics and reconstructed aquapon...bijceesjournal
The aquaponic system of planting is a method that does not require soil usage. It is a method that only needs water, fish, lava rocks (a substitute for soil), and plants. Aquaponic systems are sustainable and environmentally friendly. Its use not only helps to plant in small spaces but also helps reduce artificial chemical use and minimizes excess water use, as aquaponics consumes 90% less water than soil-based gardening. The study applied a descriptive and experimental design to assess and compare conventional and reconstructed aquaponic methods for reproducing tomatoes. The researchers created an observation checklist to determine the significant factors of the study. The study aims to determine the significant difference between traditional aquaponics and reconstructed aquaponics systems propagating tomatoes in terms of height, weight, girth, and number of fruits. The reconstructed aquaponics system’s higher growth yield results in a much more nourished crop than the traditional aquaponics system. It is superior in its number of fruits, height, weight, and girth measurement. Moreover, the reconstructed aquaponics system is proven to eliminate all the hindrances present in the traditional aquaponics system, which are overcrowding of fish, algae growth, pest problems, contaminated water, and dead fish.
Discover the latest insights on Data Driven Maintenance with our comprehensive webinar presentation. Learn about traditional maintenance challenges, the right approach to utilizing data, and the benefits of adopting a Data Driven Maintenance strategy. Explore real-world examples, industry best practices, and innovative solutions like FMECA and the D3M model. This presentation, led by expert Jules Oudmans, is essential for asset owners looking to optimize their maintenance processes and leverage digital technologies for improved efficiency and performance. Download now to stay ahead in the evolving maintenance landscape.
An improved modulation technique suitable for a three level flying capacitor ...IJECEIAES
This research paper introduces an innovative modulation technique for controlling a 3-level flying capacitor multilevel inverter (FCMLI), aiming to streamline the modulation process compared with conventional methods. The proposed simplified modulation technique paves the way for more straightforward and efficient control of multilevel inverters, enabling their widespread adoption and integration into modern power electronic systems. By combining sinusoidal pulse width modulation (SPWM) with a high-frequency square wave pulse, the control technique attains energy equilibrium across the coupling capacitor. The modulation scheme incorporates a simplified switching pattern and a reduced number of voltage references, thereby simplifying the control algorithm.
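The paper's exact scheme is not reproduced here, but the general idea of generating three output levels by carrier comparison can be illustrated with a standard phase-disposition SPWM sketch; the reference frequency, carrier frequency, and modulation index below are arbitrary illustrative values, not the paper's.

```python
import math

def three_level_output(t, m=0.8, f_ref=50.0, f_carrier=2000.0):
    """Phase-disposition SPWM for one 3-level inverter leg: a sinusoidal
    reference is compared against two level-shifted triangular carriers,
    yielding the switching level in {-1, 0, +1} (scaled by Vdc/2).
    """
    ref = m * math.sin(2 * math.pi * f_ref * t)
    phase = (t * f_carrier) % 1.0
    tri = 1.0 - abs(2.0 * phase - 1.0)   # upper carrier, triangle in [0, 1]
    if ref > tri:                        # reference above upper carrier
        return 1
    if ref < tri - 1.0:                  # below lower carrier (in [-1, 0])
        return -1
    return 0

# sample one 50 Hz fundamental period at 100 kHz
levels = {three_level_output(k / 100000.0) for k in range(2000)}
```

Averaged over a carrier period, the pulse pattern tracks the sinusoidal reference, which is the property any simplified 3-level scheme must preserve.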
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...IJECEIAES
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to precisely delineate tumor boundaries from magnetic resonance imaging (MRI) scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating the state-of-the-art DeepLabv3+ architecture with a ResNet18 backbone. The model is rigorously trained and evaluated, exhibiting strong performance metrics: a global accuracy of 99.286%, a class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. A detailed comparative analysis with existing methods shows the superiority of the proposed model. These findings underscore the model’s competence in precise brain tumor localization, highlighting its potential to advance medical image analysis and healthcare outcomes. This research paves the way for further exploration and optimization of advanced CNN models in medical imaging, with emphasis on addressing false positives and resource efficiency.
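The mean and weighted IoU figures reported above follow standard definitions and can be computed from a segmentation confusion matrix; this is a generic sketch, not the paper's evaluation code.

```python
def iou_metrics(confusion):
    """Per-class IoU from a confusion matrix C, where C[i][j] counts
    pixels of true class i predicted as class j:
        IoU_c = TP_c / (TP_c + FP_c + FN_c).
    Mean IoU averages classes equally; weighted IoU weights each class
    by its pixel frequency, so dominant classes (e.g. background in
    brain MRI) pull the weighted score upward.
    """
    n = len(confusion)
    total = sum(sum(row) for row in confusion)
    ious, weights = [], []
    for c in range(n):
        tp = confusion[c][c]
        fn = sum(confusion[c]) - tp
        fp = sum(confusion[r][c] for r in range(n)) - tp
        ious.append(tp / (tp + fp + fn))
        weights.append(sum(confusion[c]) / total)
    mean_iou = sum(ious) / n
    weighted_iou = sum(w * i for w, i in zip(weights, ious))
    return mean_iou, weighted_iou
```

The gap between the study's mean IoU (79.900%) and weighted IoU (98.620%) is typical when one class dominates the pixel count, exactly the effect the weighting captures.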
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
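Of the three controllers compared, the PI law is the simplest; a minimal discrete-time sketch on a toy first-order plant (not the DFIG model; the plant, gains, and time constant are illustrative) shows how the integral term removes steady-state error in reference tracking:

```python
def simulate_pi(kp=2.0, ki=5.0, ref=1.0, dt=0.001, steps=5000):
    """Discrete PI loop driving a first-order plant
        dy/dt = (u - y) / tau
    toward a set-point. The proportional term reacts to the current
    error; the integral term accumulates past error and eliminates
    the steady-state offset a pure P controller would leave.
    """
    tau, y, integral = 0.05, 0.0, 0.0
    for _ in range(steps):
        error = ref - y
        integral += error * dt
        u = kp * error + ki * integral       # PI control law
        y += (u - y) / tau * dt              # Euler step of the plant
    return y

final = simulate_pi()   # converges close to the reference of 1.0
```

SMC and SOSMC trade this simplicity for robustness: they replace the linear law with a switching (and, for SOSMC, higher-order) term that better rejects parameter variations, which is the comparison the paper carries out.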
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses of human lives, property, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. It is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions, and it effectively detects potential risks, helping to mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Data collection covered three key road events (normal driving on a normal street, speed bumps, and circular yellow speed bumps) and three aggressive driving actions (sudden start, sudden stop, and sudden entry). The gathered data is processed and analyzed using a machine learning system designed for devices with limited power and memory. The developed system achieved 91.9% accuracy, 93.6% precision, and 92% recall. The inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms, with 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
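The reported accuracy, precision, and recall follow the standard definitions over binary confusion counts; a minimal sketch (the counts below are illustrative, not the paper's):

```python
def prf(tp, fp, fn, tn):
    """Accuracy, precision, and recall from binary confusion counts.

    precision = TP / (TP + FP): how many flagged events were real;
    recall    = TP / (TP + FN): how many real events were flagged.
    """
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return accuracy, precision, recall
```

For a driving-event detector, high recall matters most (missed aggressive events are costly), which makes the system's 92% recall alongside 93.6% precision a meaningful balance.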