Abstract
Motivation: A gene regulatory network is a network-based representation of the interactions between genes. DNA microarray is the most widely used technology for extracting the relationships among thousands of genes simultaneously. A gene microarray experiment provides gene expression data for a particular condition over varying time periods, and the expression of a particular gene depends on the biological conditions and on other genes. In this paper, we propose a new method for the analysis of microarray data. The proposed method makes use of the S-system, a well-accepted model for gene regulatory network reconstruction. Since the problem has multiple solutions, an optimized solution must be identified, and evolutionary algorithms have been used to solve such problems. Although a number of attempts have already been made by various researchers, the solutions remain unsatisfactory with respect to both the time taken and the degree of accuracy achieved, so substantial further work is needed to achieve better-performing solutions.
Results: In this work, we propose a clonal selection algorithm for identifying an optimal gene regulatory network. The approach is tested on real-life data: the SOS E. coli DNA repair gene expression data. The proposed algorithm converges much faster and provides better results than the existing algorithms.
Index Terms: Microarray analysis, Evolutionary Algorithm, Artificial Immune System, S-system, Gene Regulatory Network, SOS E. coli DNA repair, Clonal Selection Algorithm.
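The S-system named above has a fixed power-law form. A minimal sketch of how such a model generates time-course expression data follows; all rate constants and kinetic orders below are made-up illustrative values, not parameters fitted to the SOS data:

```python
import math

def s_system_step(x, alpha, beta, g, h, dt=0.01):
    """One Euler step of an S-system:
    dx_i/dt = alpha_i * prod_j x_j^g_ij - beta_i * prod_j x_j^h_ij"""
    n = len(x)
    nxt = []
    for i in range(n):
        prod_g = math.prod(x[j] ** g[i][j] for j in range(n))  # production term
        prod_h = math.prod(x[j] ** h[i][j] for j in range(n))  # degradation term
        nxt.append(max(x[i] + dt * (alpha[i] * prod_g - beta[i] * prod_h), 1e-9))
    return nxt

# Toy two-gene network: gene 0 is constitutively produced, gene 1 is activated by gene 0.
alpha, beta = [1.0, 1.0], [0.5, 0.5]
g = [[0.0, 0.0], [0.8, 0.0]]   # kinetic orders of the production terms
h = [[1.0, 0.0], [0.0, 1.0]]   # first-order self-degradation
x = [0.5, 0.5]
for _ in range(1000):           # integrate to t = 10
    x = s_system_step(x, alpha, beta, g, h)
```

At steady state gene 0 settles near alpha_0/beta_0 = 2; an inference method like the one in the abstract searches over alpha, beta, g, and h so that such trajectories reproduce the measured microarray time series.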
Application of Hybrid Genetic Algorithm Using Artificial Neural Network in Da...IOSRjournaljce
The main purpose of data mining is to extract knowledge from large amounts of data. Artificial neural networks (ANNs) have already been applied in a variety of domains with remarkable success. This paper presents a hybrid model for stroke disease that integrates a genetic algorithm with the backpropagation algorithm. Selecting a good subset of features without sacrificing accuracy is of great importance for neural networks to be applied successfully in this area. In addition, the hybrid model leads to further improved categorization accuracy compared to the result produced by the genetic algorithm alone. In this study, a new hybrid model of neural networks and a genetic algorithm (GA) is used to initialize and optimize the connection weights of the ANN so as to improve its performance, and the model is applied to the medical problem of predicting stroke disease to verify the results.
Improving the effectiveness of information retrieval system using adaptive ge...ijcsit
The document describes research into improving the effectiveness of information retrieval systems using an adaptive genetic algorithm. A genetic algorithm with variable crossover and mutation probabilities (adaptive GA) is investigated. The adaptive GA is tested on 242 Arabic abstracts using three information retrieval models: vector space model, extended Boolean model, and language model. Results show the adaptive GA approach improves retrieval effectiveness over traditional genetic algorithms and baseline information retrieval systems, as measured by average recall and precision. Key aspects of the adaptive GA used include variable crossover and mutation probabilities tuned during the search process, and fitness functions based on document retrieval order.
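The variable crossover and mutation probabilities mentioned above can be sketched in the style of a standard adaptive GA; the k1–k4 constants below are illustrative defaults, not the values used in the paper:

```python
def adaptive_rates(f, f_avg, f_max, k1=1.0, k2=0.5, k3=1.0, k4=0.5):
    """Per-individual adaptive crossover (pc) and mutation (pm) probabilities:
    individuals fitter than the population average get lower rates (so good
    solutions are preserved); below-average individuals get the full rates
    (so weak regions of the search space keep being explored)."""
    if f_max == f_avg:               # degenerate population: fall back to fixed rates
        return k3, k4
    if f >= f_avg:
        scale = (f_max - f) / (f_max - f_avg)
        return k1 * scale, k2 * scale
    return k3, k4
```

For example, `adaptive_rates(3, 5, 10)` returns the full `(1.0, 0.5)` for a weak individual, while the best individual gets `(0.0, 0.0)` and is carried over unchanged.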
A genetic algorithm approach for predicting ribonucleic acid sequencing data ...TELKOMNIKA JOURNAL
Malaria parasites have a complex, variable life cycle as they spread through numerous mosquito vectors, and transcriptomes vary across thousands of diverse parasites. Ribonucleic acid sequencing (RNA-seq) is a prevalent gene expression profiling technology that has led to an enhanced understanding of genetic questions. RNA-seq measures transcript abundance and provides data well suited to machine learning procedures. Researchers have proposed several methods for evaluating and learning from biological data. In this study, a genetic algorithm (GA) is used as a feature selection process to extract relevant information from the RNA-Seq Anopheles gambiae malaria vector dataset, and the results are evaluated using k-nearest neighbor (KNN) and decision tree classification algorithms. The experiments obtained classification accuracies of 88.3% and 98.3%, respectively.
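A minimal sketch of GA-based feature selection of this kind, using a 1-NN classifier (a simplified stand-in for the paper's KNN) as the fitness function on a made-up toy dataset:

```python
import random

def one_nn_accuracy(train, test, mask):
    """Score a feature mask: classify each test point by its nearest training
    point, measuring distance only over the features the binary mask selects."""
    def dist(a, b):
        return sum((a[i] - b[i]) ** 2 for i in range(len(mask)) if mask[i])
    return sum(min(train, key=lambda t: dist(t[0], x))[1] == y for x, y in test) / len(test)

def ga_select(train, test, n_feats, pop=20, gens=30, pm=0.1):
    """Evolve binary feature masks with truncation selection, one-point
    crossover, and bit-flip mutation; fitness is 1-NN accuracy."""
    random.seed(0)                       # fixed seed: deterministic toy run
    fit = lambda m: one_nn_accuracy(train, test, m) if any(m) else 0.0
    population = [[random.randint(0, 1) for _ in range(n_feats)] for _ in range(pop)]
    for _ in range(gens):
        parents = sorted(population, key=fit, reverse=True)[:pop // 2]
        children = []
        while len(parents) + len(children) < pop:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_feats)
            children.append([1 - bit if random.random() < pm else bit
                             for bit in a[:cut] + b[cut:]])
        population = parents + children
    return max(population, key=fit)

# Made-up data: feature 0 equals the class label; features 1-2 are pure noise.
random.seed(1)
make = lambda c: ([float(c)] + [random.uniform(0, 10) for _ in range(2)], c)
train_set = [make(c) for c in (0, 1) for _ in range(10)]
test_set = [make(c) for c in (0, 1) for _ in range(5)]
best_mask = ga_select(train_set, test_set, n_feats=3)
```

On this toy data the evolved mask retains the informative feature 0; the real study evaluates masks over thousands of RNA-seq features instead of three.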
This document describes two machine learning techniques, particle swarm optimization with support vector machines (PSO-SVM) and recursive feature elimination with support vector machines (RFE-SVM), that were used to classify autism neuroimaging data from the Autism Brain Imaging Data Exchange database. PSO-SVM was used to select discriminative features for classification, while RFE-SVM ranked features by importance. Both techniques aimed to improve classification accuracy and reduce overfitting by selecting optimal feature subsets from the high-dimensional neuroimaging data. The results could help develop brain-based diagnostic criteria for autism.
Genome structure prediction a review over soft computing techniqueseSAT Journals
Abstract
Techniques such as spectrometry and crystallography can determine DNA, RNA, or protein structures, and these processes provide very accurate structure estimates. However, these conventional techniques are very slow and can be applied only in a few special cases. Soft computing techniques promise near-optimal results in much less time, have very broad applicability, and are much easier to apply. Different soft computing approaches, including nature-inspired computing, have been used to estimate genome structures with considerable accuracy. This paper provides a review of the different soft computing techniques that have been applied, along with their application methods, for the determination of genome structure.
Keywords: DNA, RNA, proteins, structure, soft computing, techniques.
Prognosticating Autism Spectrum Disorder Using Artificial Neural Network: Lev...Avishek Choudhury
Autism spectrum condition (ASC) or autism spectrum disorder (ASD) is primarily identified through behavioral indications encompassing social, sensory, and motor characteristics. Although repetitive motor actions are assessed during diagnosis, quantifiable measures that capture kinematic characteristics in the movement patterns of autistic persons are not adequately studied, hindering advances in understanding the etiology of motor impairment. Subject aspects, such as behavioral characteristics that influence ASD, need further exploration. Presently, few autism datasets associated with ASD screening are available, and the majority of them are genetic. Hence, in this study we used an autism screening dataset comprising ten behavioral and ten personal attributes that have been effective in distinguishing ASD cases from controls in behavioral science. ASD diagnosis is time-consuming and uneconomical, and the growing number of ASD cases worldwide demands a fast and economical screening tool. Our study aimed to implement an artificial neural network with the Levenberg-Marquardt algorithm to detect ASD and examine its predictive accuracy, and subsequently to develop a clinical decision support system for early ASD identification.
IRJET- Prediction of Heart Disease using RNN AlgorithmIRJET Journal
This document discusses using a recurrent neural network (RNN) algorithm to predict heart disease. It proposes a method called prognosis prediction using RNN (PP-RNN) that uses multiple RNNs to learn from patient diagnosis code sequences in order to predict high-risk diseases. The experimental results show that the proposed PP-RNN method can achieve more accurate results than existing methods for predicting heart disease risk. It also provides background on related works using other techniques like decision trees, clustering, and AdaBoost for heart disease prediction.
This document describes a study that uses machine learning algorithms to efficiently predict DNA-binding proteins. Support vector machines and cascade correlation neural networks are optimized and compared to determine the best performing model. The SVM model achieves 86.7% accuracy at predicting DNA-binding proteins using features like overall charge, patch size, and amino acid composition of proteins. The CCNN model achieves lower accuracy of 75.4%. The study aims to improve on previous work by using the standard jack-knife validation technique to evaluate model performance on unseen data.
This document summarizes a research paper that proposes using a genetic algorithm to efficiently cluster wireless sensor nodes. The genetic algorithm aims to minimize the total communication distance between sensors and the base station in order to prolong the network lifetime. Simulation results showed that the genetic algorithm can quickly find good clustering solutions that reduce energy consumption compared to previous clustering methods. The full paper provides details on wireless sensor networks, related clustering algorithms, genetic algorithms, and the proposed genetic algorithm-based clustering method.
Delineation of techniques to implement on the enhanced proposed model using d...ijdms
In the post-genomic era, with the advent of new technologies, a huge amount of complex molecular data is generated at high throughput. Managing this biological data to discover new knowledge is a challenging task due to the complexity and heterogeneity of the data, and issues such as noisy and incomplete data need to be dealt with. Data mining has been applied in the biological domain with notable success, yet discovering new knowledge from biological data remains a major challenge for data mining techniques. The novelty of the proposed model is its combined use of intelligent techniques to classify protein sequences quickly and efficiently. Using FFT, a fuzzy classifier, a string-weighted algorithm, a gram encoding method, a neural network model, and a rough set classifier in a single model, each in an appropriate place, can enhance the quality of the classification system. Thus the primary challenge is to identify and classify large protein sequences in a fast, easy, yet intelligent way that decreases both time complexity and space complexity.
Software Defect Prediction Using Radial Basis and Probabilistic Neural NetworksEditor IJCATR
This document discusses using neural networks for software defect prediction. It examines the effectiveness of using a radial basis function neural network and a probabilistic neural network on prediction accuracy and defect prediction compared to other techniques. The key findings are that neural networks provide an acceptable level of accuracy for defect prediction but perform poorly at actual defect prediction. Probabilistic neural networks performed consistently better than other techniques across different datasets in terms of prediction accuracy and defect prediction ability. The document recommends using an ensemble of different software defect prediction models rather than relying on a single technique.
AN EFFICIENT PSO BASED ENSEMBLE CLASSIFICATION MODEL ON HIGH DIMENSIONAL DATA...ijsc
As biomedical databases grow day by day, finding essential features for disease prediction has become more complex due to high dimensionality and sparsity. Moreover, given the large number of microarray datasets in biomedical repositories, it is difficult to analyze, predict, and interpret feature information using traditional feature-selection-based classification models, most of which suffer from computational issues such as dimension reduction, uncertainty, and class imbalance on microarray datasets. An ensemble classifier is one of the scalable models for extreme learning machines due to its high efficiency and fast processing speed for real-time applications. The main objective of feature-selection-based ensemble learning models is to classify high-dimensional data with high computational efficiency and a high true positive rate. In this work, an optimized particle swarm optimization (PSO) based ensemble classification model was developed for high-dimensional microarray datasets. Experimental results showed that the proposed model has high computational efficiency compared to traditional feature-selection-based classification models as far as accuracy, true positive rate, and error rate are concerned.
Classification of medical datasets using back propagation neural network powe...IJECEIAES
Classification is one of the most indispensable tasks in data mining and machine learning. The classification process has a good reputation in computer-based disease diagnosis, where progress in smart computer technologies can be invested in diagnosing various diseases based on data from real patients documented in databases. The paper introduces a methodology for diagnosing a set of diseases, including two types of cancer (breast and lung), two diabetes datasets, and heart attack. A backpropagation neural network plays the role of classifier. The performance of the neural net is enhanced by a genetic algorithm that provides the classifier with the optimal features, raising the classification rate as high as possible. The system showed high efficiency in dealing with databases that differ from each other in size, number of features, and nature of the data, as the results illustrate: the classification rate reached 100% on most datasets.
The Evaluated Measurement of a Combined Genetic Algorithm and Artificial Immu...IJECEIAES
This paper demonstrates a hybrid of two optimization methods, the Artificial Immune System (AIS) and the Genetic Algorithm (GA). The novel algorithm, called the immune genetic algorithm (IGA), improves on the results that GA and AIS obtain when working separately, which is the main objective of the hybrid. Negative selection, one of the techniques of the AIS, was employed to determine the input variables (populations) of the system. To illustrate the effectiveness of the IGA, comparisons with a steady-state GA, AIS, and PSO were also investigated. Performance testing was conducted on mathematical problems divided into single- and multi-objective cases. Five single-objective functions were used to test the modified algorithm, and the results showed that the IGA performed better than all of the other methods. The DTLZ multi-objective test functions were then used, and the results likewise showed that the modified approach had the best performance.
A Study on Genetic-Fuzzy Based Automatic Intrusion Detection on Network DatasetsDrjabez
1. The document proposes a genetic-fuzzy based method for automatic intrusion detection using network datasets. It combines fuzzy set theory with genetic algorithms to extract rules for both discrete and continuous attributes to detect normal and intrusion patterns.
2. The method was tested on KDD99 Cup and DARPA98 network intrusion detection datasets and showed high detection rates with low false alarm rates for both misuse detection and anomaly detection.
3. By extracting many rules to represent normal network behavior patterns, the proposed genetic-fuzzy approach can detect new or unknown intrusions based on anomalies without requiring prior domain expertise on intrusion patterns.
- The document discusses various approaches for applying machine learning and artificial intelligence to drug discovery.
- It describes how molecules and proteins can be represented as graphs, fingerprints, or sequences to be used as input for models.
- Different tasks in drug discovery like target binding prediction, generative design of new molecules, and drug repurposing are framed as questions that AI models can aim to answer.
- Techniques discussed include graph neural networks, reinforcement learning, and conditional generation using techniques like translation models.
- Several recent works applying these approaches for tasks like predicting drug-target interactions and generating synthesizable molecules are referenced.
Network embedding in biomedical data scienceArindam Ghosh
Excerpts from the paper:
What is it?
Network embedding aims at converting the network into a low-dimensional space while structural information of the network is preserved.
In this way, nodes and/or edges of the network can be represented as compacted yet informative vectors in the embedding space.
Advantages:
Typical non-network-based machine learning methods such as linear regression, Support Vector Machine (SVM) and decision forest, which have been demonstrated to be effective and efficient as the state-of-the-art techniques, can be applied to such vectors.
Current status:
Efforts to apply network embedding to improve biomedical data analysis are already planned or underway.
Difficulties:
The biomedical networks are sparse, noisy, incomplete, heterogeneous and usually consist of biomedical text and other domain knowledge. It makes embedding tasks more complicated than other application fields.
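As a concrete (if much simplified) illustration of the idea, the sketch below embeds a toy graph's nodes via the top eigenvectors of its adjacency matrix. Real network-embedding methods (DeepWalk, node2vec, and the like) are far more elaborate, but the output has the same shape: one small vector per node.

```python
import numpy as np

def embed_nodes(adj, dim=2):
    """Tiny spectral embedding: represent each node by its coordinates in the
    top-`dim` eigenvectors of the adjacency matrix, scaled by sqrt(|eigenvalue|).
    Structurally similar nodes receive similar vectors, which ordinary
    classifiers (SVM, regression, decision forests) can then consume."""
    w, v = np.linalg.eigh(adj)                   # eigenvalues in ascending order
    return v[:, -dim:] * np.sqrt(np.abs(w[-dim:]))

# Toy graph: two triangles {0,1,2} and {3,4,5} joined by the edge 2-3.
adj = np.zeros((6, 6))
for a, b in [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]:
    adj[a, b] = adj[b, a] = 1.0
vecs = embed_nodes(adj, dim=2)
```

Nodes in the same triangle land close together in the embedding space: `vecs[0]` and `vecs[1]` are near-identical, while `vecs[0]` and `vecs[4]` are well separated.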
Drug discovery and development is a long and expensive process that has, over time, so notoriously bucked Moore's law that the opposite trend now has its own name: Eroom's Law (Moore's spelled backwards). It is estimated that the attrition rate of drug candidates is up to 96%, and the average cost to develop a new drug has reached almost $2.5 billion in recent years. One of the major causes of the high attrition rate is drug safety, which accounts for 30% of failures.
Even if a drug is approved in market, it could be withdrawn due to safety problems. Therefore, evaluating drug safety extensively as early as possible is paramount in accelerating drug discovery and development. This talk provides a high-level overview of the current process of rational drug design that has been in place for many decades and covers some of the major areas where the application of AI, Deep learning and ML based techniques have had the most gains.
Specifically, this talk covers a variety of drug-safety-related AI and ML techniques currently in use, which can generally be divided into 3 main categories:
1. Discovery,
2. Toxicity and Safety, and
3. Post-Market Monitoring.
We will address the recent progress in predictive models and techniques built for various toxicities. It will also cover some publicly available databases, tools and platforms available to easily leverage them.
We will also compare and contrast various modeling techniques including deep learning techniques and their accuracy using recent research. Finally, the talk will address some of the remaining challenges and limitations yet to be addressed in the area of drug discovery and safety assessment.
Solar Irradiation Prediction using back Propagation and Artificial Neural Net...ijtsrd
The document discusses using artificial neural networks to predict solar irradiation. It proposes a model using ANN with the Levenberg-Marquardt algorithm for backpropagation. The model aims to more accurately estimate available solar power by forecasting fluctuating solar irradiation levels. It achieves high accuracy of 97.74% and low error rate of 2.76% according to mean absolute percentage error and regression analysis. This performance improvement over contemporary techniques demonstrates ANN's effectiveness for nonlinear solar irradiation forecasting.
ANALYSIS OF MACHINE LEARNING ALGORITHMS WITH FEATURE SELECTION FOR INTRUSION ...IJNSA Journal
This document summarizes a research paper that analyzes machine learning algorithms for intrusion detection using the UNSW-NB15 dataset. It compares the performance of classifiers like KNN, SGD, Random Forest, Logistic Regression, and Naive Bayes, both with and without feature selection. Chi-Square feature selection is applied to reduce irrelevant features before training the classifiers. The classifiers' performance is evaluated based on metrics like accuracy, precision, recall, F1-score, true positive rate and false positive rate. The paper finds that feature selection can improve classifiers' performance for intrusion detection.
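Chi-Square feature scoring of the kind described can be sketched as follows; the feature and label vectors are toy values, not the UNSW-NB15 data:

```python
from collections import Counter

def chi_square(feature, labels):
    """Chi-square score of one categorical feature against the class labels:
    sum over contingency-table cells of (observed - expected)^2 / expected,
    where `expected` assumes feature and class are independent. Higher scores
    mean the feature's distribution differs across classes, so low-scoring
    (irrelevant) features can be dropped before training a classifier."""
    n = len(labels)
    f_counts, c_counts = Counter(feature), Counter(labels)
    joint = Counter(zip(feature, labels))
    score = 0.0
    for fv, fc in f_counts.items():
        for cv, cc in c_counts.items():
            expected = fc * cc / n
            score += (joint.get((fv, cv), 0) - expected) ** 2 / expected
    return score

labels = [0, 0, 0, 1, 1, 1]
feat_a = [0, 0, 0, 1, 1, 1]   # perfectly predictive feature
feat_b = [0, 1, 0, 1, 0, 1]   # uninformative feature
score_a, score_b = chi_square(feat_a, labels), chi_square(feat_b, labels)
```

Here the predictive feature scores 6.0 (its maximum for six samples) while the uninformative one scores 2/3, so ranking by this statistic keeps the former and discards the latter.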
Inference of Nonlinear Gene Regulatory Networks through Optimized Ensemble of...Arinze Akutekwe
Comprehensive understanding of gene regulatory networks (GRNs) is a major challenge in systems biology. Most methods for modeling and inferring the dynamics of GRNs, such as those based on state space models, vector autoregressive models, and the G1DBN algorithm, assume linear dependencies among genes. However, this strong assumption does not truly represent the time-course relationships across genes, which are inherently nonlinear. Nonlinear modeling methods such as S-systems and causal structure identification (CSI) have been proposed, but are known to be statistically inefficient and analytically intractable in high dimensions. To overcome these limitations, we propose an optimized ensemble approach based on support vector regression (SVR) and dynamic Bayesian networks (DBNs). The method, called SVR-DBN, uses nonlinear kernels of the SVR to infer the temporal relationships among genes within the DBN framework. The two-stage ensemble is further improved by SVR parameter optimization using particle swarm optimization. Results on eight in silico-generated datasets and two real-world datasets, from Drosophila melanogaster and Escherichia coli, show that our method outperformed the G1DBN algorithm by a total average accuracy of 12%. We further applied our method to model the time-course relationships of ovarian carcinoma. From our results, four hub genes were discovered. Stratified analysis further showed that the expression levels of the prostate differentiation factor and BTG family member 2 genes were significantly increased by the cisplatin and oxaliplatin platinum drugs, while the expression levels of the Polo-like kinase and Cyclin B1 genes were both decreased by the platinum drugs. These hub genes might be potential biomarkers for ovarian carcinoma.
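A toy illustration of the nonlinear-regression step at the heart of such an approach: kernel ridge regression with an RBF kernel is used below as a compact stand-in for SVR, and the data is synthetic, with gene 1's next value a made-up nonlinear function of gene 0's current level:

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    """Gaussian (RBF) kernel matrix between row-vector sets A and B."""
    sq = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return np.exp(-gamma * sq)

def fit_kernel_ridge(X, y, gamma=10.0, lam=1e-3):
    """Solve (K + lam*I) alpha = y; predict new points with k(Xq, X) @ alpha."""
    K = rbf_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return lambda Xq: rbf_kernel(Xq, X, gamma) @ alpha

# Synthetic time course: gene 1 at t+1 is a nonlinear function of gene 0 at t.
rng = np.random.default_rng(0)
g0_now = rng.uniform(0, 1, 50)          # gene 0 expression at time t
g1_next = np.sin(3 * g0_now)            # gene 1 expression at time t+1
model = fit_kernel_ridge(g0_now[:, None], g1_next)
pred = model(g0_now[:, None])
```

A linear model cannot capture this sinusoidal dependency, whereas the kernel regressor fits it closely; in SVR-DBN this kind of nonlinear fit is what scores candidate parent-child gene links inside the DBN search.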
IRJET- Plant Disease Detection and Classification using Image Processing a...IRJET Journal
This document describes a method for detecting and classifying plant diseases using image processing and artificial neural networks. The method involves preprocessing images through grayscaling, resizing and filtering. K-means clustering is used to segment infected leaf regions. Features are extracted from segmented images and fed into feedforward and cascaded feedforward neural networks for disease classification. The method achieved accurate classification of several common plant diseases with fewer iterations and better performance than traditional feedforward backpropagation neural networks. This automatic disease detection approach could help improve agricultural productivity by facilitating early detection on large farms.
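The K-means segmentation step can be sketched on made-up pixel data; a real run would operate on the leaf image's pixels, often in L*a*b* colour space rather than RGB:

```python
def kmeans(pixels, k=2, iters=20):
    """Plain k-means on (R, G, B) triples: assign each pixel to its nearest
    centre, then move each centre to its cluster's mean colour. Centres are
    seeded on evenly spaced pixels here for determinism; real implementations
    use random restarts or k-means++ seeding."""
    centers = [pixels[round(i * (len(pixels) - 1) / (k - 1))] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in pixels:
            nearest = min(range(k),
                          key=lambda c: sum((p[d] - centers[c][d]) ** 2 for d in range(3)))
            clusters[nearest].append(p)
        centers = [tuple(sum(p[d] for p in cl) / len(cl) for d in range(3)) if cl
                   else centers[i] for i, cl in enumerate(clusters)]
    return centers

# Made-up "leaf": mostly green pixels plus a smaller brown (diseased) patch.
green = [(30, 180 + i % 10, 40) for i in range(40)]
brown = [(120, 70 + i % 10, 30) for i in range(10)]
centers = kmeans(green + brown, k=2)
```

The two centres converge to the mean healthy and mean diseased colours; pixels assigned to the off-colour centre form the infected region passed on to feature extraction.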
The National Resource for Network Biology (NRNB) held its External Advisory Council meeting on December 12, 2012. The NRNB is focused on developing network biology tools and collaborating with investigators. It oversees various technology research and development projects, software releases including Cytoscape 3.0, collaboration projects, and outreach/training events. The meeting agenda covered progress updates and sought advice on future plans.
Sample Work For Engineering Literature Review and Gap IdentificationPhD Assistance
Sample Work For Engineering Literature Review and Gap Identification - PhD Assistance - http://bit.ly/2E9fAVq
2.1 INTRODUCTION
2.2 RESEARCH GAPS IN EXISTING METHODS
2.3 OBJECTIVES OF THIS WORK
Read More : http://bit.ly/2Rl7XT5
SURVEY ON MODELLING METHODS APPLICABLE TO GENE REGULATORY NETWORKijbbjournal
A gene regulatory network (GRN) plays an important role in gaining insight into the cellular life cycle. It provides information about the environmental conditions under which genes of particular interest are over-expressed or under-expressed. Modelling a GRN amounts to finding the interactive relationships between genes, where an interaction can be positive or negative. For inference of GRNs, time series data provided by microarray technology is used. Key factors to consider while constructing a GRN are scalability, robustness, reliability, and maximum detection of true positive interactions between genes. This paper gives a detailed technical review of existing methods applied to building GRNs, along with the scope for future work.
This document provides an annual progress report for the National Resource for Network Biology (NRNB) for the period of May 1, 2011 to April 30, 2012. It summarizes the following:
1) Advances made in developing algorithms to identify network modules and use modules as biomarkers for disease. This includes methods to capture complex logical relationships within modules.
2) Progress on tools to enable new network analysis and visualization capabilities, including a new version of Cytoscape.
3) Growth of collaborations through the NRNB, which have nearly doubled over the past year to around 100 projects.
4) Continued development of the Cytoscape App Store to support the user and developer community.
Statistical analysis to identify the main parameters to effecting wwqi of sew...eSAT Journals
Abstract The present study was conducted to determine the wastewater quality index and to study statistical interrelationships amongst different parameters. An equation was developed to predict BOD and WWQI. A number of physicochemical water quality parameters were estimated quantitatively in wastewater samples following methods and procedures as per governing authority guidelines. The Wastewater Quality Index (WWQI) is regarded as one of the most effective ways to communicate wastewater quality collectively across the individual quality parameters. The WWQI of the wastewater samples was calculated with a fuzzy MCDM methodology. The wastewater quality index for treated wastewater was evaluated considering eight parameters prescribed by the Gujarat Pollution Control Board (GPCB), the governing authority for environmental monitoring in Gujarat State, India. Considerable uncertainties are involved in the process of defining treated wastewater quality for specific usage, like irrigation, reuse, etc.
The paper presents modeling of the cognitive uncertainty in the field data; dealing with such systems calls for recourse to fuzzy logic. A statistical study is also done to identify the main variables affecting the WWQI. Statistical regression analysis has been found to be a highly useful tool for correlating different parameters. Correlation analysis of the data suggests that TDS, SS, BOD, COD, O&G and Cl are significantly correlated with the WWQI and DO of the wastewater. The BOD estimated from the independent variable DO for the maximum, minimum and average cases is 25.35 mg/L, 2.65 mg/L and 13.56 mg/L respectively, while the WWQI estimated from the independent variable DO for the maximum, minimum and average cases is 0.6212, 0.3074 and 0.4581 respectively. Out of the eight parameters, the pairs TDS-BOD, TDS-COD, TDS-Cl, SS-BOD, SS-COD and BOD-COD are significantly correlated. The present study shows that the WWQI is influenced by BOD, COD, SS and TDS.
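The regression step described above can be illustrated with a minimal least-squares fit. The (DO, BOD) pairs below are invented for illustration; the study's actual data and coefficients are not reproduced here:

```python
import numpy as np

# hypothetical (DO, BOD) pairs, roughly mimicking the inverse relationship
do = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
bod = np.array([24.0, 20.5, 16.8, 13.2, 9.9, 6.1])

# ordinary least squares fit of the linear model BOD = a*DO + b
a, b = np.polyfit(do, bod, 1)
pred = a * do + b
r = np.corrcoef(pred, bod)[0, 1]  # goodness of the linear fit
```

A negative slope `a` reflects the expected pattern that BOD falls as dissolved oxygen rises; the same recipe extends to predicting the WWQI from any significantly correlated parameter.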
Software as a service for efficient cloud computingeSAT Journals
Abstract This research paper explores the importance of Software as a Service (SaaS) for efficient cloud computing in organizations, and its implications. Enterprises nowadays are betting big on SaaS and integrating this service delivery model of the cloud computing architecture into their IT services. SaaS applications are a service-centric cloud computing delivery model, used as IT infrastructure with a multi-tenant architecture, that provides a rich user experience with the set of features requested by the cloud user. This research paper also discusses SaaS application architecture, functionality, efficiency, advantages and disadvantages. Keywords: Cloud Computing, Service Delivery Models, Software as a Service, SaaS Architecture.
An experimental study on mud concrete using soil as a fine aggrgate and ld sl...eSAT Journals
Abstract Aggregates are important ingredients of concrete. Sand is the most abundantly used natural resource after air and water, and the extensive use of these natural resources exploits the environment every day. Many alternative materials are being used as fine aggregates, viz. slag sand, manufactured sand, quarry dust, etc.; materials such as steel slag and blast furnace slag are being used as replacements for coarse aggregates. This paper reports the results of different mixes obtained by partial replacement of natural coarse aggregates (NCA) and complete replacement of fine aggregates (FA) by alternative materials, namely LD slag and natural soil respectively. The wet compressive strength ranged from 16 MPa to 20 MPa for cubes made of natural sand and natural coarse aggregates (MIX-D), and from 18-26 MPa for MIX-A; the value obtained for MIX-A was found to be 20% more than for MIX-D. The split tensile strength ranged from 1.16-1.51 MPa for MIX-A, and it was concluded that the mud concrete mix prepared with soil and LD slag gave the satisfactory result intended from the normal conventional concrete mix MIX-D. The flexural strength ranged from 3.04-3.41 MPa for MIX-A and 2.84-3.45 MPa for M4, again matching what the normal conventional concrete mix was intended to achieve. The mud concrete with soil and LD slag cuts the cost of the mix by up to 43% compared with normal conventional concrete of equivalent grade. Keywords: MUD Concrete, LD Slag, NCA, Alternative Materials, Wet Compressive Strength.
Reliability assessment for a four cylinder diesel engine by employing algebra...eSAT Journals
Abstract
In this paper, the authors have evaluated the reliability of a four cylinder diesel engine by employing the algebra of logics, which is easier in comparison with older techniques. Here, a multi-component fuel system in a diesel engine, comprised of four subsystems in series, has been considered. In this model, the authors have included a parallel redundant fuel injection device to improve the system's performance. The whole system can fail due to the failure of at least one component on any of the routes of flow.
The Boolean function technique has been used to formulate and solve the mathematical model. The reliability and M.T.T.F. of the considered diesel engine have been obtained to connect the model with physical situations. A numerical example and its graphical representation have been appended at the end to highlight important results.
Keywords: Boolean function, Algebra of logics, Parallel redundancy, Four cylinder diesel engine, Weibull time distribution, Exponential time distribution, Reliability, MTTF.
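The series/parallel structure described above lends itself to a short worked example. The sketch below assumes exponential failure rates (invented values, not the paper's, which also consider Weibull times) for a generic four-subsystem series system in which the third subsystem is duplicated in parallel:

```python
import math

# assumed exponential failure rates (per hour) for the four subsystems;
# subsystem 3 is duplicated in parallel (the redundant injection device)
rates = [1e-4, 2e-4, 5e-4, 1e-4]

def reliability(t):
    """R(t) for a series system whose third subsystem is a parallel pair."""
    r = [math.exp(-lam * t) for lam in rates]
    r[2] = 1 - (1 - r[2]) ** 2   # parallel pair: fails only if both units fail
    out = 1.0
    for ri in r:
        out *= ri                # series: every subsystem must survive
    return out

# MTTF as the numerical integral of R(t) over [0, inf), truncated
mttf = sum(reliability(t) * 10.0 for t in range(0, 200000, 10))
```

The parallel pair fails only when both units fail, so redundancy raises R(t) at every t; this is the quantitative payoff of the redundant fuel injection device in the model.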
This document summarizes a research paper that proposes using a genetic algorithm to efficiently cluster wireless sensor nodes. The genetic algorithm aims to minimize the total communication distance between sensors and the base station in order to prolong the network lifetime. Simulation results showed that the genetic algorithm can quickly find good clustering solutions that reduce energy consumption compared to previous clustering methods. The full paper provides details on wireless sensor networks, related clustering algorithms, genetic algorithms, and the proposed genetic algorithm-based clustering method.
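The clustering objective in the summary can be sketched as a tiny genetic algorithm over bitmasks, where a set bit marks a node as a cluster head. Node positions, the cost function details and all GA parameters below are invented for illustration, not the paper's setup:

```python
import math
import random

random.seed(1)
# toy sensor field: (x, y) node positions and a base station
NODES = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(30)]
BASE = (50.0, 150.0)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def cost(mask):
    """Total communication distance: each node -> nearest head -> base."""
    heads = [NODES[i] for i, bit in enumerate(mask) if bit]
    if not heads:
        return float("inf")
    member = sum(min(dist(n, h) for h in heads) for n in NODES)
    uplink = sum(dist(h, BASE) for h in heads)
    return member + uplink

def evolve(pop_size=40, gens=60, pm=0.05):
    pop = [[random.randint(0, 1) for _ in NODES] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        elite = pop[: pop_size // 2]          # elitism: keep the best half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, len(NODES))
            child = a[:cut] + b[cut:]          # one-point crossover
            child = [bit ^ (random.random() < pm) for bit in child]  # mutation
            children.append(child)
        pop = elite + children
    return min(pop, key=cost)

best = evolve()
```

Minimising this cost trades off fewer long uplinks to the base station against shorter hops within clusters, which is the energy argument the paper makes for prolonging network lifetime.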
Delineation of techniques to implement on the enhanced proposed model using d...ijdms
In the post-genomic era, with the advent of new technologies, a huge amount of complex molecular data is generated with high throughput. The management of this biological data is a challenging task due to the complexity and heterogeneity of the data involved in discovering new knowledge. Issues like managing noisy and incomplete data need to be dealt with. Data mining has been applied in the biological domain with notable success, yet discovering new knowledge from biological data remains a major challenge for data mining techniques. The novelty of the proposed model is its combined use of intelligent techniques to classify protein sequences faster and more efficiently. Using FFT, a fuzzy classifier, a string weighted algorithm, the gram encoding method, a neural network model and a rough set classifier in a single model, each in an appropriate place, can enhance the quality of the classification system. Thus the primary challenge is to identify and classify large protein sequences in a fast and easy yet intelligent way that decreases the time complexity and space complexity.
Software Defect Prediction Using Radial Basis and Probabilistic Neural NetworksEditor IJCATR
This document discusses using neural networks for software defect prediction. It examines the effectiveness of using a radial basis function neural network and a probabilistic neural network on prediction accuracy and defect prediction compared to other techniques. The key findings are that neural networks provide an acceptable level of accuracy for defect prediction but perform poorly at actual defect prediction. Probabilistic neural networks performed consistently better than other techniques across different datasets in terms of prediction accuracy and defect prediction ability. The document recommends using an ensemble of different software defect prediction models rather than relying on a single technique.
AN EFFICIENT PSO BASED ENSEMBLE CLASSIFICATION MODEL ON HIGH DIMENSIONAL DATA...ijsc
As the size of biomedical databases grows day by day, finding the essential features for disease prediction has become more complex due to high dimensionality and sparsity problems. Also, given the availability of a large number of micro-array datasets in biomedical repositories, it is difficult to analyze, predict and interpret the feature information using traditional feature-selection-based classification models. Most traditional feature-selection-based classification algorithms have computational issues such as dimension reduction, uncertainty and class imbalance on microarray datasets. The ensemble classifier is one of the scalable models for the extreme learning machine due to its high efficiency and fast processing speed for real-time applications. The main objective of feature-selection-based ensemble learning models is to classify high dimensional data with high computational efficiency and a high true positive rate. In the proposed model, an optimized Particle Swarm Optimization (PSO) based ensemble classification model was developed on high dimensional microarray datasets. Experimental results proved that the proposed model has high computational efficiency compared to traditional feature-selection-based classification models as far as accuracy, true positive rate and error rate are concerned.
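A binary PSO for feature selection, of the general kind the model builds on, can be sketched as follows. The synthetic data, the simple nearest-centroid fitness and every parameter value are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(7)
# synthetic high-dimensional data: only the first 3 of 20 features matter
n, d = 120, 20
X = rng.standard_normal((n, d))
y = (X[:, 0] + X[:, 1] - X[:, 2] > 0).astype(int)

def fitness(mask):
    """Accuracy of a nearest-centroid classifier on the selected features,
    minus a small penalty per feature to favour compact subsets."""
    m = mask.astype(bool)
    if not m.any():
        return 0.0
    Xs = X[:, m]
    c0, c1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)
    pred = np.linalg.norm(Xs - c1, axis=1) < np.linalg.norm(Xs - c0, axis=1)
    return float((pred == y).mean()) - 0.01 * m.sum()

# binary PSO: real-valued velocities squashed by a sigmoid into bit probabilities
n_particles = 15
swarm = (rng.random((n_particles, d)) < 0.5).astype(int)
vel = np.zeros((n_particles, d))
pbest = swarm.copy()
pbest_fit = np.array([fitness(p) for p in swarm])
for _ in range(40):
    gbest = pbest[pbest_fit.argmax()]
    r1, r2 = rng.random((n_particles, d)), rng.random((n_particles, d))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - swarm) + 1.5 * r2 * (gbest - swarm)
    swarm = (rng.random((n_particles, d)) < 1 / (1 + np.exp(-vel))).astype(int)
    fit = np.array([fitness(p) for p in swarm])
    better = fit > pbest_fit
    pbest[better] = swarm[better]
    pbest_fit[better] = fit[better]

best = pbest[pbest_fit.argmax()]
```

In the ensemble setting, each base learner would be trained on the subset a particle selects, and the swarm's fitness would come from the ensemble's validation accuracy rather than a single centroid classifier.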
Classification of medical datasets using back propagation neural network powe...IJECEIAES
Classification is one of the most indispensable domains in data mining and machine learning. The classification process has a good reputation in the area of disease diagnosis by computer systems, where progress in smart computer technologies can be invested in diagnosing various diseases based on data from real patients documented in databases. The paper introduces a methodology for diagnosing a set of diseases including two types of cancer (breast and lung), two datasets for diabetes, and heart attack. A Back Propagation Neural Network plays the role of classifier. The performance of the neural net is enhanced by using a genetic algorithm, which provides the classifier with the optimal features to raise the classification rate as high as possible. The system showed high efficiency in dealing with databases that differ from each other in size, number of features and nature of the data, and this is what the results illustrated, where the classification rate reached 100% on most datasets.
The Evaluated Measurement of a Combined Genetic Algorithm and Artificial Immu...IJECEIAES
This paper demonstrates a hybrid between two optimization methods, the Artificial Immune System (AIS) and the Genetic Algorithm (GA). The novel algorithm, called the immune genetic algorithm (IGA), improves on the results that GA and AIS achieve working separately, which is the main objective of this hybrid. Negative selection, one of the techniques in the AIS, was employed to determine the input variables (populations) of the system. In order to illustrate the effectiveness of the IGA, comparisons with a steady-state GA, AIS, and PSO were also investigated. Performance testing was conducted on mathematical test problems divided into single and multiple objectives. Five single-objective functions were used to test the modified algorithm, and the results showed that IGA performed better than all of the other methods. The DTLZ multi-objective test functions were then used, and the results also illustrated that the modified approach still had the best performance.
A Study on Genetic-Fuzzy Based Automatic Intrusion Detection on Network DatasetsDrjabez
1. The document proposes a genetic-fuzzy based method for automatic intrusion detection using network datasets. It combines fuzzy set theory with genetic algorithms to extract rules for both discrete and continuous attributes to detect normal and intrusion patterns.
2. The method was tested on KDD99 Cup and DARPA98 network intrusion detection datasets and showed high detection rates with low false alarm rates for both misuse detection and anomaly detection.
3. By extracting many rules to represent normal network behavior patterns, the proposed genetic-fuzzy approach can detect new or unknown intrusions based on anomalies without requiring prior domain expertise on intrusion patterns.
- The document discusses various approaches for applying machine learning and artificial intelligence to drug discovery.
- It describes how molecules and proteins can be represented as graphs, fingerprints, or sequences to be used as input for models.
- Different tasks in drug discovery like target binding prediction, generative design of new molecules, and drug repurposing are framed as questions that AI models can aim to answer.
- Techniques discussed include graph neural networks, reinforcement learning, and conditional generation using techniques like translation models.
- Several recent works applying these approaches for tasks like predicting drug-target interactions and generating synthesizable molecules are referenced.
Network embedding in biomedical data scienceArindam Ghosh
Excerpts from the paper:
What is it?
Network embedding aims at converting the network into a low-dimensional space while structural information of the network is preserved.
In this way, nodes and/or edges of the network can be represented as compacted yet informative vectors in the embedding space.
Advantages:
Typical non-network-based machine learning methods such as linear regression, Support Vector Machine (SVM) and decision forest, which have been demonstrated to be effective and efficient as the state-of-the-art techniques, can be applied to such vectors.
Current status:
Efforts to apply network embedding to improve biomedical data analysis are already planned or underway.
Difficulties:
The biomedical networks are sparse, noisy, incomplete, heterogeneous and usually consist of biomedical text and other domain knowledge. It makes embedding tasks more complicated than other application fields.
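The core idea above — mapping nodes to compact vectors that preserve network structure — can be illustrated with a minimal spectral embedding via truncated SVD of the adjacency matrix. This is a toy example, not one of the methods surveyed in the paper:

```python
import numpy as np

# toy undirected graph: two 4-node cliques joined by a single bridge edge
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3),
         (4, 5), (4, 6), (4, 7), (5, 6), (5, 7), (6, 7), (3, 4)]
n = 8
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

# embed each node as a 2-D vector: rows of U scaled by the singular values
U, s, _ = np.linalg.svd(A)
Z = U[:, :2] * s[:2]

# nodes in the same community land closer together than nodes across it
same = np.linalg.norm(Z[0] - Z[1])
cross = np.linalg.norm(Z[0] - Z[7])
```

Once nodes are vectors like the rows of `Z`, standard non-network methods (SVM, regression, decision forests) can be applied directly, which is exactly the advantage the excerpt describes.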
Drug discovery and development is a long and expensive process; over time it has so notoriously bucked Moore's law that it now has its own law, Eroom's Law ("Moore" spelled backwards), named after it. It is estimated that the attrition rate of drug candidates is up to 96%, and the average cost to develop a new drug has reached almost $2.5 billion in recent years. One of the major causes of the high attrition rate is drug safety, which accounts for 30% of the failures.
Even if a drug is approved in market, it could be withdrawn due to safety problems. Therefore, evaluating drug safety extensively as early as possible is paramount in accelerating drug discovery and development. This talk provides a high-level overview of the current process of rational drug design that has been in place for many decades and covers some of the major areas where the application of AI, Deep learning and ML based techniques have had the most gains.
Specifically, this talk covers a variety of drug-safety-related AI and ML based techniques currently in use, which can generally be divided into 3 main categories:
1. Discovery,
2. Toxicity and Safety, and
3. Post-Market Monitoring.
We will address the recent progress in predictive models and techniques built for various toxicities. It will also cover some publicly available databases, tools and platforms available to easily leverage them.
We will also compare and contrast various modeling techniques including deep learning techniques and their accuracy using recent research. Finally, the talk will address some of the remaining challenges and limitations yet to be addressed in the area of drug discovery and safety assessment.
Solar Irradiation Prediction using back Propagation and Artificial Neural Net...ijtsrd
The document discusses using artificial neural networks to predict solar irradiation. It proposes an ANN model trained with the Levenberg-Marquardt algorithm for backpropagation. The model aims to estimate available solar power more accurately by forecasting fluctuating solar irradiation levels. It achieves a high accuracy of 97.74% and a low error rate of 2.76% according to mean absolute percentage error and regression analysis. This performance improvement over contemporary techniques demonstrates the ANN's effectiveness for nonlinear solar irradiation forecasting.
ANALYSIS OF MACHINE LEARNING ALGORITHMS WITH FEATURE SELECTION FOR INTRUSION ...IJNSA Journal
This document summarizes a research paper that analyzes machine learning algorithms for intrusion detection using the UNSW-NB15 dataset. It compares the performance of classifiers like KNN, SGD, Random Forest, Logistic Regression, and Naive Bayes, both with and without feature selection. Chi-Square feature selection is applied to reduce irrelevant features before training the classifiers. The classifiers' performance is evaluated based on metrics like accuracy, precision, recall, F1-score, true positive rate and false positive rate. The paper finds that feature selection can improve classifiers' performance for intrusion detection.
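The Chi-Square feature selection step can be sketched directly: score each feature by its chi-square statistic against the class label and keep the top scorers. The binary data below is synthetic; the paper works on the UNSW-NB15 dataset with standard library implementations:

```python
import numpy as np

def chi2_score(feature, labels):
    """Chi-square statistic of a 2x2 contingency table for a binary
    feature against binary class labels; higher means more relevant."""
    obs = np.zeros((2, 2))
    for f, c in zip(feature, labels):
        obs[int(f), int(c)] += 1
    row = obs.sum(axis=1, keepdims=True)
    col = obs.sum(axis=0, keepdims=True)
    exp = row @ col / obs.sum()          # expected counts under independence
    return float(((obs - exp) ** 2 / exp).sum())

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)
relevant = y ^ (rng.random(200) < 0.1)   # tracks the label ~90% of the time
noise = rng.integers(0, 2, 200)          # independent of the label

scores = {"relevant": chi2_score(relevant, y),
          "noise": chi2_score(noise, y)}
```

Ranking features by this score and dropping the low scorers is what reduces the irrelevant features before the classifiers (KNN, SGD, Random Forest, etc.) are trained.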
Inference of Nonlinear Gene Regulatory Networks through Optimized Ensemble of...Arinze Akutekwe
Comprehensive understanding of gene regulatory networks (GRNs) is a major challenge in systems biology. Most methods for modeling and inferring the dynamics of GRNs, such as those based on state space models, vector autoregressive models and the G1DBN algorithm, assume linear dependencies among genes. However, this strong assumption does not make for a true representation of the time-course relationships across genes, which are inherently nonlinear. Nonlinear modeling methods such as S-systems and causal structure identification (CSI) have been proposed, but are known to be statistically inefficient and analytically intractable in high dimensions. To overcome these limitations, we propose an optimized ensemble approach based on support vector regression (SVR) and dynamic Bayesian networks (DBNs). The method, called SVR-DBN, uses the nonlinear kernels of the SVR to infer the temporal relationships among genes within the DBN framework. The two-stage ensemble is further improved by SVR parameter optimization using Particle Swarm Optimization. Results on eight in silico-generated datasets and two real-world datasets of Drosophila melanogaster and Escherichia coli show that our method outperformed the G1DBN algorithm by a total average accuracy of 12%. We further applied our method to model the time-course relationships of ovarian carcinoma. From our results, four hub genes were discovered. Stratified analysis further showed that the expression levels of the Prostate differentiation factor and BTG family member 2 genes were significantly increased by the cisplatin and oxaliplatin platinum drugs, while the expression levels of the Polo-like kinase and Cyclin B1 genes were both decreased by the platinum drugs. These hub genes might be potential biomarkers for ovarian carcinoma.
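The nonlinear-kernel regression at the heart of the method can be illustrated with kernel ridge regression as a stand-in for SVR (both fit an RBF kernel to lagged observations). The time course is synthetic, and the gene roles, kernel width and lag of one step are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(3)
# synthetic time course: gene g2 responds nonlinearly to gene g1 one step back
t = np.arange(60)
g1 = np.sin(0.3 * t) + 0.05 * rng.standard_normal(60)
g2 = np.tanh(2.0 * np.roll(g1, 1)) + 0.05 * rng.standard_normal(60)

# predict g2(t+1) from g1(t): lagged pairs, split into train and test
X_train, y_train = g1[:-1][:40], g2[1:][:40]
X_test, y_test = g1[:-1][40:], g2[1:][40:]

def rbf(a, b, gamma=5.0):
    """RBF kernel matrix between two 1-D sample vectors."""
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

# kernel ridge regression: alpha = (K + lam*I)^-1 y, pred = K_test @ alpha
lam = 1e-3
K = rbf(X_train, X_train)
alpha = np.linalg.solve(K + lam * np.eye(len(X_train)), y_train)
pred = rbf(X_test, X_train) @ alpha
rmse = float(np.sqrt(np.mean((pred - y_test) ** 2)))
```

A low test RMSE on held-out time points indicates a temporal dependency from g1 to g2; in SVR-DBN such kernel fits score the candidate parent-child edges inside the DBN structure search.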
Survey on securing outsourced storages in cloudeSAT Journals
Abstract Cloud computing is one of the buzzwords of technological development in the IT industry and service sectors. By widening the capability to serve users over the internet while overcoming the limitations of storing information and providing facilities locally, computing interest is shifting towards cloud services. Although cloud services contribute major advantages for servicing, they also raise major security issues. These issues, and the approaches that can be taken to minimise or even eliminate their effects, are discussed in this paper to progress toward more secure storage services on the cloud. Keywords: Cloud computing, Cloud Security, Outsourced Storages, Storage as a Service
Security threats and detection technique in cognitive radio network with sens...eSAT Journals
This document discusses security threats and detection techniques in cognitive radio networks. It begins by providing background on cognitive radio networks and how secondary users can utilize unused spectrum bands of primary users. However, this can introduce security issues if malicious users emulate primary users. The document then discusses two main threats: primary user emulation attacks, where fake secondary users pretend to be primary users and disrupt communications, and jamming attacks. It proposes using energy spectrum sensing techniques for secondary users to detect unused spectrum bands while avoiding interfering with primary users. The document concludes that cognitive radio networks introduce practical wireless communication challenges, and future work is needed to improve security against identified threats.
Performance evaluation of tcp sack1 in wimax network asymmetryeSAT Journals
Abstract WiMAX technology supports different channel bandwidths, cyclic prefixes, modulation coding schemes, frame durations, simultaneous two-way data transfer and propagation models. WiMAX network asymmetry largely depends on the DL:UL ratio. This paper evaluates the performance of TCP Sack1 by considering channel bandwidth, cyclic prefix, modulation coding scheme, frame duration, two-way transfer and propagation model in a WiMAX network with network asymmetry. The performance of TCP Sack1 is evaluated by varying MAC layer parameters such as channel bandwidth, cyclic prefix, modulation coding scheme, frame duration and DL:UL ratio, physical layer parameters such as propagation model and full duplex mode of data transfer, and other operating parameters such as downloading traffic; these parameters strongly affect the performance of TCP Sack1 in a WiMAX network. The performance of the WiMAX network is measured in terms of throughput, goodput and number of packets dropped. Keywords: World Wide interoperability for microwave access (WiMAX), Subscriber Stations (SSs), Downlink (DL), Uplink (UL), Medium access control (MAC), Transmission Control Protocol (TCP), OFDM, IEEE 802.16, Throughput, Goodput and Packets drop
Assessment of composting, energy and gas generation potential for msw at alla...eSAT Journals
The document analyzes the potential for composting, energy generation, and gas generation from municipal solid waste (MSW) in Allahabad City, India. Key findings include:
- The C/N ratio of MSW was found to be less than 30:1, indicating the waste is not suitable for composting.
- The energy content of MSW was estimated to be between 2495-2972 kcal/kg, below the minimum recommended value for incineration.
- Modeling showed a bioreactor landfill with leachate recirculation would generate more methane gas than a controlled sanitary landfill, making it the best disposal method for Allahabad's MSW.
Design and manufacturing of drive center mandreleSAT Journals
Abstract In the manufacturing of Sleeve Yoke 1650, many problems were faced during assembly and disassembly of the sleeve yoke and mandrel. The assembly, being too heavy, requires two operators. There is also a potential danger of the assembly falling down, leading to damage to life and property. Moreover, the idle time per unit of production is more than expected. Therefore the aim of our project is to design and manufacture a new mandrel which will solve all these problems and increase productivity. The splined live centre technique is used to design the new drive centre along with the mandrel block. Keywords—Sleeve yoke and mandrel assembly, heavy, more idle time, splined live centre technique, mandrel design, mandrel manufacturing.
Assessment of electromagnetic radiations from communication transmission towe...eSAT Journals
Abstract The effects of exposure to electromagnetic radiation from wireless cellular transmission towers on human health have attracted the attention of many researchers. Different works have revealed the harmful effects of electromagnetic radiation exposure on human health, depending on distance from the source and period of exposure. The closer one stays to a transmission site, and the longer the exposure, the higher the possibility of being affected by the radiation source. In this work, we review some of the works on assessment of electromagnetic radiation exposure and propose measures for determining safety zones, based on the cases of cellular transmission towers in the Tanzanian environment, to avoid extended exposure to electromagnetic radiation. Key words- Cellular transmission towers; Electromagnetic radiations; Health effects; Exposure limits
Numerical parametric study on interval shift variation in simo sstd technique...eSAT Journals
This document presents a parametric study on the time shift interval variation in the SIMO-SSTD technique for experimental modal analysis. The SSTD (Single Station Time Domain) technique extracts modal parameters from free decay responses without using Fourier transforms. The study investigates the accuracy of natural frequency and damping ratio results from the SSTD algorithm when using different time shift intervals between data matrices. Simulated data with known modal properties is used to calculate percentage errors for different shift intervals and noise levels. The goal is to determine the effect of time shift interval on the accuracy of the SSTD technique.
Abstract The usage of regular expressions to search text is well known and understood as a useful technique. Regular expressions are generic representations for a string or a collection of strings, and regular expressions (regexps) are one of the most useful tools in computer science. NLP, as an area of computer science, has greatly benefitted from regexps: they are used in phonology, morphology, text analysis, information extraction, and speech recognition. This paper gives the reader a general review of the usage of regular expressions, illustrated with examples from natural language processing. In addition, there is a discussion of different approaches to regular expressions in NLP. Keywords— Regular Expression, Natural Language Processing, Tokenization, Longest common subsequence alignment, POS tagging
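Tokenization, listed in the keywords above, is the most common regexp task in NLP. The pattern and function below are an illustrative sketch of such a tokenizer, not code taken from the paper:

```python
import re

# Illustrative pattern: words (with optional contractions like "aren't"),
# integers or decimals, and single punctuation marks.
TOKEN_PATTERN = re.compile(r"[A-Za-z]+(?:'[a-z]+)?|\d+(?:\.\d+)?|[^\w\s]")

def tokenize(text):
    """Split text into word, number and punctuation tokens."""
    return TOKEN_PATTERN.findall(text)

print(tokenize("Regexps aren't magic, but they're useful in NLP!"))
```

The alternation order matters: the word branch is tried first, so contractions are kept whole instead of being split at the apostrophe.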
A combined approach using triple des and blowfish research areaeSAT Journals
Abstract Payment card fraud is causing billions of dollars in losses for the card payment industry. Besides direct losses, the brand name can be affected by loss of consumer confidence due to the fraud. As a result of these growing losses, financial institutions and card issuers are continually seeking new techniques and innovation in payment card fraud detection and prevention. Credit card fraud falls broadly into two categories: behavioral fraud and application fraud. Credit card transactions continue to grow in number, taking an ever-larger share of the US payment system and leading to a higher rate of stolen account numbers and subsequent losses by banks. Improved fraud detection thus has become essential to maintain the viability of the US payment system. Increasingly, the card-not-present scenario, such as shopping on the internet, poses a greater threat as the merchant (the web site) is no longer protected by the advantages of physical verification such as signature check, photo identification, etc. In fact, it is almost impossible to perform any of the ‘physical world’ checks necessary to detect who is at the other end of the transaction. This makes the internet extremely attractive to fraud perpetrators. According to a recent survey, the rate at which internet fraud occurs is 20 to 25 times higher than ‘physical world’ fraud. However, recent technical developments are showing some promise to check fraud in the card-not-present scenario. This paper provides an overview of payment card fraud and begins with payment card statistics and the definition of payment card fraud. It also describes various methods used by identity thieves to obtain personal and financial information for the purpose of payment card fraud. In addition, the relationship between payment card fraud and its detection is described. Finally, some solutions for detecting payment card fraud are also given. Index Terms: Online Frauds, Fraudsters, card fraud, CNP, CVV, AVS
Comparative analysis of dynamic programming algorithms to find similarity in ...eSAT Journals
Abstract There exist many computational methods for finding similarity in gene sequences; finding a suitable method that gives optimal similarity is a difficult task. The objective of this project is to find an appropriate method to compute similarity in gene/protein sequences, both within families and across families. Many algorithms like Levenshtein edit distance, Longest Common Subsequence and Smith-Waterman have used the dynamic programming approach to find similarities between two sequences. But none of the methods mentioned above have used real benchmark data sets; they have only applied dynamic programming algorithms to synthetic data. We propose a new method to compute similarity. The performance of the proposed algorithm is evaluated using a number of data sets from various families, and the similarity value is calculated both within a family and across families. A comparative analysis and the time complexity of the proposed method reveal that the Smith-Waterman approach is the appropriate method when gene/protein sequences belong to the same family, and Longest Common Subsequence is best suited when the sequences belong to two different families. Keywords - Bioinformatics, Gene, Gene Sequencing, Edit distance, String Similarity.
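The Levenshtein edit distance mentioned above is the textbook instance of the dynamic programming approach these methods share. A minimal sketch (function name and test strings are illustrative, not from the paper):

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two sequences."""
    m, n = len(a), len(b)
    # dp[i][j] = edit distance between a[:i] and b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i          # delete all of a[:i]
    for j in range(n + 1):
        dp[0][j] = j          # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # match / substitution
    return dp[m][n]

print(levenshtein("GATTACA", "GCATGCU"))
```

Smith-Waterman replaces this global recurrence with a local one (scores clipped at zero, maximum taken over the whole table), which is why it suits within-family comparison where only a conserved region aligns.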
Design and analysis of bodyworks of a formula style racecareSAT Journals
Abstract: The aim was to develop the body of the racecar with proper studies and analyses, taking into account several factors, to present an optimum structure as the final result. These factors include, but are not limited to, weight, cost, drag resistance, functionality and aesthetics. The expected product should not just be appealing to the eye but also increase the performance of the vehicle. Additional objectives include accommodating the budget while maintaining a highly competitive level to perform well on the race track. The new design will reduce the weight of the prototype as well as the air drag, taking into consideration the ground effects desired to be implemented in the vehicle as a crucial factor. Moreover, the new body will be easier to dismantle, reducing the service time. Keywords: drag resistance, aesthetics, performance, weight, cost.
History of gasoline direct compression ignition (gdci) engine a revieweSAT Journals
Abstract The first single-cylinder gasoline direct compression ignition (GDCI) engine was designed and built in 2010 by Delphi Company for testing performance, emissions and brake specific fuel consumption (BSFC). After achieving good results in performance, emissions and BSFC from the single-cylinder engine, a multi-cylinder GDCI engine was built in 2013. The compression ignition engine has limitations such as high noise, weight, PM and NOx emissions compared to the gasoline engine. But the high efficiency, torque and better fuel economy of the compression ignition engine are the reasons for Delphi Company to use the compression ignition strategy in building a new combustion system. The present review covers the reasons for building the GDCI engine in detail. Keywords: Delphi Company, Emissions, Multi-Cylinder GDCI Engine and Single-Cylinder GDCI Engine.
The automatic license plate recognition(alpr)eSAT Journals
Abstract Every country uses its own way of designing and allocating number plates to its vehicles. The license number plate is then used by various government offices for their regular administrative tasks, like traffic police tracking people who violate traffic rules, identifying stolen cars, toll collection and parking allocation management. In India all motorized vehicles are assigned unique numbers. These numbers are assigned to the vehicles by the district-level Regional Transport Office (RTO). In India the license plates must be kept on both the front and back of the vehicle. These plates in general are easily readable by humans due to their high level of intelligence; on the contrary, it becomes an extremely difficult task for computers to do the same. Many attributes like illumination, blur, background color, foreground color etc. pose a problem. Index Terms: Automatic license plate recognition (ALPR) system, proposed methodology, reference
Earthquake response of reinforced concrete multi storey building with base iso...eSAT Journals
This document summarizes a study on the earthquake response of base isolated reinforced concrete buildings. It describes the basic concept of base isolation, which aims to protect structures from seismic forces by isolating the building from ground movement using devices called isolators. It then discusses different types of isolators and presents the results of analyzing a 3D 8-story building model using different isolators, finding that base isolation substantially reduces base shear, displacements, and member forces compared to a fixed-base building.
Discrete wavelet transform based analysis of transformer differential currenteSAT Journals
This document analyzes transformer differential currents using discrete wavelet transform (DWT) during various operating conditions. It simulates a 2 kVA transformer system in MATLAB and analyzes the DWT of differential currents during magnetizing inrush, internal faults, switching, and over-fluxing. Statistical features are extracted from the DWT coefficients, which could be used as inputs for a classifier to distinguish between fault and non-fault conditions for improved differential protection. The analysis found that no single statistical feature was sufficient and that using multiple features from different decomposition levels may help classification.
IRJET- Gene Mutation Data using Multiplicative Adaptive Algorithm and Gene On...IRJET Journal
This document presents a methodology for analyzing gene mutation data using ontologies and association rule mining. It aims to develop a common knowledge base for genomic and proteomic analysis by integrating multiple data sources. The methodology involves using k-nearest neighbors algorithm to find similar genes, an iterative multiplicative updating algorithm to solve optimization problems, and SNCoNMF to identify co-regulatory modules between genes, microRNAs and transcription factors. The results are represented using a Bayesian rose tree for efficient visualization of associations between genetic components and diseases.
Clustering Approaches for Evaluation and Analysis on Formal Gene Expression C...rahulmonikasharma
The enormous generation of biological data and the need to analyze that data led to the field of Bioinformatics. Data mining is the stream used to derive and analyze such data by exploring the hidden patterns of biological data. Though data mining can be used to analyze biological data such as genomic and proteomic data, here Gene Expression (GE) data is considered for evaluation. GE data is generated from microarrays such as DNA and oligo microarrays, and the generated data is analyzed through the clustering techniques of data mining. This study implements the basic clustering approach K-Means and various clustering approaches like Hierarchical, SOM, CLICK and a basic fuzzy based clustering approach. Eventually, the comparative study of these approaches leads to an effective approach for cluster analysis of GE data. The experimental results show that the proposed algorithm achieves higher clustering accuracy and takes less clustering time when compared with existing algorithms.
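As a reference point for the clustering approaches compared above, the basic K-Means loop can be sketched in a few lines. This is a generic teaching sketch, not the study's implementation; the deterministic first-k initialization and the toy profiles are simplifying assumptions:

```python
def kmeans(points, k, iters=10):
    """Plain k-means over tuples of floats (Lloyd's algorithm)."""
    # naive init for determinism: the first k points become the centroids
    centroids = [tuple(p) for p in points[:k]]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # assignment step: each point joins its nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        # update step: move each centroid to the mean of its members
        for j, members in enumerate(clusters):
            if members:
                centroids[j] = tuple(sum(xs) / len(members) for xs in zip(*members))
    return centroids, clusters

# two well-separated mock expression profiles
pts = [(0.1, 0.2), (5.0, 5.1), (0.0, 0.1), (0.2, 0.0), (5.2, 4.9), (4.8, 5.0)]
centroids, clusters = kmeans(pts, 2)
print([len(c) for c in clusters])
```

Hierarchical, SOM and fuzzy variants differ mainly in how this assignment/update pair is replaced (linkage merging, neighborhood updates, soft memberships).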
Model of Differential Equation for Genetic Algorithm with Neural Network (GAN...Sarvesh Kumar
The work is carried out on the application of differential equations (DE) and the computational techniques of genetic algorithm and neural network (GANN) in C#, which are frequently used across the globalised world. Diagrammatic and flow chart presentation is the major concern for easy understanding of these two concepts, and indicating their present and future applications is the new initiative taken in this paper, along with computational approaches in C#. Some observations have also been noted during the working, functioning and development of the above algorithms in C# under the given boundary value conditions of the DE for the genetic and neural components. Operations of the fitness function and genetic operations were completed for behavioural transmission of chromosomes.
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
CCC-Bicluster Analysis for Time Series Gene Expression DataIRJET Journal
The document presents a CCC-Biclustering (Contiguous Column Coherence) algorithm for identifying biclusters in time series gene expression data. The algorithm finds maximal biclusters with adjacent/contiguous columns in linear time using Ukkonen's suffix tree construction algorithm and discretized gene expression matrices. The algorithm was applied to a Saccharomyces cerevisiae gene expression time series in response to heat stress. It identifies coherent expression patterns shared among genes over contiguous time points, potentially revealing relevant regulatory modules.
Pattern recognition system based on support vector machinesAlexander Decker
This document describes a study that uses support vector machines (SVM) to develop quantitative structure-activity relationship (QSAR) models for predicting the anti-HIV activity of 1,3,4-oxadiazole substituted naphthyridine derivatives based on their molecular descriptors. The SVM model achieved a cross-validation R2 value of 0.90 and RMSE of 0.145, outperforming artificial neural network and multiple linear regression models. An external validation on an independent test set found the SVM model had an R value of 0.96 and RMSE of 0.166, demonstrating good predictive ability.
Some (but not all) abstracts of good research published in good journals, showing a variety of ideas for innovative research in which computer science serves real-life applications.
Gene Selection for Sample Classification in Microarray: Clustering Based MethodIOSR Journals
This document describes a clustering-based method for gene selection to classify samples in microarray data. It involves calculating the relevance of each gene to class labels and the redundancy between genes using mutual information. Genes are clustered based on their relevance, with the most relevant gene selected as the cluster representative. Min-hash clustering is then applied to reduce redundant genes and cluster size. The goal is to select a minimal set of non-redundant genes that can accurately classify samples by reducing noise from irrelevant genes.
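The relevance and redundancy scores described above rest on mutual information between discretized expression values and class labels. A minimal sketch of that computation (the toy gene/label vectors are invented for illustration):

```python
from collections import Counter
from math import log2

def mutual_information(x, y):
    """MI in bits between two discrete variables given as equal-length lists."""
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    # sum over observed joint outcomes: p(a,b) * log2(p(a,b) / (p(a) p(b)))
    return sum((c / n) * log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

# a gene whose discretized expression tracks the class label perfectly
gene = ['hi', 'hi', 'lo', 'lo']
labels = ['tumor', 'tumor', 'normal', 'normal']
print(mutual_information(gene, labels))   # maximally relevant
print(mutual_information(['hi', 'lo', 'hi', 'lo'], labels))  # irrelevant
```

The same function applied to two gene vectors instead of gene/label gives the redundancy score used when pruning clusters.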
APPLICATION OF CLONAL SELECTION IMMUNE SYSTEM METHOD FOR OPTIMIZATION OF DIST...UniversitasGadjahMada
This paper proposes an application of the clonal selection immune system method for optimization of a distribution network. A high-performance distribution network is one that has low power loss, a better voltage profile, and loading balance among feeders. The task of improving the performance of the distribution network is optimization of the network configuration, and this optimization has become a necessary study with the presence of DG throughout networks. In this work, optimization of the network configuration is based on an AIS algorithm. The methodology has been tested on a model of the 33-bus IEEE radial distribution network with and without DG integration. The results show that the optimal configuration of the distribution network is able to reduce power loss and improve the voltage profile of the distribution network significantly.
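For readers unfamiliar with the clonal selection principle used here (and in the main paper's GRN reconstruction), a generic CLONALG-style loop for real-valued minimization looks roughly as follows. All parameter choices and the sphere test function are illustrative assumptions, not details from either paper:

```python
import random

def clonalg_minimize(f, dim, pop=20, n_clones=5, gens=200, seed=1):
    """Minimal clonal-selection sketch: select, clone, hypermutate, replace."""
    rng = random.Random(seed)
    cells = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        cells.sort(key=f)                  # affinity = negative objective value
        pool = cells[: pop // 2]           # select the highest-affinity cells
        offspring = []
        for rank, cell in enumerate(pool):
            sigma = 0.05 * (rank + 1)      # hypermutation: better cells mutate less
            for _ in range(n_clones):
                offspring.append([x + rng.gauss(0, sigma) for x in cell])
        # a few random newcomers maintain diversity (receptor editing)
        fresh = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(2)]
        cells = sorted(pool + offspring + fresh, key=f)[:pop]
    return min(cells, key=f)

# sphere function: global minimum 0 at the origin
best = clonalg_minimize(lambda v: sum(x * x for x in v), dim=3)
print(sum(x * x for x in best))
```

The affinity-proportional cloning and inverse-affinity mutation step are what distinguish clonal selection from a plain GA, which relies on crossover between two parents instead.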
Performance analysis of neural network models for oxazolines and oxazoles der...ijistjournal
Neural networks have been applied successfully to a broad range of areas such as business, data mining, drug discovery and biology. In medicine, neural networks have been applied widely in medical diagnosis, detection and evaluation of new drugs, and treatment cost estimation. In addition, neural networks have begun to be used in data mining strategies for prediction and knowledge discovery. This paper presents the application of neural networks for the prediction and analysis of the antitubercular activity of Oxazolines and Oxazoles derivatives. The study presents techniques based on the development of single hidden layer feed-forward neural network (SHLFFNN), gradient descent back propagation neural network (GDBPNN), gradient descent back propagation with momentum neural network (GDBPMNN), back propagation with weight decay neural network (BPWDNN) and quantile regression neural network (QRNN) artificial neural network (ANN) models. Here, we comparatively evaluate the performance of the five neural network techniques. Evaluating the efficiency of each model by way of benchmark experiments is an accepted practice; cross-validation and resampling techniques are commonly used to derive point estimates of the performances, which are compared to identify methods with good properties. Predictive accuracy was evaluated using the root mean squared error (RMSE), coefficient of determination (R²), mean absolute error (MAE), mean percentage error (MPE) and relative square error (RSE). We found that all five neural network models were able to produce feasible models, and that the QRNN model outperforms the other four models on all statistical tests.
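The error measures used for model comparison above are straightforward to compute; a sketch with invented toy values (MPE and RSE follow the same pattern and are omitted):

```python
from math import sqrt

def rmse(y, yhat):
    """Root mean squared error."""
    return sqrt(sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y))

def mae(y, yhat):
    """Mean absolute error."""
    return sum(abs(a - b) for a, b in zip(y, yhat)) / len(y)

def r2(y, yhat):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_y = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))
    ss_tot = sum((a - mean_y) ** 2 for a in y)
    return 1 - ss_res / ss_tot

y_true = [1.0, 2.0, 3.0, 4.0]
y_pred = [1.1, 1.9, 3.2, 3.8]
print(rmse(y_true, y_pred), mae(y_true, y_pred), r2(y_true, y_pred))
```

RMSE penalizes large residuals more heavily than MAE, which is why papers usually report both alongside R².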
This document describes research on using Bayesian networks to model gene expression data related to breast cancer. The goals are to identify new or known gene interactions, examine network properties, and find significant genes. The methodology involves learning networks from 82 genes using different variable types and sample groups. Centrality metrics are used to identify important "hub" genes. Networks are analyzed to determine if they exhibit small-world or scale-free properties common in biological networks. The results could confirm known pathways or identify new ones relevant to breast cancer.
Peter Langfelder presented on weighted gene co-expression network analysis of HD data. Key points:
- WGCNA identified gene modules in mouse striatum associated with CAG repeat length. Neuronal modules were down with increasing repeats while oligodendrocyte modules were up.
- Human HD brain regions showed common and region-specific responses. A neuronal module was down across all regions while astrocyte and microglial modules were up.
- Consensus modules identified co-expressed genes consistently changed across multiple human HD datasets, providing robust modules for further investigation.
Applications of Artificial Neural Networks in Cancer PredictionIRJET Journal
This document discusses applications of artificial neural networks in cancer prediction and prognosis. It summarizes several studies that have used ANNs to predict breast cancer prognosis and recurrence, as well as classify types of lung cancer.
For breast cancer prognosis, a Maximum Entropy Estimation model was shown to outperform multi-layer perceptrons and probabilistic neural networks. For predicting breast cancer recurrence, an ANN achieved the best performance compared to other machine learning algorithms based on accuracy and AUC.
An ANN combined with a genetic algorithm was also able to successfully identify genes that classify lung cancer status. The ANN-GA model achieved over 97% accuracy in classifying different types of lung cancer based on gene expression data.
An Efficient PSO Based Ensemble Classification Model on High Dimensional Data...ijsc
The document proposes a Particle Swarm Optimization (PSO) based ensemble classification model to improve classification of high-dimensional biomedical datasets. It develops an optimized PSO technique to select optimal features and initialize weights for base classifiers in the ensemble model. Experimental results on microarray datasets show the proposed model achieves higher accuracy, true positive rate, and lower error rate compared to traditional feature selection based classification models.
Comparative study of artificial neural network based classification for liver...Alexander Decker
This document presents a comparative study of different artificial neural network (ANN) classification models for predicting liver disease in patients. It evaluates ANN models like backpropagation, radial basis function, self-organizing map, and support vector machine on liver patient data. The support vector machine model achieved the highest accuracy at 99.76% for men data and 97.7% for women data, indicating it may be effective as a predictive tool for liver patients.
PREDICTION OF MALIGNANCY IN SUSPECTED THYROID TUMOUR PATIENTS BY THREE DIFFER...cscpconf
This document compares three classification methods - artificial neural networks, decision trees, and logistic regression - for predicting malignancy in thyroid tumor patients using a clinical dataset. It describes each method and applies them to a dataset of 259 thyroid tumor patients. The artificial neural network achieved 98% accuracy on the training set and 92% on the validation set. The decision tree method used 150 cases to build a model and achieved 86% accuracy. Logistic regression analysis resulted in 88% accuracy. The artificial neural network was able to accurately predict malignancy and identified important attributes like multiple nodules and family cancer history.
Optimized Parameter of Wavelet Neural Network (WNN) using INGArahulmonikasharma
Genetic algorithm has been one of the most popular methods for many challenging optimization problems, of which evacuation time is an important one. With continuous air traffic growth and limited resources, there is a need to reduce the congestion of the airspace system, and one objective of this work is to automatically adapt airspace configurations according to the evolution of traffic. An improved niche genetic algorithm (INGA) was used in reliability optimization of a software system, and the searching performance of the genetic algorithm was improved by the stochastic tournament model, allocating the reliability of a multi-module complex software system effectively. Genetic algorithm (GA) and FGA are compared through seven benchmark functions. The approach, using the uniform schema crossover operator and non-uniform mutation in the genetic algorithm, can be applied to a wider range of problems including multi-level problems.
The document proposes a novel hybrid method called PCA-BEL for classifying gene expression microarray data. PCA-BEL uses principal component analysis (PCA) for feature extraction followed by classification using a Brain Emotional Learning (BEL) network. PCA reduces the dimensionality of the microarray data to overcome the high dimensionality problem. BEL is then used for classification due to its low computational complexity, making it suitable for high dimensional data. The method is tested on several cancer gene expression datasets and achieves average accuracies of 100%, 96%, 98.32%, 87.40% and 88% on the five datasets respectively, demonstrating its effectiveness for microarray classification tasks.
Single parent mating in genetic algorithm for real robotic system identificationIAESIJAI
System identification (SI) is a method of determining a mathematical model for a system given a set of input-output data. A representation is made using a mathematical model based on certain specified assumptions. In SI, model structure selection is a step where a model structure perceived as an adequate system representation is selected. A typical rule is that the final model must have a good balance between parsimony and accuracy. As a popular search method, genetic algorithm (GA) is used for selecting a model structure. However, the optimality of the final model depends much on the effectiveness of GA operators. This paper presents a mating technique named single parent mating (SPM) in GA for use in a real robotic SI. This technique is based on the chromosome structure of the parents such that a single parent is sufficient in achieving mating that eases the search for the optimal model. The results show that using three different objective functions (Akaike information criterion, Bayesian information criterion and parameter magnitude–based information criterion 2) respectively, GA with the mating technique is able to find more optimal models than without the mating technique. Validations show that the selected models using the mating technique are acceptable.
Similar to A clonal based algorithm for the reconstruction of genetic network using S-system (20)
Mechanical properties of hybrid fiber reinforced concrete for pavementseSAT Journals
Abstract
The effect of the addition of mono fibers and hybrid fibers on the mechanical properties of a concrete mixture is studied in the present investigation. Steel fibers at 1% and polypropylene fibers at 0.036% were added individually to the concrete mixture as mono fibers and then together to form a hybrid fiber reinforced concrete. Mechanical properties such as compressive, split tensile and flexural strength were determined. The results show that hybrid fibers improve the compressive strength marginally as compared to mono fibers, whereas hybridization improves the split tensile strength and flexural strength noticeably.
Keywords:- Hybridization, mono fibers, steel fiber, polypropylene fiber, improvement in mechanical properties.
Material management in construction – a case studyeSAT Journals
Abstract
The objective of the present study is to understand the problems occurring in the company because of improper application of material management. In construction project operations, there is often a project cost variance in terms of material, equipment, manpower, subcontractors, overhead cost, and general conditions. Material is the main component in construction projects; therefore, if material management is not properly managed, it will create a project cost variance. Project cost can be controlled by taking corrective actions towards the cost variance. Therefore a methodology to diagnose and evaluate the procurement process involved in material management and to launch continuous improvement was developed and applied. A thorough study was carried out, along with case studies, surveys and interviews with professionals involved in this area. As a result, a methodology for diagnosis and improvement was proposed and tested in selected projects. The results obtained show that the main problems of procurement are related to schedule delays and lack of specified quality for the project. To prevent this situation it is often necessary to dedicate important resources such as money, personnel and time to monitor and control the process. A great potential for improvement was detected if state-of-the-art technologies such as electronic mail, electronic data interchange (EDI), and analysis were applied to the procurement process. These helped to eliminate the root causes of many types of problems that were detected.
Managing drought short term strategies in semi arid regions a case studyeSAT Journals
Abstract
Drought management needs multidisciplinary action. Interdisciplinary efforts among experts in various fields of the drought-prone areas are helpful to achieve a tangible and permanent solution to this recurring problem. The Gulbarga district has a total area of around 16,240 sq. km and accounts for 8.45 per cent of the Karnataka state area. The district is situated at latitude 17º 19' 60" North and longitude 76º 49' 60" East, entirely on the Deccan plateau at a height of 300 to 750 m above MSL. With its sub-tropical, semi-arid climate, it is one among the drought-prone districts of Karnataka State, and drought management is very important for a district like Gulbarga. In this paper various short term strategies are discussed to mitigate the drought condition in the district.
Keywords: Drought, South-West monsoon, Semi-Arid, Rainfall, Strategies etc.
Life cycle cost analysis of overlay for an urban road in bangaloreeSAT Journals
Abstract
Pavements are subjected to severe condition of stresses and weathering effects from the day they are constructed and opened to traffic
mainly due to its fatigue behavior and environmental effects. Therefore, pavement rehabilitation is one of the most important
components of entire road systems. This paper highlights the design of concrete pavement with added mono fibers like polypropylene,
steel and hybrid fibres for a widened portion of existing concrete pavement and various overlay alternatives for an existing
bituminous pavement in an urban road in Bangalore. Along with this, Life cycle cost analyses at these sections are done by Net
Present Value (NPV) method to identify the most feasible option. The results show that though the initial cost of construction of the
concrete overlay is high, over a period of time it proves to be better than the bituminous overlay considering the whole life cycle cost.
The economic analysis also indicates that, out of the three fibre options, hybrid reinforced concrete would be economical without
compromising the performance of the pavement.
Keywords: - Fatigue, Life cycle cost analysis, Net Present Value method, Overlay, Rehabilitation
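The NPV comparison used in such a life cycle cost analysis can be sketched as follows. The discount rate, analysis period, and cost figures below are illustrative assumptions, not values from the study:

```python
def npv(initial_cost, recurring_costs, discount_rate):
    """Net Present Value of an initial cost plus a stream of (year, cost) items."""
    return initial_cost + sum(c / (1 + discount_rate) ** y for y, c in recurring_costs)

# Illustrative figures only: the concrete overlay costs more up front but needs
# less frequent maintenance than the bituminous overlay.
rate = 0.08                                   # assumed discount rate
concrete   = npv(100.0, [(20, 30.0)], rate)   # one major repair at year 20
bituminous = npv(60.0, [(5, 35.0), (10, 35.0), (15, 35.0), (20, 35.0)], rate)

print(f"NPV concrete:   {concrete:.1f}")
print(f"NPV bituminous: {bituminous:.1f}")
# The alternative with the lower NPV of whole-life cost is the more feasible option.
```

With all alternatives discounted to present value, a higher initial cost can still win over the analysis period, which is the effect the abstract describes.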
Laboratory studies of dense bituminous mixes ii with reclaimed asphalt materialseSAT Journals
Abstract
The growing demand on our nation's roadways over the past couple of decades, decreasing budgetary funds, and the need to
provide a safe, efficient, and cost-effective roadway system have led to a dramatic increase in the need to rehabilitate existing
pavements and to build sustainable road infrastructure in India. These needs are today's burning issues and form the purpose of
this study.
In the present study, samples of existing bituminous layer materials were collected from the NH-48 (Devahalli to Hassan) site. The
mixtures were designed by the Marshall Method as per the Asphalt Institute (MS-II) at 20% and 30% Reclaimed Asphalt Pavement (RAP).
RAP material was blended with virgin aggregate such that all specimens conformed to the Dense Bituminous Macadam-II (DBM-II)
gradation as per the Ministry of Roads, Transport, and Highways (MoRT&H), and a cost analysis was carried out to assess the economics.
Laboratory results showed that the use of recycled materials produced significant variability in Marshall Stability, and the
variability increased with the increase in RAP content. Savings can be realized from the utilization of recycled materials as per the
methodology: the reduction in the total cost is 19% and 30%, compared with the virgin mixes.
Keywords: Reclaimed Asphalt Pavement, Marshall Stability, MS-II, Dense Bituminous Macadam-II
Laboratory investigation of expansive soil stabilized with natural inorganic ...eSAT Journals
This document summarizes a study on stabilizing expansive black cotton soil with the natural inorganic stabilizer RBI-81. Laboratory tests were conducted to evaluate the effect of RBI-81 on the soil's engineering properties. The tests showed that with 2% RBI-81 and 28 days of curing, the unconfined compressive strength increased by around 250% and the CBR value improved by approximately 400% compared to the untreated soil. Overall, the study found that RBI-81 effectively improved the strength properties of the black cotton soil and its suitability as a soil stabilizer was supported.
Influence of reinforcement on the behavior of hollow concrete block masonry p...eSAT Journals
Abstract
Reinforced masonry was developed to exploit the strength potential of masonry and to solve its lack of tensile strength. Experimental
and analytical studies have been carried out to investigate the effect of reinforcement on the behavior of hollow concrete block
masonry prisms under compression and to predict ultimate failure compressive strength. In the numerical program, three dimensional
non-linear finite elements (FE) model based on the micro-modeling approach is developed for both unreinforced and reinforced
masonry prisms using ANSYS (14.5). The proposed FE model uses multi-linear stress-strain relationships to model the non-linear
behavior of hollow concrete block, mortar, and grout. Willam-Warnke’s five parameter failure theory has been adopted to model the
failure of masonry materials. The comparison of the numerical and experimental results indicates that the FE models can successfully
capture the highly nonlinear behavior of the physical specimens and accurately predict their strength and failure mechanisms.
Keywords: Structural masonry, Hollow concrete block prism, grout, Compression failure, Finite element method,
Numerical modeling.
Influence of compaction energy on soil stabilized with chemical stabilizereSAT Journals
This document summarizes a study on the influence of compaction energy on soil stabilized with a chemical stabilizer. Laboratory tests were conducted on locally available loamy soil treated with a patented polymer liquid stabilizer and compacted at four different energy levels. The study found that increasing the compaction effort increased the density of both untreated and treated soil, but the rate of increase was lower for stabilized soil. Treating the soil with the stabilizer improved its unconfined compressive strength and resilient modulus, and reduced accumulated plastic strain, with these properties further improved by higher compaction efforts. The stabilized soil exhibited strength and performance benefits compared to the untreated soil.
Geographical information system (gis) for water resources managementeSAT Journals
This document describes a hydrological framework developed in the form of a Hydrologic Information System (HIS) to meet the information needs of various government departments related to water management in a state. The HIS consists of a hydrological database coupled with tools for collecting and analyzing spatial and non-spatial water resources data. It also incorporates a hydrological model to indirectly assess water balance components over space and time. A web-based GIS portal was created to allow users to access and visualize the hydrological data, as well as outputs from the SWAT hydrological model. The framework is intended to facilitate integrated water resources planning and management across different administrative levels.
Forest type mapping of bidar forest division, karnataka using geoinformatics ...eSAT Journals
Abstract
The study demonstrates the potential of satellite remote sensing techniques for the generation of baseline information on forest types,
including tree plantation details, in the Bidar forest division, Karnataka, covering an area of 5814.60 sq. km. Analysis of the
satellite data in the study area reveals that about 84% of the total area is covered by crop land, 1.778% by dry deciduous forest,
and 1.38% by mixed plantation, which is very threatening to the environmental stability of the forest; future plantation sites have
been mapped. With the use of the latest geo-informatics technology, the proper and exact condition of the trees can be observed and
necessary precautions can be taken for future plantation works in an appropriate manner.
Keywords:-RS, GIS, GPS, Forest Type, Tree Plantation
Factors influencing compressive strength of geopolymer concreteeSAT Journals
Abstract
This study examines the effects of several factors on the compressive strength of fly ash based geopolymer concrete and compares its
cost with that of normal concrete. The test variables were the molarity of sodium hydroxide (NaOH) (8M, 14M and 16M), the ratio of
NaOH to sodium silicate (Na2SiO3) (1, 1.5, 2 and 2.5), the alkaline liquid to fly ash ratio (0.35 and 0.40), and the replacement of
water in the Na2SiO3 solution by 10%, 20% and 30%. The test results indicated that the highest compressive strength of 54 MPa was
observed for 16M NaOH, a NaOH to Na2SiO3 ratio of 2.5 and an alkaline liquid to fly ash ratio of 0.35. The lowest compressive
strength of 27 MPa was observed for 8M NaOH, a NaOH to Na2SiO3 ratio of 1 and an alkaline liquid to fly ash ratio of 0.40. An
alkaline liquid to fly ash ratio of 0.35 with water replacement of 10% and 30% for 8M and 16M NaOH resulted in compressive
strengths of 36 MPa and 20 MPa respectively. A superplasticiser dosage of 2% by weight of fly ash gave higher strength in all cases.
Keywords: compressive strength, alkaline liquid, fly ash
Experimental investigation on circular hollow steel columns in filled with li...eSAT Journals
Abstract
Composite circular hollow steel tubes with and without GFRP infill, for three different grades of lightweight concrete, are tested for
ultimate load capacity and axial shortening under cyclic loading. Steel tubes are compared for different lengths, cross sections and
thicknesses. Specimens were tested after adopting Taguchi's L9 (Latin Squares) orthogonal array in order to reduce the number of
specimens and the experimental duration. Analysis was carried out using the ANN (Artificial Neural Network) technique with the
assistance of Minitab, a statistical software tool. Comparison of predicted, experimental and ANN outputs is obtained from linear
regression plots. From this research study, it can be concluded that (i) the cross-sectional area of the steel tube has the most
significant effect on ultimate load carrying capacity, (ii) as the length of the steel tube increases, the load carrying capacity
decreases, and (iii) ANN modeling predicted acceptable results. Thus the ANN tool can be utilized for predicting the ultimate load
carrying capacity of composite columns.
Keywords: Light weight concrete, GFRP, Artificial Neural Network, Linear Regression, Back propagation, orthogonal
Array, Latin Squares
Experimental behavior of circular hsscfrc filled steel tubular columns under ...eSAT Journals
This document summarizes an experimental study that tested circular concrete-filled steel tube columns with varying parameters. 45 specimens were tested with different fiber percentages (0-2%), tube diameter-to-wall-thickness ratios (D/t from 15-25), and length-to-diameter (L/d) ratios (from 2.97-7.04). The results found that columns filled with fiber-reinforced concrete exhibited higher stiffness, equal ductility, and enhanced energy absorption compared to those filled with plain concrete. The load carrying capacity increased with fiber content up to 1.5% but not at 2.0%. The analytical predictions of failure load closely matched the experimental values.
Evaluation of punching shear in flat slabseSAT Journals
Abstract
Flat-slab construction has been widely used in construction today because of many advantages that it offers. The basic philosophy in
the design of flat slab is to consider only gravity forces; this method ignores the effect of punching shear due to unbalanced moments
at the slab column junction which is critical. An attempt has been made to generate generalized design sheets which accounts both
punching shear due to gravity loads and unbalanced moments for cases (a) interior column; (b) edge column (bending perpendicular
to shorter edge); (c) edge column (bending parallel to shorter edge); (d) corner column. These design sheets are prepared as per
codal provisions of IS 456-2000. These design sheets will be helpful in calculating the shear reinforcement to be provided at the
critical section which is ignored in many design offices. Apart from its usefulness in evaluating punching shear and the necessary
shear reinforcement, the design sheets developed will enable the designer to fix the depth of flat slab during the initial phase of the
design.
Keywords: Flat slabs, punching shear, unbalanced moment.
Evaluation of performance of intake tower dam for recent earthquake in indiaeSAT Journals
Abstract
Intake towers are typically tall, hollow, reinforced concrete structures and form the entrance to reservoir outlet works. A parametric
study on the dynamic behavior of circular cylindrical towers has been carried out to study the effect of depth of submergence, wall
thickness and slenderness ratio, as well as the effect on the tower of dynamic time history analysis for different soil conditions,
with the added hydrodynamic mass of the surrounding and inside water accounted for following Goyal and Chopra.
Key words: Hydrodynamic mass, Depth of submergence, Reservoir, Time history analysis
Evaluation of operational efficiency of urban road network using travel time ...eSAT Journals
This document evaluates the operational efficiency of an urban road network in Tiruchirappalli, India using travel time reliability measures. Traffic volume and travel times were collected using video data from 8-10 AM on various roads. Average travel times, 95th percentile travel times, and buffer time indexes were calculated to assess reliability. Non-motorized vehicles were found to most impact reliability on one road. A relationship between buffer time index and traffic volume was developed. Finally, a travel time model was created and validated based on length, speed, and volume.
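The reliability measures mentioned here (average travel time, 95th percentile travel time, buffer time index) can be computed as in the following sketch; the travel-time observations are hypothetical, not the Tiruchirappalli data:

```python
import statistics

def buffer_time_index(travel_times_min):
    """Buffer Time Index: extra 'buffer' a traveller must add to the average
    travel time to arrive on time 95% of the time, as a fraction of the average."""
    avg = statistics.mean(travel_times_min)
    # simple rank-based 95th percentile over the sorted observations
    p95 = sorted(travel_times_min)[int(0.95 * (len(travel_times_min) - 1))]
    return (p95 - avg) / avg

# Hypothetical travel-time observations (minutes) for one road segment, 8-10 AM
times = [12.1, 12.8, 13.0, 13.4, 14.2, 15.0, 15.5, 16.1, 18.9, 22.4]
print(f"average = {statistics.mean(times):.1f} min, BTI = {buffer_time_index(times):.2f}")
```

A higher index means a traveller must budget proportionally more slack over the average trip time, i.e. the segment is less reliable.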
Estimation of surface runoff in nallur amanikere watershed using scs cn methodeSAT Journals
Abstract
The development of watershed aims at productive utilization of all the available natural resources in the entire area extending from
ridge line to stream outlet. The per capita availability of land for cultivation has been decreasing over the years. Therefore, water and
the related land resources must be developed, utilized and managed in an integrated and comprehensive manner. Remote sensing and
GIS techniques are being increasingly used for planning, management and development of natural resources. The study area, Nallur
Amanikere watershed, geographically lies between 11º 38' and 11º 52' N latitude and 76º 30' and 76º 50' E longitude, with an area of
415.68 sq. km. Thematic layers such as land use/land cover and soil maps were derived from remotely sensed data and overlaid
through ArcGIS software to assign the curve number polygon-wise. The daily rainfall data of six rain gauge stations in and around
the watershed (2001-2011) was used to estimate the daily runoff from the watershed using the Soil Conservation Service - Curve Number
(SCS-CN) method. The runoff estimated from the SCS-CN model was then used to study the variation of runoff potential with different
land use/land cover and soil conditions.
Keywords: Watershed, Nallur watershed, Surface runoff, Rainfall-Runoff, SCS-CN, Remote Sensing, GIS.
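The SCS-CN runoff computation used in studies like this follows a standard formulation, sketched below in SI units with the usual initial-abstraction ratio of 0.2; the rainfall depth and curve number are illustrative, not taken from the watershed:

```python
def scs_cn_runoff(rainfall_mm, curve_number, ia_ratio=0.2):
    """Daily runoff depth (mm) by the SCS Curve Number method.

    S  = potential maximum retention (mm) = 25400/CN - 254
    Ia = initial abstraction, usually 0.2 * S
    Q  = (P - Ia)^2 / (P - Ia + S)  for P > Ia, else 0
    """
    s = 25400.0 / curve_number - 254.0
    ia = ia_ratio * s
    if rainfall_mm <= ia:
        return 0.0
    return (rainfall_mm - ia) ** 2 / (rainfall_mm - ia + s)

# Illustrative: 60 mm of daily rainfall on a polygon with composite CN = 75
print(f"runoff = {scs_cn_runoff(60.0, 75):.1f} mm")
```

Each land use/soil polygon gets its own curve number, so the same storm yields different runoff depths across the watershed, which is the variation the study maps.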
Estimation of morphometric parameters and runoff using rs & gis techniqueseSAT Journals
This document summarizes a study that used remote sensing and GIS techniques to estimate morphometric parameters and runoff for the Yagachi catchment area in India over a 10-year period. Morphometric analysis was conducted to understand the hydrological response at the micro-watershed level. Daily runoff was estimated using the SCS curve number model. The results showed a positive correlation between rainfall and runoff. Land use/land cover changes between 2001-2010 were found to impact estimated runoff amounts. Remote sensing approaches provided an effective means to model runoff for this large, ungauged area.
Effect of variation of plastic hinge length on the results of non linear anal...eSAT Journals
Abstract
The nonlinear static procedure, also well known as pushover analysis, is a method wherein monotonically increasing loads are applied to the structure till the structure is unable to resist any further load. It is a popular tool for seismic performance evaluation of existing and new structures. In the literature, a lot of research has been carried out on conventional pushover analysis, and after identifying its deficiencies, efforts have been made to improve it. But actual test results to verify analytically obtained pushover results are rarely available. It has been found that some amount of variation is always expected in the seismic demand prediction of pushover analysis. An initial study is carried out considering user-defined hinge properties and default hinge length. An attempt is then made to assess the variation of pushover analysis results by considering user-defined hinge properties and various hinge length formulations available in the literature, and the results are compared with experimentally obtained results from a test carried out on a G+2 storied RCC framed structure. For the present study, two geometric models, viz. the bare frame and the rigid frame model, are considered, and it is found that the results of pushover analysis are very sensitive to the geometric model and hinge length adopted.
Keywords: Pushover analysis, Base shear, Displacement, hinge length, moment curvature analysis
Effect of use of recycled materials on indirect tensile strength of asphalt c...eSAT Journals
Abstract
Depletion of natural resources and aggregate quarries for road construction makes material procurement a serious problem; hence
recycling or reuse of material is beneficial. With the present-day emphasis on sustainable construction, recycling of asphalt
pavements is one of the effective and proven rehabilitation processes. For the laboratory investigations, reclaimed asphalt
pavement (RAP) from NH-4 and crumb rubber modified binder (CRMB-55) were used. Foundry waste was used as a replacement for
conventional filler. Laboratory tests were conducted on asphalt concrete mixes with 30, 40, 50, and 60 percent replacement with RAP.
These test results were compared with conventional mixes and with asphalt concrete mixes made with fully binder-extracted RAP
aggregates. Mix design was carried out by the Marshall Method. The Marshall tests indicated the highest stability values for asphalt
concrete (AC) mixes with 60% RAP. The optimum binder content (OBC) decreased with an increase in RAP in the AC mixes. The Indirect
Tensile Strength (ITS) of AC mixes with RAP was also found to be higher than that of conventional AC mixes at 30°C.
Keywords: Reclaimed asphalt pavement, Foundry waste, Recycling, Marshall Stability, Indirect tensile strength.
Harnessing WebAssembly for Real-time Stateless Streaming PipelinesChristina Lin
Traditionally, dealing with real-time data pipelines has involved significant overhead, even for straightforward tasks like data transformation or masking. However, in this talk, we’ll venture into the dynamic realm of WebAssembly (WASM) and discover how it can revolutionize the creation of stateless streaming pipelines within a Kafka (Redpanda) broker. These pipelines are adept at managing low-latency, high-data-volume scenarios.
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
Understanding Inductive Bias in Machine LearningSUTEJAS
This presentation explores the concept of inductive bias in machine learning. It explains how algorithms come with built-in assumptions and preferences that guide the learning process. You'll learn about the different types of inductive bias and how they can impact the performance and generalizability of machine learning models.
The presentation also covers the positive and negative aspects of inductive bias, along with strategies for mitigating potential drawbacks. We'll explore examples of how bias manifests in algorithms like neural networks and decision trees.
By understanding inductive bias, you can gain valuable insights into how machine learning models work and make informed decisions when building and deploying them.
The CBC machine is a common diagnostic tool used by doctors to measure a patient's red blood cell count, white blood cell count and platelet count. The machine uses a small sample of the patient's blood, which is then placed into special tubes and analyzed. The results of the analysis are then displayed on a screen for the doctor to review. The CBC machine is an important tool for diagnosing various conditions, such as anemia, infection and leukemia. It can also help to monitor a patient's response to treatment.
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...IJECEIAES
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to precisely delineate tumor boundaries from magnetic resonance imaging (MRI) scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The model is rigorously trained and evaluated, exhibiting remarkable performance metrics, including an impressive global accuracy of 99.286%, a high class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of our proposed model. These findings underscore the model's competence in precise brain tumor localization, underscoring its potential to revolutionize medical image analysis and enhance healthcare outcomes. This research paves the way for future exploration and optimization of advanced CNN models in medical imaging, emphasizing addressing false positives and resource efficiency.
Comparative analysis between traditional aquaponics and reconstructed aquapon...bijceesjournal
The aquaponic system of planting is a method that does not require soil usage. It is a method that only needs water, fish, lava rocks (a substitute for soil), and plants. Aquaponic systems are sustainable and environmentally friendly. Its use not only helps to plant in small spaces but also helps reduce artificial chemical use and minimizes excess water use, as aquaponics consumes 90% less water than soil-based gardening. The study applied a descriptive and experimental design to assess and compare conventional and reconstructed aquaponic methods for reproducing tomatoes. The researchers created an observation checklist to determine the significant factors of the study. The study aims to determine the significant difference between traditional aquaponics and reconstructed aquaponics systems propagating tomatoes in terms of height, weight, girth, and number of fruits. The reconstructed aquaponics system’s higher growth yield results in a much more nourished crop than the traditional aquaponics system. It is superior in its number of fruits, height, weight, and girth measurement. Moreover, the reconstructed aquaponics system is proven to eliminate all the hindrances present in the traditional aquaponics system, which are overcrowding of fish, algae growth, pest problems, contaminated water, and dead fish.
ACEP Magazine edition 4th launched on 05.06.2024Rahul
This document provides information about the third edition of the magazine "Sthapatya" published by the Association of Civil Engineers (Practicing) Aurangabad. It includes messages from current and past presidents of ACEP, memories and photos from past ACEP events, information on life time achievement awards given by ACEP, and a technical article on concrete maintenance, repairs and strengthening. The document highlights activities of ACEP and provides a technical educational article for members.
Use PyCharm for remote debugging of WSL on a Windows machineshadow0702a
This document serves as a comprehensive step-by-step guide on how to effectively use PyCharm for remote debugging of the Windows Subsystem for Linux (WSL) on a local Windows machine. It meticulously outlines several critical steps in the process, starting with the crucial task of enabling permissions, followed by the installation and configuration of WSL.
The guide then proceeds to explain how to set up the SSH service within the WSL environment, an integral part of the process. Alongside this, it also provides detailed instructions on how to modify the inbound rules of the Windows firewall to facilitate the process, ensuring that there are no connectivity issues that could potentially hinder the debugging process.
The document further emphasizes on the importance of checking the connection between the Windows and WSL environments, providing instructions on how to ensure that the connection is optimal and ready for remote debugging.
It also offers an in-depth guide on how to configure the WSL interpreter and files within the PyCharm environment. This is essential for ensuring that the debugging process is set up correctly and that the program can be run effectively within the WSL terminal.
Additionally, the document provides guidance on how to set up breakpoints for debugging, a fundamental aspect of the debugging process which allows the developer to stop the execution of their code at certain points and inspect their program at those stages.
Finally, the document concludes by providing a link to a reference blog. This blog offers additional information and guidance on configuring the remote Python interpreter in PyCharm, providing the reader with a well-rounded understanding of the process.
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODELgerogepatton
As digital technology becomes more deeply embedded in power systems, protecting the communication networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3) represents a multi-tiered application layer protocol extensively utilized in Supervisory Control and Data Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control functionalities. Robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation because the interconnection of these networks makes them vulnerable to a variety of cyberattacks. To address this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion detection in smart grids. The proposed approach is a combination of the Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) algorithms. We employed a recent intrusion detection dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to train and test our model. The results of our experiments show that our CNN-LSTM method is much better at finding smart grid intrusions than other deep learning algorithms used for classification. In addition, our proposed approach improves accuracy, precision, recall, and F1 score, achieving a high detection accuracy rate of 99.50%.
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
A clonal based algorithm for the reconstruction of genetic network using S-system
IJRET: International Journal of Research in Engineering and Technology eISSN: 2319-1163 | pISSN: 2321-7308
__________________________________________________________________________________________
Volume: 02 Issue: 08 | Aug-2013, Available @ http://www.ijret.org 44
A CLONAL BASED ALGORITHM FOR THE RECONSTRUCTION OF
GENETIC NETWORK USING S-SYSTEM
Jereesh A S¹, Govindan V K²
¹Research scholar, ²Professor, Department of Computer Science & Engineering, National Institute of Technology,
Calicut, Kerala, India, jereesh.a.s@gmail.com, vkg@nitc.ac.in
Abstract
Motivation: Gene regulatory network is the network based approach to represent the interactions between genes. DNA microarray is
the most widely used technology for extracting the relationships between thousands of genes simultaneously. Gene microarray
experiment provides the gene expression data for a particular condition and varying time periods. The expression of a particular gene
depends upon the biological conditions and other genes. In this paper, we propose a new method for the analysis of microarray data.
The proposed method makes use of S-system, which is a well-accepted model for the gene regulatory network reconstruction. Since
the problem has multiple solutions, we have to identify an optimized solution. Evolutionary algorithms have been used to solve such
problems. Though there are a number of attempts already been carried out by various researchers, the solutions are still not that
satisfactory with respect to the time taken and the degree of accuracy achieved. Therefore, there is a need of huge amount further
work in this topic for achieving solutions with improved performances.
Results: In this work, we propose a clonal selection algorithm for identifying the optimal gene regulatory network. The approach is
tested on real-life data: the SOS E. coli DNA repair gene expression data. It is observed that the proposed algorithm converges
much faster and provides better results than the existing algorithms.
Index Terms: Microarray analysis, Evolutionary Algorithm, Artificial Immune System, S-system, Gene Regulatory
Network, SOS E. coli DNA repair, Clonal Selection Algorithm.
-----------------------------------------------------------------------***-----------------------------------------------------------------------
1. INTRODUCTION
DNA microarray is a modern technology used to analyze the interactions between thousands of genes in parallel
[7]. Exploiting the hybridization property of cDNA, transcript abundance information is measured in a microarray
experiment. Microarrays have numerous applications. A particular set of genes is activated under a particular condition,
and identification of the activated genes is useful for recovering or inducing such conditions artificially. Even though the
technology is well developed, direct biological methods available for determining gene expression are complex, and analysis of
protein expression data is very expensive due to the complex structures of proteins.
Microarray data analysis involves methodologies and techniques to analyze the data obtained after microarray
experiments. The major part of microarray data analysis is the numerical analysis of the normalized data matrix. Gene
expression analysis is a large-scale experiment that comes under functional genomics, which deals with the analysis of large data
sets to identify the functions of and interactions between genes [24]. A set of algorithms and methods has been defined for the
analysis of microarray data, and there is a tradeoff between time and accuracy in the choice of algorithm.
A Gene Regulatory Network (GRN) is a network of the set of
genes involved in a particular process. In a GRN, each node
represents a gene, and the links between genes define the
relationships between those genes. The gene regulatory
network is thus a network-based approach to represent the
interactions between genes. The expression of a particular
gene depends upon the biological conditions and other genes.
A gene microarray experiment provides gene expression data
for a particular condition over varying time periods.
Identifying such a network leads to various applications in
biology and medicine. The objective of this paper is to
propose a new method that yields substantial improvements in
processing time and accuracy. The high dimensionality of the
microarray data matrix makes the identification of a GRN
complex. In this paper, optimization of the S-system model
using an artificial immune system is proposed.
The rest of this paper is organized as follows. A brief survey
of existing work is given in Section 2. Section 3
presents the mathematical model used for modeling the
gene regulatory network and the algorithm for the optimization
2. IJRET: International Journal of Research in Engineering and Technology eISSN: 2319-1163 | pISSN: 2321-7308
__________________________________________________________________________________________
Volume: 02 Issue: 08 | Aug-2013, Available @ http://www.ijret.org 45
process. Section 4 describes the experimental setup and
compares the results of the new proposal with the existing
approach. Section 5 discusses the results obtained by the
proposed method on the real-life data set of the SOS E. coli
DNA repair gene expression. Finally, the paper is concluded
in Section 6.
2. LITERATURE SURVEY
Several mathematical models have been applied to gene
regulatory network reconstruction. One of the earliest
mathematical models was based on the Random Boolean
Network [1]. According to this model, each gene is either in
the on or the off state. The state space of a Boolean network is
2^N, where N is the number of genes in the microarray. This
model gives information about gene states but does not
provide the expression levels of genes.
Zhang et al. [25] suggested a Bayesian network model based
on joint probability distributions. This model uses a DAG
(Directed Acyclic Graph) structure for modeling. Since gene
regulatory networks exhibit cyclic dependencies between gene
nodes, this type of model is not efficient for inferring gene
networks.
Another important work [17] is the modeling of gene
regulatory networks using an ANN (Artificial Neural
Network) with the standard back-propagation method. The
number of inputs and outputs required for this model is N,
where N is the number of genes in the microarray data set. The
structural complexity of the ANN model increases as the
number of genes increases; hence, this model is not efficient
for large data sets.
Reverse engineering using evolutionary algorithms can be
applied to solve such optimization problems. The genetic
algorithm is one of the major evolutionary algorithms that can
be used to construct gene networks. Spieth et al. [21]
proposed a memetic inference method for gene regulatory
networks based on the S-system, a popular mathematical
model proposed by Savageau [20]. The memetic algorithm
uses a combination of a genetic algorithm and evolution
strategies [21].
A multi-objective phenomic algorithm proposed by Rio
D'Souza et al. [8] is an advanced method that concentrates
on multiple objectives such as the Number of Links (NoL) and
the Small World Similarity Factor (SWSF). Rio D'Souza et al.
[9] propose an Integrated Pheneto-Genetic Algorithm (IPGA),
which combines the S-system model [20] with the memetic
algorithm proposed by Spieth et al. [21]. The memetic
algorithm [21] makes use of a genetic algorithm to identify
populations of possible network structures. For N genes, GA
is used to identify the best solution from among the candidate
combinations by optimizing the error or fitness value. The
memetic algorithm is superior to existing evolutionary
algorithms such as the standard evolution strategy and
skeletalizing (an extension of the standard GA) for this
particular problem [21].
Nonetheless, although the above algorithms are well
established, the tradeoff between the time, space and accuracy
of these algorithms remains an issue to be addressed. In this
paper, we propose a new approach to optimize the model
parameters for the reconstruction of gene networks with
improved performance.
3. PROPOSED METHOD
3.1 MODEL
S-systems are a type of power-law formalism suggested by
Savageau [20] and defined as follows:

dXi/dt = αi ∏(j=1..N) Xj^Gij - βi ∏(j=1..N) Xj^Hij    (1)

where Gij and Hij are kinetic exponents, αi and βi are positive
rate constants, and these values are optimized using evolution
strategies. According to the S-system equation (1), 2N(1+N)
values are to be optimized for each individual in a population,
where N is the total number of genes in the microarray data set.
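As a minimal sketch (assuming NumPy; the function and variable names are illustrative, not from the paper), the S-system right-hand side and the resulting parameter count can be written as:

```python
import numpy as np

def s_system_rhs(x, alpha, beta, G, H):
    """Right-hand side of the S-system ODE, equation (1).

    x     : (N,) current expression levels (must be positive)
    alpha : (N,) positive production rate constants
    beta  : (N,) positive degradation rate constants
    G, H  : (N, N) kinetic exponents Gij and Hij
    Returns dX/dt as an (N,) array.
    """
    # prod_j x_j**G[i, j] for every i, computed via logs:
    # exp(sum_j G[i, j] * ln x_j) = prod_j x_j**G[i, j]
    prod_G = np.exp(G @ np.log(x))
    prod_H = np.exp(H @ np.log(x))
    return alpha * prod_G - beta * prod_H

# For N genes, the optimizer must estimate 2N(1+N) parameters:
# alpha (N) + beta (N) + G (N*N) + H (N*N).
N = 5
print(2 * N * (1 + N))  # 60 parameters for a 5-gene network
```

With all exponents zero the production and degradation terms are both constant, so the derivative reduces to alpha - beta; this makes the log-space product form easy to sanity-check.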
We propose to employ an optimization technique known as the
Clonal selection algorithm, which is faster than the genetic
algorithm. The Clonal selection algorithm is a technique used
in artificial immune systems. A brief description of the
artificial immune system and the Clonal selection algorithm
follows.
3.2 ARTIFICIAL IMMUNE SYSTEM (AIS)
The Artificial Immune System is based on the theory of the
biological immune system. In the biological immune system,
foreign materials that try to intrude into the body are
identified and eliminated. These foreign materials are called
pathogens. Each pathogen carries molecules called antigens,
which are recognized by antibodies. There are two types of
immunity in the body: the innate immune system and the
adaptive immune system [2]. The innate immune system is
static and generic to all bodies; it provides the basic level of
protection against pathogens [6]. The adaptive immune
system is self-adaptive in nature and works against specific
antigens; this type of immunity remembers previous attacks
and strengthens the immune response. In an artificial immune
system, the principles of the biological immune system are
used to solve various computational problems. Clonal
selection is one of the theories that explain the process of
immunity.
3.3 CLONAL SELECTION ALGORITHM
The response of the immune system to infection explained by
Burnet is a well-known theory in immunology [4]. In this
theory, clonal selection explains the response of the adaptive
immune system to antigens. In 2002, de Castro and Von
Zuben proposed a clonal-selection-based algorithm called
CLONALG [6]. Clonal selection algorithms follow the
biological adaptive immune system, which consists of
antibodies and antigens [2]. This type of algorithm treats each
candidate solution as an antibody, and the set of antibodies is
called the population. At each generation, selection, cloning,
affinity maturation and reselection are applied to the
population to generate a new population with better affinity.
In this algorithm, affinity is calculated with the help of the
fitness value. As there is no recombination/crossover step in
the Clonal selection algorithm, it is faster than the genetic
algorithm; hence, the basic Clonal selection algorithm is used
to optimize the S-system model. The Clonal algorithm for the
optimization of the S-system model is given below:
Algorithm 1: CLONAL-based Algorithm
Require: Maximum number of generations; error tolerance
Ensure: Optimal antibody
a. Start.
b. Generation := 0
c. Pop(Generation) := Init(Clonal_pop)
d. Evaluate_Fitness(Pop(Generation))
e. while termination criteria not met do
   i.   Selected_Pop(Generation) := Selection(Pop(Generation))
   ii.  Cloned_Pop(Generation) := Clone(Selected_Pop(Generation))
   iii. Pop(Generation) := Maturation(Cloned_Pop(Generation))
   iv.  Evaluate_Fitness(Pop(Generation))
   v.   Pop(Generation+1) := Re_Selection(Pop(Generation))
   vi.  Generation := Generation + 1
f. end while
g. Stop.
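The steps of Algorithm 1 can be sketched as a minimal clonal-selection optimizer in Python. The population size, cloning factor, mutation scale and the toy fitness function below are illustrative assumptions, not values from the paper:

```python
import random

def clonal_search(fitness, dim, pop_size=20, n_select=5,
                  clones_per_parent=4, sigma=0.1, generations=200, seed=0):
    """Minimal clonal-selection optimizer (minimizes `fitness`).

    Follows Algorithm 1: init -> select -> clone -> maturate
    (mutate) -> re-select, repeated until the generation budget
    is exhausted. All numeric settings here are illustrative.
    """
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the n_select antibodies with best affinity
        pop.sort(key=fitness)
        selected = pop[:n_select]
        # Cloning + affinity maturation: perturb each clone slightly
        clones = []
        for parent in selected:
            for _ in range(clones_per_parent):
                clones.append([g + rng.gauss(0, sigma) for g in parent])
        # Re-selection: next generation = best of parents + matured clones
        pool = selected + clones
        pool.sort(key=fitness)
        pop = pool[:pop_size]
        # Note: no crossover/recombination step, unlike a GA
    return min(pop, key=fitness)

# Toy usage: minimize the sphere function in 3 dimensions
best = clonal_search(lambda v: sum(g * g for g in v), dim=3)
print(sum(g * g for g in best))  # close to 0
```

Keeping the unmutated parents in the re-selection pool makes the loop elitist, so the best fitness found never worsens from one generation to the next.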
Fitness function: The proposed method uses the following
fitness function proposed by Tominaga et al. [23]:

f = Σ(i=1..N) Σ(t=1..T) ((Xcal_i,t - Xexp_i,t) / Xexp_i,t)²

where Xcal_i,t and Xexp_i,t are the expression values of gene i
at time t from the estimated (calculated) and experimental
data, respectively.
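As a sketch, this relative squared error can be computed as follows (assuming NumPy; the array names are illustrative):

```python
import numpy as np

def tominaga_fitness(x_cal, x_exp):
    """Sum over genes i and time points t of
    ((Xcal[i, t] - Xexp[i, t]) / Xexp[i, t])**2, i.e. the relative
    squared error. Lower is better; 0 means the model reproduces
    the experimental data exactly."""
    x_cal = np.asarray(x_cal, dtype=float)
    x_exp = np.asarray(x_exp, dtype=float)
    return float(np.sum(((x_cal - x_exp) / x_exp) ** 2))

# Identical calculated and experimental series give zero error
print(tominaga_fitness([[1.0, 2.0]], [[1.0, 2.0]]))  # 0.0
```

Dividing by the experimental value weights every gene and time point relatively, so genes with small absolute expression levels are not drowned out by highly expressed ones.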
4. EXPERIMENTAL SET UP AND RESULTS
For the experimentation, the standard artificial gene regulatory
network, given in Table 1, used by various researchers [12, 13,
14, 16, 18, 19] is made use of. This network consists of 5
genes. The Runge-kutta algorithm is used to infer standard
microarray data using the S-system model [13]. In order to
confirm the ability of proposed method to infer the gene
regulatory network we generated 10 sets of expression data
artificially. Initial values of these sets are randomly generated
in the range [0, 1] as shown in Table 2.The 10 sets of time
series data are obtained using equation(1) and S-system
parameters given in Table 1,with T=11 and G=5; so totally
10*11*5=550expression values are observed. A sample Time
dynamics of the 5 dimensional regulatory system inferred is
shown in Fig.1where duration of 0.0 to 0.5 is divided into 11
equi-distance samples, and 10 points are computed between
each sampling point.
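A minimal sketch of producing one such time series by integrating the S-system with a fixed-step fourth-order Runge-Kutta scheme follows; the two-gene parameter values below are illustrative placeholders, not the Table 1 network:

```python
import numpy as np

def s_system_rhs(x, alpha, beta, G, H):
    """S-system ODE right-hand side:
    alpha_i * prod_j x_j**G[i, j] - beta_i * prod_j x_j**H[i, j]."""
    return alpha * np.exp(G @ np.log(x)) - beta * np.exp(H @ np.log(x))

def rk4_series(x0, alpha, beta, G, H, t_end=0.5, n_samples=11, substeps=10):
    """Integrate from t=0 to t_end, recording n_samples equidistant
    points with `substeps` RK4 steps between consecutive samples,
    mirroring the 11 samples / 10 intermediate points in the text."""
    h = t_end / ((n_samples - 1) * substeps)
    x = np.array(x0, dtype=float)
    series = [x.copy()]
    f = lambda y: s_system_rhs(y, alpha, beta, G, H)
    for _ in range(n_samples - 1):
        for _ in range(substeps):
            # Classic fourth-order Runge-Kutta step
            k1 = f(x)
            k2 = f(x + 0.5 * h * k1)
            k3 = f(x + 0.5 * h * k2)
            k4 = f(x + h * k3)
            x = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        series.append(x.copy())
    return np.array(series)  # shape (n_samples, N)

# Illustrative 2-gene system: G = 0, H = I gives dx/dt = 1 - x
N = 2
series = rk4_series(x0=[0.7, 0.3],
                    alpha=np.ones(N), beta=np.ones(N),
                    G=np.zeros((N, N)), H=np.eye(N))
print(series.shape)  # (11, 2)
```

For the chosen placeholder parameters the ODE has the closed form x(t) = 1 + (x0 - 1)e^(-t), which makes the integrator easy to verify.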
In order to confirm the effectiveness of the proposed model,
both the proposed algorithm and the standard memetic
algorithm have been implemented and applied to the standard
artificial genetic network [12, 13, 14, 16, 18, 19]. Since these
algorithms are stochastic in nature, we have to test on multiple
data sets. After computing the model parameters, the
microarray data set is regenerated and compared with the
original. We have used 350,000 fitness evaluations in the
comparative study. The Mean Squared Error (MSE) [23] is
used as the error evaluation metric.
Fig. 1: A sample time dynamics of the 5-dimensional
regulatory system using the parameters in Table 1.
Fig. 2 shows the comparison of average error (MSE) versus
fitness evaluations obtained for the memetic and proposed
methods over 3.5 lakh fitness evaluations. Since the memetic
algorithm uses a genetic algorithm for optimization, the
overall error reduces after some iterations. In the memetic
algorithm, the S-system parameters are optimized for the
reconstruction of the gene regulatory network; for each
generation of the genetic algorithm, an evolution strategy with
covariance matrix adaptation (CMA) has to be performed. The
evolution strategy is a local-search evolutionary algorithm
that is quite similar to the genetic algorithm. Due to the hybrid
nature of the algorithm, a huge amount of computation is
required for the processing. For the memetic algorithm,
convergence happens only after 20 lakh fitness evaluations
[21]. The proposed method converged after 3.5 lakh fitness
evaluations, at which point the standard memetic algorithm is
still far from convergence. Hence, it is observed that the
proposed algorithm converges much faster than the existing
memetic algorithm.
Fig. 2: Comparison of average error (MSE) obtained for the
memetic algorithm and the proposed approach; the proposed
algorithm converges at about 3.5 lakh fitness evaluations.
5. DISCUSSION
5.1 ANALYSIS OF REAL LIFE DATA USING THE
PROPOSED METHOD
In order to assess the performance of a method, it should be
evaluated on real-life data. We employed the well-known
real-life dataset of the SOS DNA repair system in E. coli [22]
to study the performance of the proposed method. Fig. 3
graphically describes the interactions during the repair of
E. coli DNA when DNA damage occurs. According to this
system, when damage happens, the RecA protein immediately
identifies the damage and invokes the cleavage of the LexA
protein without the help of enzymes. Thus, the concentration
of LexA decreases. Due to the reduction of LexA, the other
proteins in the SOS system activate the DNA repair process;
the LexA protein acts as a repressor in the system. After the
DNA is repaired, the concentration of activated RecA drops;
in effect, the automatic cleavage of LexA stops. Finally, the
concentration of LexA increases and represses the other
genes. This leads to a stable state, which continues until the
next damage occurs.
The SOS data, obtained from the website
www.weizmann.ac.il/mcb/UriAlon/Papers/SOSData/, is the
result of experiments done at the Uri Alon lab of the
Weizmann Institute of Science. There are 4 experimental
results, each with 8 proteins and 50 time points. As the first
time point represents 0 seconds, all initial expression values
are zero. Since the first time point contains no information, it
was removed, and the remaining 49 time points were used for
the modeling. From the previous literature [3, 5, 10, 12, 15,
16], it was identified that, out of the 8 genes, 6 major genes
(uvrD, umuD, lexA, recA, uvrA and polB) and the last 2
experimental results are required for the accurate prediction of
the SOS gene regulatory system. Each gene expression value
is normalized into the interval [0, 1].
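The per-gene normalization into [0, 1] mentioned above can be sketched as follows. The paper only states that values are normalized to [0, 1], so the ordinary min-max scaling used here is an assumption:

```python
import numpy as np

def minmax_normalize(expr):
    """Rescale each gene's time series (one row per gene) into [0, 1].

    Standard min-max scaling; the exact scheme is an assumption, as
    the paper does not specify one. Constant rows are mapped to 0
    to avoid division by zero.
    """
    expr = np.asarray(expr, dtype=float)
    lo = expr.min(axis=1, keepdims=True)
    hi = expr.max(axis=1, keepdims=True)
    span = np.where(hi > lo, hi - lo, 1.0)
    return (expr - lo) / span

# Two genes over three time points; the second gene is constant
data = np.array([[0.0, 5.0, 10.0],
                 [2.0, 2.0, 2.0]])
print(minmax_normalize(data))
```

Scaling each gene independently keeps the relative shape of every time series while putting all genes on a common [0, 1] range.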
Fig. 3: SOS DNA repair system of E. coli.
Implementation of the proposed approach on the SOS data set
inferred the gene network of Fig. 4. Since the given
microarray data is real, it is contaminated with noise, and
hence the accuracy of the proposed algorithm depends on the
degree of noise. As biological systems are so complex, it is
difficult to extract all the hidden facts in the system even with
biological experiments. Therefore, the SOS DNA repair
system of E. coli identified in Fig. 4 may not contain all the
relationships; there is still a possibility of finding new
relationships. The gene network obtained by the proposed
method identified inhibitions from LexA to LexA, uvrD,
uvrA, recA and polB. The proposed method also correctly
identified regulations from recA to recA and from recA to
lexA. Some more relations identified by the proposed method
and reported by other researchers are given in Table 3.
Table 3: Relations identified by the proposed approach that have also been identified by previous researchers

Gene: Predicted relations and the references where these have already been identified
uvrD: uvrD -| uvrD (12, 5, 15, 11), uvrD -| umuDc (15), uvrD -| lexA (15), uvrD → polB (10, 11)
lexA: LexA -| LexA (3, 5, 10, 12, 16), LexA -| uvrD (5, 10, 12, 16), LexA -| recA (3, 12, 16), LexA -| uvrA (5, 10, 11, 12), LexA → uvrA (11, 15), LexA -| polB (5, 11, 12, 16), LexA → polB (11, 15)
umuDc: umuDc -| umuDc (3, 5, 15, 11), umuDc -| recA (16, 3, 11), umuDc -| polB (11), umuDc → uvrA (11), umuDc -| lexA (3, 15, 11)
recA: recA → uvrA (11), recA -| uvrA (15, 10), recA -| umuDc (12, 15, 10, 11)
uvrA: uvrA -| uvrA (16, 12, 5, 15, 11), uvrA -| recA (11), uvrA -| umuDc (16, 10), uvrA -| lexA (15, 10), uvrA → uvrD (16, 12, 10, 11), uvrA -| polB (16, 11)
polB: polB -| polB (12, 5, 11), polB → uvrD (11), polB -| recA (16, 11), polB → uvrA (11), polB -| uvrA (11)
Thus, out of the total 33 relations, 30 have already been
proposed by previous researchers. The remaining may be
relations that have not been found yet, or false positives.
Hence, it is demonstrated that the proposed algorithm can be
used for real-life applications.
Fig. 4: SOS DNA repair system of E. coli identified by the
proposed method (dashed lines indicate inhibition and
solid lines indicate activation); 33 relations are identified.
CONCLUSIONS
Gene regulatory network reconstruction is a major problem in
bioinformatics. Existing methods for GRN reconstruction
either take long computation times to converge or are poor in
the accuracy of identifying relations. This paper proposes a
clonal-selection-based approach using the S-system model.
The model parameters are computed through optimization
employing the basic Clonal selection algorithm. The
performance of the model is compared with the existing
standard memetic algorithm and found to be superior with
respect to execution time and accuracy; convergence is
achieved with a much smaller number of fitness evaluations
than the standard memetic algorithm. The results obtained on
the SOS DNA repair system of E. coli demonstrate that the
proposed approach identified most of the relations reported by
previous researchers. This amply demonstrates that the
approach is powerful and applicable to real-life data.
REFERENCES
[1]. Akutsu, T., Miyano, S., Kuhara, S., et al. (1999). Identification of genetic networks from a small number of gene expression patterns under the Boolean network model. In Pacific Symposium on Biocomputing, volume 4, pages 17–28. World Scientific, Maui, Hawaii.
[2]. Al-Enezi, J., Abbod, M., and Alsharhan, S. (2010). Artificial immune systems: models, algorithms and applications. International Journal.
[3]. Bansal, M., Della Gatta, G., and Di Bernardo, D. (2006). Inference of gene regulatory networks and compound mode of action from time course gene expression profiles. Bioinformatics, 22(7), 815–822.
[4]. Burnet, F. (2008). A modification of Jerne's theory of antibody production using the concept of clonal selection. CA: A Cancer Journal for Clinicians, 26(2), 119–121.
[5]. Cho, D., Cho, K., and Zhang, B. (2006). Identification of biochemical networks by S-tree based genetic programming. Bioinformatics, 22(13), 1631–1640.
[6]. De Castro, L. and Von Zuben, F. (2002). Learning and optimization using the clonal selection principle. IEEE Transactions on Evolutionary Computation, 6(3), 239–251.
[7]. Dubitzky, W., Granzow, M., Downes, C., and Berrar, D. (2003). Introduction to microarray data analysis. A Practical Approach to Microarray Data Analysis, pages 1–46.
[8]. DSouza, R., Sekaran, K., and Kandasamy, A. (2012a). A multiobjective phenomic algorithm for inference of gene networks. Bio-Inspired Models of Network, Information, and Computing Systems, pages 440–451.
[9]. DSouza, R., Sekaran, K., and Kandasamy, A. (2012b). A phenomic algorithm for inference of gene networks using S-systems and memetic search. Bio-Inspired Models of Network, Information, and Computing Systems, pages 229–237.
[10]. Hsiao, Y. and Lee, W. (2012). Inferring robust gene networks from expression data by a sensitivity-based incremental evolution method. BMC Bioinformatics, 13, 1–21.
[11]. Huang, H., Chen, K., Ho, S., and Ho, S. (2008). Inferring S-system models of genetic networks from a time-series real data set of gene expression profiles. In IEEE Congress on Evolutionary Computation, 2008 (CEC 2008, IEEE World Congress on Computational Intelligence), pages 2788–2793. IEEE.
[12]. Kabir, M., Noman, N., and Iba, H. (2010). Reverse engineering gene regulatory network from microarray data using linear time-variant model. BMC Bioinformatics, 11(Suppl 1), S56.
[13]. Kikuchi, S., Tominaga, D., Arita, M., Takahashi, K., and Tomita, M. (2003). Dynamic modeling of genetic networks using genetic algorithm and S-system. Bioinformatics, 19(5), 643–650.
[14]. Kimura, S., Ide, K., Kashihara, A., Kano, M., Hatakeyama, M., Masui, R., Nakagawa, N., Yokoyama, S., Kuramitsu, S., and Konagaya, A. (2005). Inference of S-system models of genetic networks using a cooperative coevolutionary algorithm. Bioinformatics, 21(7), 1154–1163.
[15]. Kimura, S., Nakayama, S., and Hatakeyama, M. (2009). Genetic network inference as a series of discrimination tasks. Bioinformatics, 25(7), 918–925.
[16]. Kimura, S., Sonoda, K., Yamane, S., Maeda, H., Matsumura, K., and Hatakeyama, M. (2008). Function approximation approach to the inference of reduced NGnet models of genetic networks. BMC Bioinformatics, 9(1), 23.
[17]. Narayanan, A., Keedwell, E., Gamalielsson, J., and Tatineni, S. (2004). Single-layer artificial neural networks for gene expression analysis. Neurocomputing, 61, 217–240.
[18]. Noman, N. and Iba, H. (2007). Inferring gene regulatory networks using differential evolution with local search heuristics. IEEE/ACM Transactions on Computational Biology and Bioinformatics (TCBB), 4(4), 634–647.
[19]. Perrin, B., Ralaivola, L., Mazurie, A., Bottani, S., Mallet, J., and d'Alché-Buc, F. (2003). Gene networks inference using dynamic Bayesian networks. Bioinformatics, 19(suppl 2), ii138–ii148.
[20]. Savageau, M. (1991). 20 years of S-systems. In Canonical Nonlinear Modeling: S-systems Approach to Understanding Complexity, pages 1–44.
[21]. Spieth, C., Streichert, F., Speer, N., and Zell, A. (2004). A memetic inference method for gene regulatory networks based on S-systems. In Congress on Evolutionary Computation, 2004 (CEC 2004), volume 1, pages 152–157. IEEE.
[22]. Sutton, M., Smith, B., Godoy, V., and Walker, G. (2000). The SOS response: recent insights into UmuDC-dependent mutagenesis and DNA damage tolerance. Annual Review of Genetics, 34(1), 479–497.
[23]. Tominaga, D., Koga, N., and Okamoto, N. (2000). Efficient numerical optimization algorithm based on genetic algorithm for inverse problem. In Proceedings of the Genetic and Evolutionary Computation Conference, pages 251–258.
[24]. Wikipedia (2012). Epistasis and functional genomics. Wikipedia, the free encyclopedia. [Online; accessed 22-June-2012].
[25]. Zhang, B. and Hwang, K. (2003). Bayesian network classifiers for gene expression analysis. A Practical Approach to Microarray Data Analysis, pages 150–165.
BIOGRAPHIES
Jereesh A S received the Bachelor's degree in
Computer Science and Engineering from the
Rajiv Gandhi Institute of Technology, Kottayam,
in 2007 and the Master's degree in Computer
Science and Engineering (Information Security)
from the National Institute of Technology
Calicut in 2010. He is currently a research scholar pursuing
the Ph.D. degree in the Department of Computer Science and
Engineering at the National Institute of Technology Calicut.
His research interests include bioinformatics, data mining and
evolutionary algorithms.
V K Govindan received the Bachelor's and
Master's degrees in Electrical Engineering from
the National Institute of Technology Calicut in
1975 and 1978, respectively. He was awarded
the PhD in character recognition by the Indian
Institute of Science, Bangalore, in 1989. His
research areas include image processing, pattern recognition,
data compression, document imaging and operating systems.
He has more than 125 research publications in international
journals and conferences, and has authored ten books. He has
produced seven PhDs and reviewed papers for many journals
and conferences. He has more than 34 years of teaching
experience at UG and PG levels, and he was Professor and
Head of the Department of Computer Science and
Engineering, NIT Calicut, during 2000 to 2005. He is
currently working as Professor in the Department of
Computer Science and Engineering, and Dean Academic, at
the National Institute of Technology Calicut, India.