Conventional classification algorithms can be limited in their performance on highly imbalanced data sets. A popular stream of work for countering the effects of class imbalance has been the application of a variety of sampling strategies. In this work, we focus on designing modifications to neural networks to appropriately handle the problem of class imbalance. We incorporate various rebalancing heuristics into ANN modeling, including cost-sensitive learning and over- and under-sampling. These ANN-based methods are compared with various state-of-the-art approaches on a variety of data sets using multiple metrics, including G-mean, area under the receiver operating characteristic curve, F-measure, and area under the precision-recall curve. Many conventional methods, which can be categorized into sampling, cost-sensitive, or ensemble approaches, involve heuristic and task-dependent procedures. To achieve better classification performance through a formulation free of heuristics and task dependence, we propose an RBF-based network (RBF-NN). Its objective function is the harmonic mean of multiple evaluation criteria derived from a confusion matrix, such criteria as sensitivity, positive predictive value, and the corresponding criteria for negatives. This objective function and its optimization are consistently formulated in the framework of CM-KLOGR, based on minimum classification error and generalized probabilistic descent (MCE/GPD) learning. Owing to the benefits of the harmonic mean, CM-KLOGR, and MCE/GPD, RBF-NN improves the multiple performance criteria in a well-balanced way. We present the formulation of RBF-NN and demonstrate its effectiveness through experiments that comparatively evaluated RBF-NN on benchmark imbalanced datasets. Nitesh Kumar | Dr. Shailja Sharma "Adaptive Classification of Imbalanced Data using ANN with Particle of Swarm Optimization" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3 | Issue-5, August 2019, URL: https://www.ijtsrd.com/papers/ijtsrd25255.pdf
Paper URL: https://www.ijtsrd.com/computer-science/other/25255/adaptive-classification-of-imbalanced-data-using-ann-with-particle-of-swarm-optimization/nitesh-kumar
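The harmonic-mean objective described above can be gestured at with a small sketch. This is not the authors' CM-KLOGR formulation; it only shows, in plain Python, how G-mean and a harmonic mean of confusion-matrix criteria (sensitivity, specificity, PPV, NPV) might be computed from binary predictions:

```python
import math

def confusion_counts(y_true, y_pred, positive=1):
    """Return (tp, fp, tn, fn) for binary label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    return tp, fp, tn, fn

def g_mean(y_true, y_pred):
    """Geometric mean of sensitivity and specificity."""
    tp, fp, tn, fn = confusion_counts(y_true, y_pred)
    sens = tp / (tp + fn) if tp + fn else 0.0
    spec = tn / (tn + fp) if tn + fp else 0.0
    return math.sqrt(sens * spec)

def harmonic_mean_objective(y_true, y_pred):
    """Harmonic mean of sensitivity, specificity, PPV and NPV."""
    tp, fp, tn, fn = confusion_counts(y_true, y_pred)
    crits = []
    for num, den in ((tp, tp + fn), (tn, tn + fp), (tp, tp + fp), (tn, tn + fn)):
        crits.append(num / den if den else 0.0)
    if any(c == 0.0 for c in crits):
        return 0.0  # harmonic mean collapses if any criterion is zero
    return len(crits) / sum(1.0 / c for c in crits)
```

Because the harmonic mean is dominated by its smallest term, optimizing it discourages the degenerate majority-class-only classifier that plain accuracy rewards on imbalanced data.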
Analysis of Imbalanced Classification Algorithms: A Perspective View (ijtsrd)
Classification of data, the process of assigning documents to predefined categories, has become an important research area. Unbalanced data sets, a problem often found in real-world applications, can have a seriously negative effect on the classification performance of machine learning algorithms. There have been many attempts at dealing with classification of unbalanced data sets. In this paper we present a brief review of existing solutions to the class-imbalance problem proposed at both the data and algorithmic levels. Even though a common practice for handling imbalanced data is to rebalance it artificially by over-sampling and/or under-sampling, some researchers have shown that modified support vector machines, rough-set-based minority-class-oriented rule learning methods, and cost-sensitive classifiers perform well on imbalanced data sets. We observed that current research on the imbalanced-data problem is moving toward hybrid algorithms. Priyanka Singh | Prof. Avinash Sharma "Analysis of Imbalanced Classification Algorithms: A Perspective View" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3 | Issue-2, February 2019, URL: https://www.ijtsrd.com/papers/ijtsrd21574.pdf
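The rebalancing practice mentioned above (over-sampling the minority class or under-sampling the majority) can be sketched in a few lines. This is an illustrative plain-Python resampler, not any specific method from the surveyed papers:

```python
import random

def rebalance(X, y, mode="over", seed=0):
    """Randomly over-sample minority classes or under-sample majority
    classes so that every class ends up with the same number of rows."""
    rng = random.Random(seed)
    by_class = {}
    for xi, yi in zip(X, y):
        by_class.setdefault(yi, []).append(xi)
    sizes = [len(v) for v in by_class.values()]
    target = max(sizes) if mode == "over" else min(sizes)
    Xb, yb = [], []
    for c, rows in by_class.items():
        if len(rows) >= target:
            rows = rng.sample(rows, target)  # under-sample without replacement
        else:
            rows = rows + rng.choices(rows, k=target - len(rows))  # duplicate minority rows
        Xb.extend(rows)
        yb.extend([c] * target)
    return Xb, yb
```

Under-sampling discards majority information while over-sampling risks overfitting duplicated minority rows, which is why the review also covers algorithm-level alternatives such as cost-sensitive classifiers.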
Paper URL: https://www.ijtsrd.com/engineering/computer-engineering/21574/analysis-of-imbalanced-classification-algorithms-a-perspective-view/priyanka-singh
Improving Prediction Accuracy Results by Using Q-Statistic Algorithm in High ... (rahulmonikasharma)
Classification problems in high-dimensional data with a small number of observations have become more common, particularly in microarray data. The increasing amount of text data on internet sites also affects clustering analysis: text clustering is a useful analysis technique for partitioning a huge amount of data into clusters, and a major drawback affecting it is the presence of uninformative and sparse features in text documents. A broad class of boosting algorithms can be viewed as performing coordinate-wise gradient descent to minimize some potential function of the margins of a data set. This paper proposes a novel evaluation measure, the Q-statistic, that incorporates the stability of the selected feature set in addition to the prediction accuracy. We then propose the Booster of an FS algorithm, which enhances the value of the Q-statistic of the algorithm applied.
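The abstract combines selection stability with prediction accuracy in one measure. The paper defines the Q-statistic precisely; the pairwise-Jaccard stability and harmonic combination below are assumptions made only for this sketch:

```python
from itertools import combinations

def subset_stability(subsets):
    """Mean pairwise Jaccard similarity of feature subsets chosen on
    different training samples (1.0 = identical selection every time)."""
    pairs = list(combinations(subsets, 2))
    if not pairs:
        return 1.0
    sims = [len(a & b) / len(a | b) for a, b in pairs]
    return sum(sims) / len(sims)

def q_statistic(accuracy, subsets):
    """Illustrative accuracy/stability trade-off: harmonic mean of the
    mean prediction accuracy and the selection stability."""
    s = subset_stability(subsets)
    if accuracy + s == 0:
        return 0.0
    return 2 * accuracy * s / (accuracy + s)
```

A feature selector that reaches high accuracy but picks a different gene subset on every resample scores poorly here, which is the instability the paper targets.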
Controlling informative features for improved accuracy and faster predictions... (Damian R. Mingle, MBA)
Identification of suitable biomarkers for accurate prediction of phenotypic outcomes is a goal for personalized medicine. However, current machine learning approaches are either too complex or perform poorly.
For more information:
http://societyofdatascientists.com/controlling-informative-features-for-improved-accuracy-and-faster-predictions-in-omentum-cancer-models/?src=slideshare
USING NLP APPROACH FOR ANALYZING CUSTOMER REVIEWS (csandit)
The Web is considered one of the main sources of customer opinions and reviews, which are represented in two formats: structured data (numeric ratings) and unstructured data (textual comments). Millions of textual comments about goods and services are posted on the web by customers, and thousands are added every day, making it a big challenge to read and understand them and turn them into useful structured data for customers and decision makers. Sentiment analysis, or opinion mining, is a popular technique for summarizing and analyzing those opinions and reviews. In this paper, we use natural language processing techniques to generate rules that help us understand customer opinions and reviews (textual comments) written in the Arabic language, with the purpose of understanding each one and converting it into structured data. We use adjectives as a key point to highlight important information in the text; we then work around them to tag attributes that describe the subject of the reviews, and we associate those attributes with their values (adjectives).
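The adjective-centred attribute tagging idea can be sketched with a toy English lexicon. The paper works on Arabic text with real NLP tooling; the word lists and nearest-adjective pairing rule below are illustrative assumptions only:

```python
# Toy lexicons; a real system would use a POS tagger and Arabic resources.
ADJECTIVES = {"excellent", "slow", "clean", "expensive"}
ATTRIBUTES = {"service", "room", "price", "wifi"}

def extract_pairs(review):
    """Pair each known attribute word with the nearest known adjective,
    yielding structured attribute -> value records from free text."""
    tokens = review.lower().replace(",", " ").replace(".", " ").split()
    adj_pos = [(i, t) for i, t in enumerate(tokens) if t in ADJECTIVES]
    pairs = {}
    for i, tok in enumerate(tokens):
        if tok in ATTRIBUTES and adj_pos:
            _, adj = min(adj_pos, key=lambda p: abs(p[0] - i))
            pairs[tok] = adj
    return pairs
```

For example, "the room was clean but the wifi was slow" yields the structured record `{"room": "clean", "wifi": "slow"}`, the kind of attribute/value output the abstract describes.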
Drug discovery and development is a long and expensive process that has so notoriously bucked Moore's law that the opposite trend now has its own name, Eroom's Law (Moore's spelled backwards). It is estimated that the attrition rate of drug candidates is up to 96%, and the average cost to develop a new drug has reached almost $2.5 billion in recent years. One of the major causes of the high attrition rate is drug safety, which accounts for 30% of the failures.
Even if a drug is approved for market, it can be withdrawn due to safety problems. Therefore, evaluating drug safety extensively and as early as possible is paramount in accelerating drug discovery and development. This talk provides a high-level overview of the rational drug design process that has been in place for many decades and covers the major areas where the application of AI, deep learning, and ML-based techniques has had the most gains.
Specifically, this talk covers a variety of drug-safety-related AI and ML techniques currently in use, which can generally be divided into 3 main categories:
1. Discovery,
2. Toxicity and Safety, and
3. Post-Market Monitoring.
We will address recent progress in predictive models and techniques built for various toxicities, and cover some publicly available databases, tools, and platforms that make them easy to leverage.
We will also compare and contrast various modeling techniques, including deep learning, and their accuracy, drawing on recent research. Finally, the talk will address some of the remaining challenges and limitations yet to be addressed in the area of drug discovery and safety assessment.
Classification problems in high-dimensional data with a small number of observations are becoming increasingly common, particularly in microarray data. Over the last two decades, many efficient classification models and Feature Selection (FS) algorithms have been proposed to achieve higher prediction accuracy. However, the prediction accuracy obtained with an FS algorithm can be unstable over variations in the training set, especially in high-dimensional data. In this paper we present a new evaluation measure, the Q-statistic, that incorporates the stability of the selected feature subset in addition to the prediction accuracy. We then propose Booster, a wrapper around an FS algorithm that boosts the value of the Q-statistic of the algorithm applied. A study on synthetic data and 14 microarray data sets shows that Booster improves not only the value of the Q-statistic but also the prediction accuracy of the algorithm applied.
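The Booster idea, running a given FS algorithm on several resamples of the training data and combining the selections, can be roughly sketched as follows. The union combination rule and bootstrap resampling here are assumptions for illustration, not necessarily the paper's exact procedure:

```python
import random

def booster(select, X, y, b=5, seed=0):
    """Run a feature-selection function `select(X, y) -> set of feature
    indices` on b bootstrap resamples and return the union of the
    selected subsets (the 'boosted' selection)."""
    rng = random.Random(seed)
    n = len(X)
    boosted = set()
    for _ in range(b):
        idx = [rng.randrange(n) for _ in range(n)]  # bootstrap sample
        boosted |= set(select([X[i] for i in idx], [y[i] for i in idx]))
    return boosted
```

Pooling selections across resamples makes the final subset less sensitive to any single draw of the training set, which is exactly the instability the Q-statistic penalizes.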
Accounting for variance in machine learning benchmarks (Devansh16)
Accounting for Variance in Machine Learning Benchmarks
Xavier Bouthillier, Pierre Delaunay, Mirko Bronzi, Assya Trofimov, Brennan Nichyporuk, Justin Szeto, Naz Sepah, Edward Raff, Kanika Madan, Vikram Voleti, Samira Ebrahimi Kahou, Vincent Michalski, Dmitriy Serdyuk, Tal Arbel, Chris Pal, Gaël Varoquaux, Pascal Vincent
Strong empirical evidence that one machine-learning algorithm A outperforms another B ideally calls for multiple trials optimizing the learning pipeline over sources of variation such as data sampling, data augmentation, parameter initialization, and hyperparameter choices. This is prohibitively expensive, and corners are cut to reach conclusions. We model the whole benchmarking process, revealing that variance due to data sampling, parameter initialization, and hyperparameter choice markedly impacts the results. We analyze the predominant comparison methods used today in light of this variance. We show the counter-intuitive result that adding more sources of variation to an imperfect estimator brings it closer to the ideal estimator, at a 51-times reduction in compute cost. Building on these results, we study the error rate of detecting improvements on five different deep-learning tasks/architectures. This study leads us to propose recommendations for performance comparisons.
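Reporting variance over sources of variation rather than a single score can be sketched minimally. The two-standard-error decision rule below is a crude stand-in, not the paper's estimator:

```python
import statistics

def benchmark(train_and_score, seeds):
    """Score a pipeline under several sources of variation (here just
    random seeds) and report mean and standard deviation instead of a
    single number."""
    scores = [train_and_score(s) for s in seeds]
    return statistics.mean(scores), statistics.pstdev(scores)

def significant_gain(mean_a, sd_a, mean_b, sd_b, n):
    """Crude check: does A beat B by more than two standard errors of
    the difference? (A stand-in for a proper statistical test.)"""
    se = ((sd_a ** 2 + sd_b ** 2) / n) ** 0.5
    return (mean_a - mean_b) > 2 * se
```

A 1-point gain that sits inside the seed-to-seed spread should not be claimed as an improvement, which is the paper's central warning.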
Using ID3 Decision Tree Algorithm to the Student Grade Analysis and Prediction (ijtsrd)
Data mining techniques play an important role in data analysis. In this research, a decision tree algorithm from the data mining toolbox has been used to construct a classification model that predicts the performance of students, particularly in engineering branches. A number of factors may affect the performance of students, and data mining technology relates well to student grades, so we applied classification algorithms for prediction. In this paper, we used educational data mining to predict students' final grades based on their performance, and we propose student data classification using the ID3 (Iterative Dichotomiser 3) decision tree algorithm. Khin Khin Lay | San San Nwe "Using ID3 Decision Tree Algorithm to the Student Grade Analysis and Prediction" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3 | Issue-5, August 2019, URL: https://www.ijtsrd.com/papers/ijtsrd26545.pdf
Paper URL: https://www.ijtsrd.com/computer-science/data-miining/26545/using-id3-decision-tree-algorithm-to-the-student-grade-analysis-and-prediction/khin-khin-lay
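ID3 picks, at each node, the attribute with the highest information gain. A minimal sketch of that splitting criterion on toy student records (the attributes and labels are invented for illustration):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """Entropy reduction from splitting the rows on attribute attr."""
    remainder = 0.0
    for value in set(r[attr] for r in rows):
        sub = [l for r, l in zip(rows, labels) if r[attr] == value]
        remainder += len(sub) / len(rows) * entropy(sub)
    return entropy(labels) - remainder

def best_attribute(rows, labels):
    """The ID3 split choice: the attribute with maximal information gain."""
    return max(rows[0], key=lambda a: information_gain(rows, labels, a))

# Toy student records (invented for illustration)
rows = [{"study": "high", "sleep": "low"}, {"study": "high", "sleep": "high"},
        {"study": "low", "sleep": "low"}, {"study": "low", "sleep": "high"}]
labels = ["pass", "pass", "fail", "fail"]
```

On these rows, splitting on "study" separates the classes perfectly (gain of 1 bit) while "sleep" gains nothing, so ID3 would split on "study" first.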
We conducted a comparative analysis of different supervised dimension-reduction techniques by integrating a set of different data-splitting algorithms, and we demonstrate how the relative efficacy of learning algorithms depends on sample complexity. The issue of sample complexity is discussed in terms of its dependence on the data-splitting algorithm. In line with expectations, every supervised learning classifier demonstrated different capability under different data-splitting algorithms, and no way to compute an overall ranking of techniques was directly available. We therefore focused on how classifier rankings depend on the data-splitting algorithm and devised a model built on weighted average ranks, the Weighted Mean Rank Risk Adjusted Model (WMRRAM), for consensus ranking of learning classifier algorithms.
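The weighted-average-rank core of such a consensus ranking can be sketched briefly. The weights and rank data below are invented, and the actual risk adjustment in WMRRAM is not shown:

```python
def weighted_mean_rank(ranks, weights):
    """ranks[c] lists classifier c's rank under each splitting scheme
    (1 = best); weights gives each scheme's importance. Lower weighted
    mean rank is better."""
    wsum = sum(weights)
    return {c: sum(r * w for r, w in zip(rs, weights)) / wsum
            for c, rs in ranks.items()}

def overall_ranking(ranks, weights):
    """Consensus ordering of classifiers, best first."""
    scores = weighted_mean_rank(ranks, weights)
    return sorted(scores, key=scores.get)
```

With ranks `{"svm": [1, 2], "knn": [2, 1]}` and scheme weights `[2, 1]`, the heavier-weighted scheme breaks the tie in favour of "svm".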
Improving the effectiveness of information retrieval system using adaptive ge... (ijcsit)
The traditional genetic algorithm used in previous studies depends on fixed control parameters, especially the crossover and mutation probabilities; in this research we instead use an adaptive genetic algorithm.
Genetic algorithms have been applied in information retrieval systems to optimize the query. A good query is a set of terms that expresses the information need accurately while being usable within the collection corpus; the latter part of this specification is critical for the matching process to be efficient, which is why most research effort goes toward query improvement.
We investigated the use of an adaptive genetic algorithm (AGA) under the vector space model, the extended Boolean model, and the language model in information retrieval (IR). The algorithm uses crossover and mutation operators with variable probabilities, whereas a traditional genetic algorithm (GA) uses fixed values that remain unchanged during execution. The GA is extended to support adaptive adjustment of the mutation and crossover probabilities, which allows faster attainment of better solutions. The approach was tested using 242 Arabic abstracts collected from the proceedings of the Saudi Arabian National Conference.
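One widely cited adaptation rule (in the style of Srinivas and Patnaik) lowers the crossover and mutation probabilities for above-average individuals so they are preserved, while below-average individuals keep the high rates and are explored. Whether this matches the paper's exact AGA is an assumption; the sketch only illustrates variable-probability operators:

```python
def adaptive_rates(f, f_max, f_avg, pc_hi=0.9, pm_hi=0.1):
    """Return (crossover probability, mutation probability) for an
    individual with fitness f, given the population's max and average
    fitness. Above-average individuals get scaled-down rates."""
    if f_max == f_avg:           # degenerate population: keep exploring
        return pc_hi, pm_hi
    if f >= f_avg:
        scale = (f_max - f) / (f_max - f_avg)
        return pc_hi * scale, pm_hi * scale
    return pc_hi, pm_hi
```

The best individual (f equal to f_max) gets probability 0 and so survives unchanged, while weak query candidates are aggressively recombined and mutated.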
The development of data mining is inseparable from recent developments in information technology that enable the accumulation of large amounts of data. Consider, for example, a shopping mall that records every sales transaction through point-of-sale (POS) terminals. The database of these sales can reach a large storage capacity, with more added each day, especially when the shopping center grows into a nationwide network. The growth of the internet has also contributed substantially to this accumulation of data. But the rapid growth of data has created conditions often referred to as "data rich but information poor", because the collected data cannot be used optimally for useful applications; not infrequently, the data set is simply left as a "data grave". There are several techniques used in data mining, including association, classification, and clustering. In this paper, the author compares the performance of two classification methods: naïve Bayes and the C4.5 algorithm.
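A minimal categorical naïve Bayes of the kind being compared can be sketched as follows. The add-one smoothing denominator is a simplification, and this sketch is not the paper's implementation:

```python
import math
from collections import Counter, defaultdict

class CategoricalNB:
    """Minimal naive Bayes for categorical features with add-one smoothing."""

    def fit(self, X, y):
        self.class_counts = Counter(y)
        self.n = len(y)
        # (class, feature index) -> Counter of observed feature values
        self.value_counts = defaultdict(Counter)
        for row, label in zip(X, y):
            for i, v in enumerate(row):
                self.value_counts[(label, i)][v] += 1
        return self

    def predict_one(self, row):
        best, best_logp = None, float("-inf")
        for c, nc in self.class_counts.items():
            logp = math.log(nc / self.n)  # log prior
            for i, v in enumerate(row):
                counts = self.value_counts[(c, i)]
                # add-one smoothing; the extra +1 in the denominator
                # stands in for values never seen with this class
                logp += math.log((counts[v] + 1) / (nc + len(counts) + 1))
            if logp > best_logp:
                best, best_logp = c, logp
        return best
```

C4.5, by contrast, builds an explicit decision tree; naïve Bayes assumes feature independence given the class, which is what the paper's comparison probes.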
The International Journal of Engineering and Science (The IJES) (theijes)
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
A Survey on Various Disease Prediction Techniques (ijtsrd)
Various diseases have been predicted using multiple data mining and text mining techniques. In this article we discuss six prediction techniques. Using gene expression patterns we predict disease outcomes, and we describe the implementation of a pathway-based approach for classifying disease based on hyper-box principles. We also present a novel hybrid prediction model with missing-value imputation (HPM-MI), which performs imputation using simple k-means clustering. A technique based on CCAR (Constraint Class Association Rules) has been used to reduce the time consumed in predicting a particular disease. We also discuss text mining techniques and their applications, as well as a technique for predicting hypertriglyceridemia from anthropometric measures, which diverge according to age and gender. Using multilayer classifiers for disease prediction, we can achieve high diagnostic accuracy and high performance. C. Leancy Jannet | G. V. Sumalatha "A Survey on Various Disease Prediction Techniques" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-2 | Issue-6, October 2018, URL: http://www.ijtsrd.com/papers/ijtsrd18624.pdf
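The clustering-based imputation step in HPM-MI can be gestured at with a sketch: given cluster assignments (which real code would obtain from k-means), fill each missing entry with the within-cluster column mean. The fallback to 0.0 and other details are assumptions for the sketch, not the paper's exact procedure:

```python
def column_mean(values):
    """Mean of the observed (non-None) values, or 0.0 if none observed."""
    vals = [v for v in values if v is not None]
    return sum(vals) / len(vals) if vals else 0.0

def impute_by_cluster_mean(rows, assignments):
    """Replace None entries with the mean of the same column among rows
    in the same cluster."""
    n_cols = len(rows[0])
    filled = [list(r) for r in rows]
    for c in set(assignments):
        members = [r for r, a in zip(rows, assignments) if a == c]
        for j in range(n_cols):
            mean = column_mean([r[j] for r in members])
            for r, a in zip(filled, assignments):
                if a == c and r[j] is None:
                    r[j] = mean
    return filled
```

Imputing within a cluster rather than globally keeps the filled values consistent with similar patients, which is the rationale for pairing imputation with k-means.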
In this research, a hybrid wrapper model is proposed to identify the featured gene subset from gene expression data. To balance exploration and exploitation, a hybrid of a popular meta-heuristic, the spider monkey optimizer (SMO), and simulated annealing (SA) is applied. In the proposed model, ReliefF is used as a filter to obtain the relevant gene subset from the dataset by removing noise and outliers before feeding the data to the SMO wrapper. To enhance the quality of the solution, simulated annealing is deployed as a local search alongside the SMO in a second phase, guiding the search toward the most optimal feature subset. To evaluate the performance of the proposed model, a support vector machine (SVM) is used as the fitness function to recognize the most informative biomarker genes from the cancer datasets, along with University of California, Irvine (UCI) datasets. To further evaluate the model, four different classifiers (SVM, naïve Bayes (NB), decision tree (DT), and k-nearest neighbors (KNN)) are used. From the experimental results and analysis, it is noteworthy that ReliefF-SMO-SA-SVM performs relatively better than its state-of-the-art counterparts. For cancer datasets, our model performs better in terms of accuracy, with a maximum of 99.45%.
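The simulated-annealing local-search phase can be sketched over binary feature masks. In the paper the fitness is an SVM score; here it is left as a user-supplied callable, and the schedule parameters are invented for illustration:

```python
import math
import random

def sa_feature_search(fitness, n_features, steps=200, t0=1.0, cool=0.95, seed=0):
    """Simulated-annealing search over binary feature masks: flip one bit
    per step, always accept improvements, accept worse moves with
    probability exp(delta / T), and cool the temperature each step."""
    rng = random.Random(seed)
    mask = [rng.random() < 0.5 for _ in range(n_features)]
    best, best_f = list(mask), fitness(mask)
    cur_f, t = best_f, t0
    for _ in range(steps):
        j = rng.randrange(n_features)
        mask[j] = not mask[j]                      # propose a single-bit flip
        f = fitness(mask)
        if f >= cur_f or rng.random() < math.exp((f - cur_f) / t):
            cur_f = f                              # accept the move
            if f > best_f:
                best, best_f = list(mask), f
        else:
            mask[j] = not mask[j]                  # reject: undo the flip
        t *= cool
    return best, best_f
```

Early, high-temperature steps accept some worsening flips (exploration); as T decays the search becomes greedy hill-climbing (exploitation), which is the balance the hybrid model is after.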
AN EFFICIENT PSO BASED ENSEMBLE CLASSIFICATION MODEL ON HIGH DIMENSIONAL DATA... (ijsc)
As the size of biomedical databases grows day by day, finding the essential features for disease prediction has become more complex due to high dimensionality and sparsity. Also, given the availability of a large number of microarray datasets in biomedical repositories, it is difficult to analyze, predict, and interpret feature information using traditional feature-selection-based classification models. Most traditional feature-selection-based classification algorithms have computational issues such as dimension reduction, uncertainty, and class imbalance on microarray datasets. The ensemble classifier is one of the scalable models for extreme learning machines due to its high efficiency and fast processing speed in real-time applications. The main objective of feature-selection-based ensemble learning models is to classify high-dimensional data with high computational efficiency and a high true-positive rate. In this work, an optimized particle swarm optimization (PSO) based ensemble classification model was developed for high-dimensional microarray datasets. Experimental results show that the proposed model has high computational efficiency compared to traditional feature-selection-based classification models as far as accuracy, true-positive rate, and error rate are concerned.
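A minimal binary PSO for feature masks can be sketched with the standard sigmoid transfer function. The inertia and acceleration constants are generic textbook values, the fitness is a user-supplied callable, and the paper's ensemble stage is not shown:

```python
import math
import random

def binary_pso(fitness, n_features, n_particles=10, iters=30, seed=0):
    """Minimal binary PSO: velocities are real-valued; positions are bit
    vectors resampled through a sigmoid of the velocity each iteration."""
    rng = random.Random(seed)
    sig = lambda v: 1.0 / (1.0 + math.exp(-v))
    pos = [[rng.random() < 0.5 for _ in range(n_features)]
           for _ in range(n_particles)]
    vel = [[0.0] * n_features for _ in range(n_particles)]
    pbest = [list(p) for p in pos]                  # per-particle best masks
    pbest_f = [fitness(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = list(pbest[g]), pbest_f[g]     # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for j in range(n_features):
                vel[i][j] = (0.7 * vel[i][j]
                             + 1.5 * rng.random() * (pbest[i][j] - pos[i][j])
                             + 1.5 * rng.random() * (gbest[j] - pos[i][j]))
                pos[i][j] = rng.random() < sig(vel[i][j])
            f = fitness(pos[i])
            if f > pbest_f[i]:
                pbest[i], pbest_f[i] = list(pos[i]), f
                if f > gbest_f:
                    gbest, gbest_f = list(pos[i]), f
    return gbest, gbest_f
```

Each particle is pulled toward its own best mask and the swarm's best mask; on high-dimensional microarray data the fitness would typically be a classifier's validation score on the selected features.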
Drug discovery and development is a long and expensive process and over time has notoriously bucked Moore’s law that it now has its own law called Eroom’s Law named after it (the opposite of Moore’s). It is estimated that the attrition rate of drug candidates is up to 96% and the average cost to develop a new drug has reached almost $2.5 billion in recent years. One of the major causes for the high attrition rate is drug safety, which accounts for 30% of the failures.
Even if a drug is approved in market, it could be withdrawn due to safety problems. Therefore, evaluating drug safety extensively as early as possible is paramount in accelerating drug discovery and development. This talk provides a high-level overview of the current process of rational drug design that has been in place for many decades and covers some of the major areas where the application of AI, Deep learning and ML based techniques have had the most gains.
Specifically, this talk covers a variety of drug safety related AI and ML based techniques currently in use which can generally divided into 3 main categories:
1. Discovery,
2. Toxicity and Safety, and
3. Post-Market Monitoring.
We will address the recent progress in predictive models and techniques built for various toxicities. It will also cover some publicly available databases, tools and platforms available to easily leverage them.
We will also compare and contrast various modeling techniques including deep learning techniques and their accuracy using recent research. Finally, the talk will address some of the remaining challenges and limitations yet to be addressed in the area of drug discovery and safety assessment.
Classification problems specified in high dimensional data with smallnumber of observation are generally becoming common in specific microarray data. In the time of last two periods of years, manyefficient classification standard models and also Feature Selection (FS) algorithm which isalso referred as FS technique have basically been proposed for higher prediction accuracies. Although, the outcome of FS algorithm related to predicting accuracy is going to be unstable over the variations in considered trainingset, in high dimensional data. In this paperwe present a latest evaluation measure Q-statistic that includes the stability of the selected feature subset in inclusion to prediction accuracy. Then we are going to propose the standard Booster of a FS algorithm that boosts the basic value of the preferred Q-statistic of the algorithm applied. Therefore study on synthetic data and 14 microarray data sets shows that Booster boosts not only the value of Q-statistics but also the prediction accuracy of the algorithm applied.
Accounting for variance in machine learning benchmarksDevansh16
Accounting for Variance in Machine Learning Benchmarks
Xavier Bouthillier, Pierre Delaunay, Mirko Bronzi, Assya Trofimov, Brennan Nichyporuk, Justin Szeto, Naz Sepah, Edward Raff, Kanika Madan, Vikram Voleti, Samira Ebrahimi Kahou, Vincent Michalski, Dmitriy Serdyuk, Tal Arbel, Chris Pal, Gaël Varoquaux, Pascal Vincent
Strong empirical evidence that one machine-learning algorithm A outperforms another one B ideally calls for multiple trials optimizing the learning pipeline over sources of variation such as data sampling, data augmentation, parameter initialization, and hyperparameters choices. This is prohibitively expensive, and corners are cut to reach conclusions. We model the whole benchmarking process, revealing that variance due to data sampling, parameter initialization and hyperparameter choice impact markedly the results. We analyze the predominant comparison methods used today in the light of this variance. We show a counter-intuitive result that adding more sources of variation to an imperfect estimator approaches better the ideal estimator at a 51 times reduction in compute cost. Building on these results, we study the error rate of detecting improvements, on five different deep-learning tasks/architectures. This study leads us to propose recommendations for performance comparisons.
Using ID3 Decision Tree Algorithm to the Student Grade Analysis and Predictionijtsrd
Data mining techniques play an important role in data analysis. For the construction of a classification model which could predict performance of students, particularly for engineering branches, a decision tree algorithm associated with the data mining techniques have been used in the research. A number of factors may affect the performance of students. Data mining technology which can related to this student grade well and we also used classification algorithms prediction. In this paper, we used educational data mining to predict students final grade based on their performance. We proposed student data classification using ID3 Iterative Dichotomiser 3 Decision Tree Algorithm Khin Khin Lay | San San Nwe "Using ID3 Decision Tree Algorithm to the Student Grade Analysis and Prediction" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3 | Issue-5 , August 2019, URL: https://www.ijtsrd.com/papers/ijtsrd26545.pdfPaper URL: https://www.ijtsrd.com/computer-science/data-miining/26545/using-id3-decision-tree-algorithm-to-the-student-grade-analysis-and-prediction/khin-khin-lay
We conducted comparative analysis of different supervised dimension reduction techniques by integrating a set of different data splitting algorithms and demonstrate the relative efficacy of learning algorithms dependence of sample complexity. The issue of sample complexity discussed in the dependence of data splitting algorithms. In line with the expectations, every supervised learning classifier demonstrated different capability for different data splitting algorithms and no way to calculate overall ranking of techniques was directly available. We specifically focused the classifier ranking dependence of data splitting algorithms and devised a model built on weighted average rank Weighted Mean Rank Risk Adjusted Model (WMRRAM) for consent ranking of learning classifier algorithms.
Improving the effectiveness of information retrieval system using adaptive ge...ijcsit
Traditional Genetic Algorithm which is used in previous studies depends on fixed control parameters
especially crossover and mutation probabilities, but in this research we tried to use adaptive genetic
algorithm.
Genetic algorithm started to be applied in information retrieval system in order to optimize the query by
genetic algorithm, a good query is a set of terms that express accurately the information need while being
usable within collection corpus, the last part of this specification is critical for the matching process to be
efficient, that is why most research efforts are actually put toward the query improvement.
We investigated the use of adaptive genetic algorithm (AGA) under vector space model, Extended Boolean
model, and Language model in information retrieval (IR), the algorithm used crossover and mutation
operators with variable probability, where a traditional genetic algorithm (GA) uses fixed values of those,
and remain unchanged during execution. GA is developed to support adaptive adjustment of mutation and
crossover probability; this allows faster attainment of better solutions. The paper has been tested using
242 Arabic abstracts collected from the proceedings of the Saudi Arabian National conference.
The development of data mining is inseparable from the recent developments in information technology that enables the accumulation of large amounts of data. For example, a shopping mall that records every sales transaction of goods using various POS (point of sales). Database data from these sales could reach a large storage capacity, even more being added each day, especially when the shopping center will develop into a nationwide network. The development of the internet at the moment also has a share large enough in the accumulation of data occurs. But the rapid growth of data accumulation it has created conditions that are often referred to as "data rich but information poor" because the data collected can not be used optimally for useful applications. Not infrequently the data set was left just seemed to be a "grave data". There are several techniques used in data mining which includes association, classification, and clustering. In this paper, the author will do a comparison between the performance of the technical classification methods naïve Bayes and C4.5 algorithms.
The International Journal of Engineering and Science (The IJES)
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
A Survey on Various Disease Prediction Techniques
Various diseases have been predicted using multiple data mining and text mining techniques. In this article we discuss six prediction techniques. Using gene expression patterns, we predict disease outcomes and describe the implementation of a pathway-based approach for classifying disease based on hyper-box principles. We also present a novel hybrid prediction model with missing-value imputation (HPM-MI), which performs imputation using simple k-means clustering. A technique based on constraint class association rules (CCAR) has been used to reduce the time consumed in predicting a particular disease. We discuss text mining techniques and their applications. Another technique studied predicts hypertriglyceridemia from anthropometric measures, which diverge according to age and gender. Using multilayer classifiers for disease prediction, we can achieve high diagnostic accuracy and high performance. C. Leancy Jannet | G. V. Sumalatha "A Survey on Various Disease Prediction Techniques" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-2 | Issue-6 , October 2018, URL: http://www.ijtsrd.com/papers/ijtsrd18624.pdf
In this research, a hybrid wrapper model is proposed to identify the featured gene subset from gene expression data. To balance exploration and exploitation, a hybrid model combining a popular meta-heuristic algorithm, the spider monkey optimizer (SMO), with simulated annealing (SA) is applied. In the proposed model, ReliefF is used as a filter to obtain the relevant gene subset from the dataset by removing noise and outliers before feeding the data to the wrapper SMO. To enhance solution quality, simulated annealing is deployed as a local search alongside the SMO in the second phase, guiding the search toward the optimal feature subset. To evaluate the performance of the proposed model, a support vector machine (SVM) is used as the fitness function to recognize the most informative biomarker genes from cancer datasets along with University of California, Irvine (UCI) datasets. For further evaluation, four different classifiers (SVM, naïve Bayes (NB), decision tree (DT), and k-nearest neighbors (KNN)) are used. The experimental results and analysis show that ReliefF-SMO-SA-SVM performs better than its state-of-the-art counterparts. For cancer datasets, our model performs better in terms of accuracy, with a maximum of 99.45%.
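The simulated-annealing local search used in the wrapper phase can be illustrated generically over feature bit-masks. The `fitness` callable here stands in for the cross-validated SVM accuracy of the real model, so this sketch shows only the search mechanics, not the ReliefF-SMO pipeline.

```python
import math, random

def simulated_annealing(n_features, fitness, t0=1.0, cooling=0.95, steps=200, seed=0):
    """Anneal over boolean feature masks; `fitness` is higher-is-better
    (e.g. cross-validated accuracy of the wrapped classifier)."""
    rng = random.Random(seed)
    cur = [rng.random() < 0.5 for _ in range(n_features)]
    cur_f = fitness(cur)
    best, best_f = cur[:], cur_f
    t = t0
    for _ in range(steps):
        cand = cur[:]
        cand[rng.randrange(n_features)] = not cand[rng.randrange(0, 1) or 0] if False else cand[0]
        # (flip exactly one feature in or out)
        i = rng.randrange(n_features)
        cand = cur[:]
        cand[i] = not cand[i]
        cand_f = fitness(cand)
        # accept improvements always, worsenings with temperature-dependent probability
        if cand_f >= cur_f or rng.random() < math.exp((cand_f - cur_f) / t):
            cur, cur_f = cand, cand_f
        if cur_f > best_f:
            best, best_f = cur[:], cur_f
        t *= cooling                      # geometric cooling schedule
    return best, best_f

# Toy fitness: how many bits match a hypothetical "ideal" gene subset.
target = [True, False, True, False, True]
fit = lambda mask: sum(m == t for m, t in zip(mask, target))
best, best_f = simulated_annealing(5, fit, steps=300)
```

In the actual model, SA refines the subsets produced by SMO rather than starting from random masks.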
AN EFFICIENT PSO BASED ENSEMBLE CLASSIFICATION MODEL ON HIGH DIMENSIONAL DATA...
As the size of biomedical databases grows day by day, finding the essential features for disease prediction has become more complex due to high dimensionality and sparsity. Also, given the availability of a large number of micro-array datasets in biomedical repositories, it is difficult to analyze, predict, and interpret feature information using traditional feature-selection-based classification models. Most traditional feature-selection-based classification algorithms have computational issues such as dimension reduction, uncertainty, and class imbalance on microarray datasets. The ensemble classifier is one of the scalable models for the extreme learning machine because of its high efficiency and fast processing speed for real-time applications. The main objective of feature-selection-based ensemble learning models is to classify high dimensional data with high computational efficiency and a high true positive rate. In the proposed model, an optimized particle swarm optimization (PSO) based ensemble classification model was developed on high dimensional microarray datasets. Experimental results show that the proposed model has higher computational efficiency than traditional feature-selection-based classification models as far as accuracy, true positive rate, and error rate are concerned.
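A minimal binary PSO for feature selection, the core ingredient of the model above, can be sketched as follows. The sigmoid-of-velocity update is one standard binarization of PSO; the inertia and acceleration coefficients here are illustrative defaults, not the paper's tuned values, and `fitness` would wrap ensemble accuracy in the real setting.

```python
import math, random

def binary_pso(n_dims, fitness, n_particles=10, iters=50, seed=1):
    """Binary PSO: each velocity component is squashed through a sigmoid to
    give the probability that the corresponding feature bit is switched on."""
    rng = random.Random(seed)
    sig = lambda v: 1.0 / (1.0 + math.exp(-v))
    pos = [[rng.random() < 0.5 for _ in range(n_dims)] for _ in range(n_particles)]
    vel = [[0.0] * n_dims for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_dims):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]                       # inertia
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])  # cognitive pull
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))    # social pull
                pos[i][d] = rng.random() < sig(vel[i][d])
            f = fitness(pos[i])
            if f > pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f > gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Toy objective: select as many features as possible (stand-in for accuracy).
fit = lambda bits: sum(bits)
gbest, gbest_f = binary_pso(6, fit, iters=30)
```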
An overview on data mining designed for imbalanced datasets
Abstract — In imbalanced datasets, the classifying categories are not approximately equally represented. The imbalanced-dataset problem occurs in classification when the number of examples of one class is much smaller than the number of examples of the other classes. Recent years have brought increased attention to applying machine learning methods to complex real-world tasks, many of which are characterized by imbalanced data. In machine learning, imbalanced datasets have become a critical problem and are commonly found in many applications, such as detection of fraudulent calls, bio-medicine, engineering, remote sensing, computer science, and manufacturing industries. Several approaches have been proposed to overcome these problems. This paper presents a study of the imbalanced-dataset problem, examines various sampling methods used in the evaluation of such datasets, and discusses which interpretation methods are most suitable for mining imbalanced datasets. Keywords: Imbalance Problems, Imbalanced datasets, sampling strategies, Machine Learning.
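Random over- and under-sampling, the two most common data-level rebalancing strategies surveyed above, can be sketched with the standard library alone. This is a generic illustration of the sampling idea, not any specific paper's procedure.

```python
import random

def rebalance(X, y, mode="over", seed=0):
    """Random over-sampling duplicates minority examples up to the majority
    count; under-sampling discards majority examples down to the minority count."""
    rng = random.Random(seed)
    by_class = {}
    for xi, yi in zip(X, y):
        by_class.setdefault(yi, []).append(xi)
    sizes = {c: len(v) for c, v in by_class.items()}
    target = max(sizes.values()) if mode == "over" else min(sizes.values())
    Xb, yb = [], []
    for c, items in by_class.items():
        if len(items) >= target:
            sample = rng.sample(items, target)                      # down-sample
        else:
            sample = items + [rng.choice(items)                     # duplicate
                              for _ in range(target - len(items))]
        Xb.extend(sample)
        yb.extend([c] * target)
    return Xb, yb

# 8-vs-2 imbalanced toy data
X = list(range(10))
y = [0] * 8 + [1] * 2
Xo, yo = rebalance(X, y, "over")     # both classes at 8
Xu, yu = rebalance(X, y, "under")    # both classes at 2
```

Over-sampling risks overfitting to duplicated minority points, while under-sampling discards information; that trade-off is exactly why hybrid and algorithm-level methods are also surveyed.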
SVM & GA-CLUSTERING BASED FEATURE SELECTION APPROACH FOR BREAST CANCER DETECTION
Breast cancer is the leading cause of mortality among women in developed countries and the second most prominent cause of cancer mortality in women worldwide. In recent decades, the prevalence of breast cancer among women has risen dramatically. This paper discusses several data analysis methods used to detect breast cancer early. Breast cancer diagnosis distinguishes benign and malignant breast lumps, and we tackled this disease analysis using data processing tools. Data mining is an important step of knowledge discovery in which intelligent methods are used to detect patterns. Several clinical breast cancer studies have been conducted using soft computing and machine learning techniques; some algorithms are simpler, faster, or more comprehensive than others. This research focuses on genetic programming and machine learning algorithms that reliably identify benign and malignant breast cancer, with the aim of optimising the testing algorithm. We used genetic programming methods to choose the classifiers' best features and parameter values. We analyse the Wisconsin breast cancer data available from the U.C.I. machine-learning repository. In this experiment, we compare four Weka clustering strategies with genetic clustering. A comparison of results reveals that sequential minimal optimization (S.M.O.) outperforms the I.B.K. and B.F. Tree methods, reaching 97.71%.
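A toy genetic algorithm over feature bit-masks, of the kind used here to choose classifier features and parameters, might look as follows. The `fitness` callable would wrap a classifier's cross-validated accuracy in the real setting; `sum` in the demo is only a stand-in objective.

```python
import random

def ga_select(n_features, fitness, pop=12, gens=30, pm=0.1, seed=2):
    """Toy GA over feature masks: tournament selection, one-point crossover,
    bit-flip mutation. Illustrative parameters, not the paper's settings."""
    rng = random.Random(seed)
    P = [[rng.random() < 0.5 for _ in range(n_features)] for _ in range(pop)]

    def tournament():
        a, b = rng.sample(P, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(gens):
        nxt = []
        while len(nxt) < pop:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_features)           # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [(not b) if rng.random() < pm else b for b in child]
            nxt.append(child)
        P = nxt
    return max(P, key=fitness)

best = ga_select(6, sum)   # stand-in fitness: number of selected features
```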
MULTI-MODEL DATA MINING APPROACH FOR HEART FAILURE PREDICTION
Developing predictive modelling solutions for risk estimation is extremely challenging in health-care informatics. Risk estimation involves integrating heterogeneous clinical sources with different representations from different health-care providers, making the task increasingly complex. Such sources are typically voluminous and diverse, and change significantly over time. Therefore, the distributed and parallel computing tools collectively termed big data tools are needed to synthesize these sources and assist the physician in making the right clinical decisions. In this work we propose a multi-model predictive architecture, a novel approach for combining the predictive ability of multiple models for better prediction accuracy. We demonstrate the effectiveness and efficiency of the proposed work on data from the Framingham Heart Study. Results show that the proposed multi-model predictive architecture provides better accuracy than the best-model approach. By modelling the error of the predictive models we are able to choose a subset of models that yields accurate results. More information was modelled into the system by multi-level mining, which resulted in enhanced predictive accuracy.
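One simple way to combine multiple models by their estimated error, in the spirit of the multi-model architecture described above (though not necessarily the authors' exact scheme), is inverse-error weighting of the models' probability outputs:

```python
def combine(predictions, errors):
    """Weight each model's probability estimate inversely to its validation
    error, then average; low-error models dominate the ensemble output."""
    weights = [1.0 / (e + 1e-9) for e in errors]   # epsilon guards against zero error
    total = sum(weights)
    return sum(w * p for w, p in zip(weights, predictions)) / total
```

For example, two models predicting risk probabilities 1.0 and 0.0 with validation errors 0.1 and 0.3 combine to 0.75, since the first model carries three times the weight. Modelling the errors also supports the subset selection mentioned above: models whose error exceeds a threshold can simply be excluded before combining.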
A class skew-insensitive ACO-based decision tree algorithm for imbalanced data
Ant-tree-miner (ATM) has an advantage over conventional decision tree algorithms in terms of feature selection. However, real-world applications commonly involve imbalanced class problems, where the classes have different importance. This condition impedes the entropy-based heuristic of the existing ATM algorithm from developing effective decision boundaries, owing to its bias towards the dominant class. Consequently, the induced decision trees are dominated by the majority class and lack predictive ability on the rare class. This study proposes an enhanced algorithm called Hellinger ant-tree-miner (HATM), inspired by the ant colony optimization (ACO) metaheuristic, for imbalanced learning with a decision tree classification algorithm. The proposed algorithm was compared to the existing algorithm, ATM, on nine (9) publicly available imbalanced data sets. The simulation study reveals the superiority of HATM as the sample size increases with skewed classes (imbalanced ratio < 50%). Experimental results demonstrate that the performance of the existing algorithm, measured by BACC, has been improved thanks to the class skew-insensitiveness of the Hellinger distance. A statistical significance test shows that HATM has a higher mean BACC score than ATM.
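The Hellinger distance that makes HATM skew-insensitive is straightforward to compute. A common decision tree usage (as in Hellinger distance decision trees, shown here as a generic illustration rather than HATM's exact heuristic) scores a binary split by the distance between the true-positive-rate and false-positive-rate distributions, which involves no class priors and therefore does not favour the majority class.

```python
import math

def hellinger(p, q):
    """Hellinger distance between two discrete distributions, in [0, 1]."""
    return math.sqrt(sum((math.sqrt(a) - math.sqrt(b)) ** 2
                         for a, b in zip(p, q))) / math.sqrt(2)

def split_score(tp, fn, fp, tn):
    """Score a candidate split from its confusion counts on the two branches:
    distance between (tpr, 1-tpr) and (fpr, 1-fpr) — class priors never appear."""
    tpr = tp / (tp + fn)
    fpr = fp / (fp + tn)
    return hellinger([tpr, 1 - tpr], [fpr, 1 - fpr])
```

A split that perfectly separates the classes scores 1.0, and an uninformative split scores 0.0, regardless of how rare the positive class is.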
An efficient feature selection algorithm for health care data analysis
Diabetes is a silent killer, which will slowly kill a person if it goes undetected. Existing systems, which use the F-score method and k-means clustering to check whether a person has diabetes, are not 100% accurate, and anything short of 100% is not acceptable in the medical field, as it could cost the lives of many people. Our proposed system aims to use some of the best features of the existing algorithms to predict diabetes; this research work combines these features into a novel algorithm intended to be 100% accurate in its prediction. With the surge in technological advancements, we can use data mining to predict when a person will be diagnosed with diabetes. Specifically, we analyze the best features of the chi-square algorithm and the advanced clustering algorithm (ACA). This research work uses the Pima Indian Diabetes dataset provided by the National Institute of Diabetes and Digestive and Kidney Diseases. Using classification theorems and methods, we consider factors such as age, BMI, and blood pressure and the overall importance given to these attributes, single these attributes out, and use them for the prediction of diabetes.
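The chi-square scoring referred to above can be illustrated for a 2x2 contingency table of attribute value versus outcome; higher scores indicate stronger association between the attribute and diabetes status. This is a generic sketch of the statistic, not the paper's exact pipeline.

```python
def chi2_score(table):
    """Chi-square statistic for a 2x2 contingency table [[a, b], [c, d]],
    e.g. rows = high/normal blood pressure, columns = diabetic/non-diabetic.
    Uses the closed form n * (ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d))."""
    (a, b), (c, d) = table
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den if den else 0.0
```

Attributes are then ranked by score and the top-ranked ones (age, BMI, blood pressure, and so on) are kept for the final classifier.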
Enhanced Detection System for Trust Aware P2P Communication Networks
A botnet is a number of computers that have been set up to forward transmissions to other computers without the knowledge of their users, and it is most important to detect such botnets. However, peer-to-peer (P2P) structured botnets are very difficult to detect because they have no centralized server. In this paper, we present a P2P infrastructure that improves the trust of the peers and their data. To identify botnets, we provide a technique called data provenance integrity. It ensures the correct origin or source of information and prevents opponents from using host resources. A reputation-based trust model is used to select the trusted peer. In this model, each peer has a reputation value calculated from its past activity. A hash table is used for efficient file searching, and the data stored in it is organised by reputation value.
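The reputation-based trust model with a hash-table file index can be sketched as follows. The update increments (+1 for a successful interaction, -2 for a failed one, so misbehaviour is penalised more heavily than good behaviour is rewarded) are illustrative choices, not values from the paper.

```python
class ReputationTable:
    """Each peer carries a reputation score updated from past interactions;
    file lookups prefer the highest-reputation peer holding the key."""

    def __init__(self):
        self.rep = {}       # peer id -> reputation score
        self.index = {}     # file key -> set of peer ids advertising it

    def publish(self, peer, key):
        """Register that `peer` can serve file `key`."""
        self.index.setdefault(key, set()).add(peer)
        self.rep.setdefault(peer, 0.0)

    def record(self, peer, success):
        """Update reputation after an interaction with `peer`."""
        self.rep[peer] = self.rep.get(peer, 0.0) + (1.0 if success else -2.0)

    def lookup(self, key):
        """Return the most trusted peer advertising `key`, or None."""
        peers = self.index.get(key, set())
        return max(peers, key=lambda p: self.rep.get(p, 0.0), default=None)

t = ReputationTable()
t.publish("A", "file1"); t.publish("B", "file1")
t.record("A", True); t.record("A", True)   # A behaves well twice
t.record("B", False)                        # B fails once
```

Routing downloads through `lookup` means peers that repeatedly misbehave (as bot-controlled hosts would) are quickly starved of traffic.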
Comparative Study of Diabetic Patient Data’s Using Classification Algorithm i...
Data mining refers to extracting knowledge from large amounts of data. Real-life data mining tasks are interesting because they often present a different set of problems, such as those posed by diabetic patients' data. Classification is one of the main problems in the field. This research gives an algorithmic discussion of J48, J48 Graft, Random Tree, REP, and LAD, used here to compare computing time, correctly classified instances, kappa statistic, MAE, RMSE, RAE, and RRSE, and to find the error-rate measurement for the different classifiers in Weka. In this paper, the diabetic-patient data set to be classified was developed by collecting data from a hospital repository; it consists of 1865 instances with different attributes, falling into two categories: blood tests and urine tests. The Weka tool is used to classify the data, which is evaluated using 10-fold cross-validation, and the results are compared. Comparing the performance of the algorithms, we found J48 to be the better algorithm in most cases.
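The 10-fold cross-validation protocol underlying the Weka comparison can be sketched generically. Here `train_predict` stands in for any of the compared classifiers (J48, Random Tree, and so on), which are not reimplemented; the demo uses a trivially perfect "classifier" only to exercise the folding logic.

```python
import random

def kfold_accuracy(X, y, train_predict, k=10, seed=0):
    """Shuffle once, split into k folds, and average held-out accuracy.
    `train_predict(Xtr, ytr, Xte)` must return predictions for the test fold."""
    idx = list(range(len(X)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]          # k roughly equal folds
    accs = []
    for f in folds:
        test = set(f)
        Xtr = [X[i] for i in idx if i not in test]
        ytr = [y[i] for i in idx if i not in test]
        Xte = [X[i] for i in f]
        yte = [y[i] for i in f]
        preds = train_predict(Xtr, ytr, Xte)
        accs.append(sum(p == t for p, t in zip(preds, yte)) / len(yte))
    return sum(accs) / len(accs)

# Demo: labels equal the lone feature, so echoing the feature is a perfect classifier.
X = [i % 2 for i in range(40)]
y = X[:]
echo = lambda Xtr, ytr, Xte: Xte
```

Running each candidate classifier through the same folds, as Weka does, makes the per-metric comparison (accuracy, kappa, MAE, ...) fair across algorithms.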
Robust Breast Cancer Diagnosis on Four Different Datasets Using Multi-Classif...
Abstract — The goal of this paper is to compare different classifiers and multi-classifier fusion with respect to accuracy in discovering breast cancer on four different data sets. We present an implementation of various classification techniques, representing the best-known algorithms in this field, on four different breast cancer datasets: two for diagnosis and two for prognosis. We fuse classifiers to find the best multi-classifier fusion approach for each data set individually, using a confusion matrix to obtain classification accuracy within a 10-fold cross-validation procedure, and fusing by majority voting (the mode of the classifier outputs). The experimental results show that no classification technique is better than the others across all datasets, since the classification task is affected by the type of dataset. Using multi-classifier fusion, accuracy improved on three of the four datasets.
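The majority-voting fusion (mode of the classifier outputs) used above reduces to a few lines. The tie-breaking rule here, first label to reach the top count, is an assumption, since the paper does not specify one.

```python
from collections import Counter

def majority_vote(votes):
    """Fuse classifier outputs by taking the mode of the predicted labels."""
    counts = Counter(votes)
    top = max(counts.values())
    for v in votes:                 # first label attaining the top count wins ties
        if counts[v] == top:
            return v
```

For instance, three classifiers predicting 'benign', 'malignant', 'benign' fuse to 'benign'. With an odd number of diverse classifiers, the fused decision can beat every individual one, which is what the three-out-of-four improvement reflects.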
A SURVEY OF METHODS FOR HANDLING DISK DATA IMBALANCE
Class imbalance exists in many classification problems, and since classifiers are optimised for accuracy, imbalance in the data classes can lead to classification challenges in which a few classes carry higher misclassification costs. The Backblaze dataset, a widely used dataset related to hard discs, has a small amount of failure data and a large amount of healthy-drive data, exhibiting a serious class imbalance. This paper provides a comprehensive overview of research in the field of imbalanced data classification. The discussion is organized into three main aspects: data-level methods, algorithmic-level methods, and hybrid methods. For each type of method, we summarize and analyze the existing problems, algorithmic ideas, strengths, and weaknesses. Additionally, the challenges of imbalanced data classification are discussed, along with strategies to address them. This makes it convenient for researchers to choose the appropriate method according to their needs.
‘Six Sigma Technique’ A Journey Through its Implementation
The manufacturing industries all over the world face tough challenges for growth, development, and sustainability in today's competitive environment. They have to reach an apex position by adapting to the global competitive environment, delivering goods and services at low cost, prime quality, and a better price to increase wealth and consumer satisfaction. Cost management ensures the profit, growth, and sustainability of a business through the implementation of continuous-improvement techniques like Six Sigma, which optimize business performance. The method drives customer satisfaction, low variation, and reductions in waste and cycle time, resulting in a competitive advantage over industries that did not implement it. The main objective of this paper, ‘Six Sigma Technique: A Journey Through Its Implementation’, is to conceptualize the effectiveness of the Six Sigma technique through the journey of its implementation. Aditi Sunilkumar Ghosalkar "‘Six Sigma Technique’: A Journey Through its Implementation" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd64546.pdf Paper Url: https://www.ijtsrd.com/other-scientific-research-area/other/64546/‘six-sigma-technique’-a-journey-through-its-implementation/aditi-sunilkumar-ghosalkar
Edge Computing in Space: Enhancing Data Processing and Communication for Space...
Edge computing, a paradigm that involves processing data closer to its source, has gained significant attention for its potential to revolutionize data processing and communication in space missions. With the increasing complexity and data volume generated by modern space missions, traditional centralized computing approaches face challenges related to latency, bandwidth, and security. Edge computing in space, involving on board processing and analysis of data, offers promising solutions to these challenges. This paper explores the concept of edge computing in space, its benefits, applications, and future prospects in enhancing space missions. Manish Verma "Edge Computing in Space: Enhancing Data Processing and Communication for Space Missions" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd64541.pdf Paper Url: https://www.ijtsrd.com/computer-science/artificial-intelligence/64541/edge-computing-in-space-enhancing-data-processing-and-communication-for-space-missions/manish-verma
Dynamics of Communal Politics in 21st Century India: Challenges and Prospects
Communal politics in India has evolved through centuries, weaving a complex tapestry shaped by historical legacies, colonial influences, and contemporary socio political transformations. This research comprehensively examines the dynamics of communal politics in 21st century India, emphasizing its historical roots, socio political dynamics, economic implications, challenges, and prospects for mitigation. The historical perspective unravels the intricate interplay of religious identities and power dynamics from ancient civilizations to the impact of colonial rule, providing insights into the evolution of communalism. The socio political dynamics section delves into the contemporary manifestations, exploring the roles of identity politics, socio economic disparities, and globalization. The economic implications section highlights how communal politics intersects with economic issues, perpetuating disparities and influencing resource allocation. Challenges posed by communal politics are scrutinized, revealing multifaceted issues ranging from social fragmentation to threats against democratic values. The prospects for mitigation present a multifaceted approach, incorporating policy interventions, community engagement, and educational initiatives. The paper conducts a comparative analysis with international examples, identifying common patterns such as identity politics and economic disparities. It also examines unique challenges, emphasizing Indias diverse religious landscape, historical legacy, and secular framework. Lessons for effective strategies are drawn from international experiences, offering insights into inclusive policies, interfaith dialogue, media regulation, and global cooperation. By scrutinizing historical epochs, contemporary dynamics, economic implications, and international comparisons, this research provides a comprehensive understanding of communal politics in India. 
The proposed strategies for mitigation underscore the importance of a holistic approach to foster social harmony, inclusivity, and democratic values. Rose Hossain "Dynamics of Communal Politics in 21st Century India: Challenges and Prospects" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd64528.pdf Paper Url: https://www.ijtsrd.com/humanities-and-the-arts/history/64528/dynamics-of-communal-politics-in-21st-century-india-challenges-and-prospects/rose-hossain
Assess Perspective and Knowledge of Healthcare Providers Towards Elehealth in...
Background and Objective Telehealth has become a well known tool for the delivery of health care in Saudi Arabia, and the perspective and knowledge of healthcare providers are influential in the implementation, adoption and advancement of the method. This systematic review was conducted to examine the current literature base regarding telehealth and the related healthcare professional perspective and knowledge in the Kingdom of Saudi Arabia. Materials and Methods This systematic review was conducted by searching 7 databases including, MEDLINE, CINHAL, Web of Science, Scopus, PubMed, PsycINFO, and ProQuest Central. Studies on healthcare practitioners telehealth knowledge and perspectives published in English in Saudi Arabia from 2000 to 2023 were included. Boland directed this comprehensive review. The researchers examined each connected study using the AXIS tool, which evaluates cross sectional systematic reviews. Narrative synthesis was used to summarise and convey the data. Results Out of 1840 search results, 10 studies were included. Positive outlook and limited knowledge among providers were seen across trials. Healthcare professionals like telehealth for its ability to improve quality, access, and delivery, save time and money, and be successful. Age, gender, occupation, and work experience also affect health workers knowledge. In Saudi Arabia, healthcare professionals face inadequate expert assistance, patient privacy, internet connection concerns, lack of training courses, lack of telehealth understanding, and high costs while performing telemedicine. Conclusions Healthcare practitioners telehealth perceptions and knowledge were examined in this systematic study. Its collection of concerned experts different personal attitudes and expertise would help enhance telehealths implementation in Saudi Arabia, develop its healthcare delivery alternative, and eliminate frequent problems. Badriah Mousa I Mulayhi | Dr. 
Jomin George | Judy Jenkins "Assess Perspective and Knowledge of Healthcare Providers Towards Elehealth in Saudi Arabia: A Systematic Review" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd64535.pdf Paper Url: https://www.ijtsrd.com/medicine/other/64535/assess-perspective-and-knowledge-of-healthcare-providers-towards-elehealth-in-saudi-arabia-a-systematic-review/badriah-mousa-i-mulayhi
The Impact of Digital Media on the Decentralization of Power and the Erosion ...
The impact of digital media on the distribution of power and the weakening of traditional gatekeepers has gained considerable attention in recent years. The adoption of digital technologies and the internet has resulted in declining influence and power for traditional gatekeepers such as publishing houses and news organizations. Simultaneously, digital media has facilitated the emergence of new voices and players in the media industry. Digital medias impact on power decentralization and gatekeeper erosion is visible in several ways. One significant aspect is the democratization of information, which enables anyone with an internet connection to publish and share content globally, leading to citizen journalism and bypassing traditional gatekeepers. Another aspect is the disruption of conventional media industry business models, as traditional organizations struggle to adjust to the decrease in advertising revenue and the rise of digital platforms. Alternative business models, such as subscription models and crowdfunding, have become more prevalent, leading to the emergence of new players. Overall, the impact of digital media on the distribution of power and the weakening of traditional gatekeepers has brought about significant changes in the media landscape and the way information is shared. Further research is required to fully comprehend the implications of these changes and their impact on society. Dr. Kusum Lata "The Impact of Digital Media on the Decentralization of Power and the Erosion of Traditional Gatekeepers" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd64544.pdf Paper Url: https://www.ijtsrd.com/humanities-and-the-arts/political-science/64544/the-impact-of-digital-media-on-the-decentralization-of-power-and-the-erosion-of-traditional-gatekeepers/dr-kusum-lata
Online Voices, Offline Impact Ambedkars Ideals and Socio Political Inclusion ...ijtsrd
This research investigates the nexus between online discussions on Dr. B.R. Ambedkars ideals and their impact on social inclusion among college students in Gurugram, Haryana. Surveying 240 students from 12 government colleges, findings indicate that 65 actively engage in online discussions, with 80 demonstrating moderate to high awareness of Ambedkars ideals. Statistically significant correlations reveal that higher online engagement correlates with increased awareness p 0.05 and perceived social inclusion. Variations across colleges and a notable effect of college type on perceived social inclusion highlight the influence of contextual factors. Furthermore, the intersectional analysis underscores nuanced differences based on gender, caste, and socio economic status. Dr. Kusum Lata "Online Voices, Offline Impact: Ambedkar's Ideals and Socio-Political Inclusion - A Study of Gurugram District" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd64543.pdf Paper Url: https://www.ijtsrd.com/humanities-and-the-arts/political-science/64543/online-voices-offline-impact-ambedkars-ideals-and-sociopolitical-inclusion--a-study-of-gurugram-district/dr-kusum-lata
Problems and Challenges of Agro Entreprenurship A Studyijtsrd
Noting calls for contextualizing Agro entrepreneurs problems and challenges of the agro entrepreneurs and for greater attention to the Role of entrepreneurs in agro entrepreneurship research, we conduct a systematic literature review of extent research in agriculture entrepreneurship to overcome the study objectives of complications of agro entrepreneurs through various factors, Development of agriculture products is a key factor for the overall economic growth of agro entrepreneurs Agro Entrepreneurs produces firsthand large scale employment, utilizes the labor and natural resources, This research outlines the problems of Weather and Soil Erosions, Market price fluctuation, stimulates labor cost problems, reduces concentration of Price volatility, Dependency on Intermediaries, induces Limited Bargaining Power, and Storage and Transportation Costs. This paper mainly devoted to highlight Problems and challenges faced for the sustainable of Agro Entrepreneurs in India. Vinay Prasad B "Problems and Challenges of Agro Entreprenurship - A Study" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd64540.pdf Paper Url: https://www.ijtsrd.com/other-scientific-research-area/other/64540/problems-and-challenges-of-agro-entreprenurship--a-study/vinay-prasad-b
Comparative Analysis of Total Corporate Disclosure of Selected IT Companies o...ijtsrd
Disclosure is a process through which a business enterprise communicates with external parties. A corporate disclosure is communication of financial and non financial information of the activities of a business enterprise to the interested entities. Corporate disclosure is done through publishing annual reports. So corporate disclosure through annual reports plays a vital role in the life of all the companies and provides valuable information to investors. The basic objectives of corporate disclosure is to give a true and fair view of companies to the parties related either directly or indirectly like owner, government, creditors, shareholders etc. in the companies act, provisions have been made about mandatory and voluntary disclosure. The IT sector in India is rapidly growing, the trend to invest in the IT sector is rising and employment opportunities in IT sectors are also increasing. Therefore the IT sector is expected to have fair, full and adequate disclosure of all information. Unfair and incomplete disclosure may adversely affect the entire economy. A research study on disclosure practices of IT companies could play an important role in this regard. Hence, the present research study has been done to study and review comparative analysis of total corporate disclosure of selected IT companies of India and to put forward overall findings and suggestions with a view to increase disclosure score of these companies. The researcher hopes that the present research study will be helpful to all selected Companies for improving level of corporate disclosure through annual reports as well as the government, creditors, investors, all business organizations and upcoming researcher for comparative analyses of level of corporate disclosure with special reference to selected IT companies. Dr. Vaibhavi D. 
Thaker "Comparative Analysis of Total Corporate Disclosure of Selected IT Companies of India" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd64539.pdf Paper Url: https://www.ijtsrd.com/other-scientific-research-area/other/64539/comparative-analysis-of-total-corporate-disclosure-of-selected-it-companies-of-india/dr-vaibhavi-d-thaker
The Impact of Educational Background and Professional Training on Human Right...ijtsrd
This study investigated the impact of educational background and professional training on human rights awareness among secondary school teachers in the Marathwada region of Maharashtra, India. The key findings reveal that higher levels of education, particularly a master’s degree, and fields of study related to education, humanities, or social sciences are associated with greater human rights awareness among teachers. Additionally, both pre service teacher training and in service professional development programs focused on human rights education significantly enhance teacher’s knowledge, skills, and competencies in promoting human rights principles in their classrooms. Baig Ameer Bee Mirza Abdul Aziz | Dr. Syed Azaz Ali Amjad Ali "The Impact of Educational Background and Professional Training on Human Rights Awareness among Secondary School Teachers" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd64529.pdf Paper Url: https://www.ijtsrd.com/humanities-and-the-arts/education/64529/the-impact-of-educational-background-and-professional-training-on-human-rights-awareness-among-secondary-school-teachers/baig-ameer-bee-mirza-abdul-aziz
A Study on the Effective Teaching Learning Process in English Curriculum at t...ijtsrd
“One Language sets you in a corridor for life. Two languages open every door along the way” Frank Smith English as a foreign language or as a second language has been ruling in India since the period of Lord Macaulay. But the question is how much we teach or learn English properly in our culture. Is there any scope to use English as a language rather than a subject How much we learn or teach English without any interference of mother language specially in the classroom teaching learning scenario in West Bengal By considering all these issues the researcher has attempted in this article to focus on the effective teaching learning process comparing to other traditional strategies in the field of English curriculum at the secondary level to investigate whether they fulfill the present teaching learning requirements or not by examining the validity of the present curriculum of English. The purpose of this study is to focus on the effectiveness of the systematic, scientific, sequential and logical transaction of the course between the teachers and the learners in the perspective of the 5Es programme that is engage, explore, explain, extend and evaluate. Sanchali Mondal | Santinath Sarkar "A Study on the Effective Teaching Learning Process in English Curriculum at the Secondary Level of West Bengal" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd62412.pdf Paper Url: https://www.ijtsrd.com/humanities-and-the-arts/education/62412/a-study-on-the-effective-teaching-learning-process-in-english-curriculum-at-the-secondary-level-of-west-bengal/sanchali-mondal
The Role of Mentoring and Its Influence on the Effectiveness of the Teaching ...ijtsrd
This paper reports on a study which was conducted to investigate the role of mentoring and its influence on the effectiveness of the teaching of Physics in secondary schools in the South West Region of Cameroon. The study adopted the convergent parallel mixed methods design, focusing on respondents in secondary schools in the South West Region of Cameroon. Both quantitative and qualitative data were collected, analysed separately, and the results were compared to see if the findings confirm or disconfirm each other. The quantitative analysis found that majority of the respondents 72 of Physics teachers affirmed that they had more experienced colleagues as mentors to help build their confidence, improve their teaching, and help them improve their effectiveness and efficiency in guiding learners’ achievements. Only 28 of the respondents disagreed with these statements. With majority respondents 72 agreeing with the statements, it implies that in most secondary schools, experienced Physics teachers act as mentors to build teachers’ confidence in teaching and improving students’ learning. The interview qualitative data analysis summarized how secondary school Principals use meetings with mentors and mentees to promote mentorship in the school milieu. This has helped strengthen teachers’ classroom practices in secondary schools in the South West Region of Cameroon. With the results confirming each other, the study recommends that mentoring should focus on helping teachers employ social interactions and instructional practices feedback and clarity in teaching that have direct measurable impact on students’ learning achievements. 
Andrew Ngeim Sumba | Frederick Ebot Ashu | Peter Agborbechem Tambi "The Role of Mentoring and Its Influence on the Effectiveness of the Teaching of Physics in Secondary Schools in the South West Region of Cameroon" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd64524.pdf Paper Url: https://www.ijtsrd.com/management/management-development/64524/the-role-of-mentoring-and-its-influence-on-the-effectiveness-of-the-teaching-of-physics-in-secondary-schools-in-the-south-west-region-of-cameroon/andrew-ngeim-sumba
Design Simulation and Hardware Construction of an Arduino Microcontroller Bas...ijtsrd
This study primarily focuses on the design of a high side buck converter using an Arduino microcontroller. The converter is specifically intended for use in DC DC applications, particularly in standalone solar PV systems where the PV output voltage exceeds the load or battery voltage. To evaluate the performance of the converter, simulation experiments are conducted using Proteus Software. These simulations provide insights into the input and output voltages, currents, powers, and efficiency under different state of charge SoC conditions of a 12V,70Ah rechargeable lead acid battery. Additionally, the hardware design of the converter is implemented, and practical data is collected through operation, monitoring, and recording. By comparing the simulation results with the practical results, the efficiency and performance of the designed converter are assessed. The findings indicate that while the buck converter is suitable for practical use in standalone PV systems, its efficiency is compromised due to a lower output current. Chan Myae Aung | Dr. Ei Mon "Design Simulation and Hardware Construction of an Arduino-Microcontroller Based DC-DC High-Side Buck Converter for Standalone PV System" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd64518.pdf Paper Url: https://www.ijtsrd.com/engineering/mechanical-engineering/64518/design-simulation-and-hardware-construction-of-an-arduinomicrocontroller-based-dcdc-highside-buck-converter-for-standalone-pv-system/chan-myae-aung
Sustainable Energy by Paul A. Adekunte | Matthew N. O. Sadiku | Janet O. Sadikuijtsrd
Energy becomes sustainable if it meets the needs of the present without compromising the ability of future generations to meet their own needs. Some of the definitions of sustainable energy include the considerations of environmental aspects such as greenhouse gas emissions, social, and economic aspects such as energy poverty. Generally far more sustainable than fossil fuel are renewable energy sources such as wind, hydroelectric power, solar, and geothermal energy sources. Worthy of note is that some renewable energy projects, like the clearing of forests to produce biofuels, can cause severe environmental damage. The sustainability of nuclear power which is a low carbon source is highly debated because of concerns about radioactive waste, nuclear proliferation, and accidents. The switching from coal to natural gas has environmental benefits, including a lower climate impact, but could lead to delay in switching to more sustainable options. “Carbon capture and storage” can be built into power plants to remove the carbon dioxide CO2 emissions, but this technology is expensive and has rarely been implemented. Leading non renewable energy sources around the world is fossil fuels, coal, petroleum, and natural gas. Nuclear energy is usually considered another non renewable energy source, although nuclear energy itself is a renewable energy source, but the material used in nuclear power plants is not. The paper addresses the issue of sustainable energy, its attendant benefits to the future generation, and humanity in general. Paul A. Adekunte | Matthew N. O. Sadiku | Janet O. Sadiku "Sustainable Energy" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd64534.pdf Paper Url: https://www.ijtsrd.com/engineering/electrical-engineering/64534/sustainable-energy/paul-a-adekunte
Concepts for Sudan Survey Act Implementations Executive Regulations and Stand...ijtsrd
This paper aims to outline the executive regulations, survey standards, and specifications required for the implementation of the Sudan Survey Act, and for regulating and organizing all surveying work activities in Sudan. The act has been discussed for more than 5 years. The Land Survey Act was initiated by the Sudan Survey Authority and all official legislations were headed by the Sudan Ministry of Justice till it was issued in 2022. The paper presents conceptual guidelines to be used for the Survey Act implementation and to regulate the survey work practice, standardizing the field surveys, processing, quality control, procedures, and the processes related to survey work carried out by the stakeholders and relevant authorities in Sudan. The conceptual guidelines are meant to improve the quality and harmonization of geospatial data and to aid decision making processes as well as geospatial information systems. The established comprehensive executive regulations will govern and regulate the implementation of the Sudan Survey Geomatics Act in all surveying and mapping practices undertaken by the Sudan Survey Authority SSA and state local survey departments for public or private sector organizations. The targeted standards and specifications include the reference frame, projection, coordinate systems, and the guidelines and specifications that must be followed in the field of survey work, processes, and mapping products. In the last few decades, there has been a growing awareness of the importance of geomatics activities and measurements on the Earths surface in space and time, together with observing and mapping the changes. In such cases, data must be captured promptly, standardized, and obtained with more accuracy and specified in much detail. The paper will also highlight the current situation in Sudan, the degree to which survey standards are used, the problems encountered, and the errors that arise from not using the standards and survey specifications. 
Kamal A. A. Sami "Concepts for Sudan Survey Act Implementations - Executive Regulations and Standards" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd63484.pdf Paper Url: https://www.ijtsrd.com/engineering/civil-engineering/63484/concepts-for-sudan-survey-act-implementations--executive-regulations-and-standards/kamal-a-a-sami
Towards the Implementation of the Sudan Interpolated Geoid Model Khartoum Sta...ijtsrd
The discussions between ellipsoid and geoid have invoked many researchers during the recent decades, especially during the GNSS technology era, which had witnessed a great deal of development but still geoid undulation requires more investigations. To figure out a solution for Sudans local geoid, this research has tried to intake the possibility of determining the geoid model by following two approaches, gravimetric and geometrical geoid model determination, by making use of GNSS leveling benchmarks at Khartoum state. The Benchmarks are well distributed in the study area, in which, the horizontal coordinates and the height above the ellipsoid have been observed by GNSS while orthometric heights were carried out using precise leveling. The Global Geopotential Model GGM represented in EGM2008 has been exploited to figure out the geoid undulation at the benchmarks in the study area. This is followed by a fitting process, that has been done to suit the geoid undulation data which has been computed using GNSS leveling data and geoid undulation inspired by the EGM2008. Two geoid surfaces were created after the fitting process to ensure that they are identical and both of them could be counted for getting the same geoid undulation with an acceptable accuracy. In this respect, statistical operation played an important role in ensuring the consistency and integrity of the model by applying cross validation techniques splitting the data into training and testing datasets for building the geoid model and testing its eligibility. The geometrical solution for geoid undulation computation has been utilized by applying straightforward equations that facilitate the calculation of the geoid undulation directly through applying statistical techniques for the GNSS leveling data of the study area to get the common equation parameters values that could be utilized to calculate geoid undulation of any position in the study area within the claimed accuracy. 
Both systems were checked and proved eligible to be used within the study area with acceptable accuracy which may contribute to solving the geoid undulation problem in the Khartoum area, and be further generalized to determine the geoid model over the entire country, and this could be considered in the future, for regional and continental geoid model. Ahmed M. A. Mohammed. | Kamal A. A. Sami "Towards the Implementation of the Sudan Interpolated Geoid Model (Khartoum State Case Study)" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd63483.pdf Paper Url: https://www.ijtsrd.com/engineering/civil-engineering/63483/towards-the-implementation-of-the-sudan-interpolated-geoid-model-khartoum-state-case-study/ahmed-m-a-mohammed
Activating Geospatial Information for Sudans Sustainable Investment Mapijtsrd
Sudan is witnessing an acceleration in the processes of development and transformation in the performance of government institutions to raise the productivity and investment efficiency of the government sector. The development plans and investment opportunities have focused on achieving national goals in various sectors. This paper aims to illuminate the path to the future and provide geospatial data and information to develop the investment climate and environment for all sized businesses, and to bridge the development gap between the Sudan states. The Sudan Survey Authority SSA is the main advisor to the Sudan Government in conducting surveying, mappings, designing, and developing systems related to geospatial data and information. In recent years, SSA made a strategic partnership with the Ministry of Investment to activate Geospatial Information for Sudans Sustainable Investment and in particular, for the preparation and implementation of the Sudan investment map, based on the directives and objectives of the Ministry of Investment MI in Sudan. This paper comes within the framework of activating the efforts of the Ministry of Investment to develop technical investment services by applying techniques adopted by the Ministry and its strategic partners for advancing investment processes in the country. Kamal A. A. Sami "Activating Geospatial Information for Sudan's Sustainable Investment Map" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd63482.pdf Paper Url: https://www.ijtsrd.com/engineering/information-technology/63482/activating-geospatial-information-for-sudans-sustainable-investment-map/kamal-a-a-sami
Educational Unity Embracing Diversity for a Stronger Societyijtsrd
In a rapidly changing global landscape, the importance of education as a unifying force cannot be overstated. This paper explores the crucial role of educational unity in fostering a stronger and more inclusive society through the embrace of diversity. By examining the benefits of diverse learning environments, the paper aims to highlight the positive impact on societal strength. The discussion encompasses various dimensions, from curriculum design to classroom dynamics, and emphasizes the need for educational institutions to become catalysts for unity in diversity. It highlights the need for a paradigm shift in educational policies, curricula, and pedagogical approaches to ensure that they are reflective of the diverse fabric of society. This paper also addresses the challenges associated with implementing inclusive educational practices and offers practical strategies for overcoming barriers. It advocates for collaborative efforts between educational institutions, policymakers, and communities to create a supportive ecosystem that promotes diversity and unity. Mr. Amit Adhikari | Madhumita Teli | Gopal Adhikari "Educational Unity: Embracing Diversity for a Stronger Society" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd64525.pdf Paper Url: https://www.ijtsrd.com/humanities-and-the-arts/education/64525/educational-unity-embracing-diversity-for-a-stronger-society/mr-amit-adhikari
Integration of Indian Indigenous Knowledge System in Management Prospects and...ijtsrd
The diversity of indigenous knowledge systems in India is vast and can vary significantly between different communities and regions. Preserving and respecting these knowledge systems is crucial for maintaining cultural heritage, promoting sustainable practices, and fostering cross cultural understanding. In this paper, an overview of the prospects and challenges associated with incorporating Indian indigenous knowledge into management is explored. It is found that IIKS helps in management in many areas like sustainable development, tourism, food security, natural resource management, cultural preservation and innovation, etc. However, IIKS integration with management faces some challenges in the form of a lack of documentation, cultural sensitivity, language barriers legal framework, etc. Savita Lathwal "Integration of Indian Indigenous Knowledge System in Management: Prospects and Challenges" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd63500.pdf Paper Url: https://www.ijtsrd.com/management/accounting-and-finance/63500/integration-of-indian-indigenous-knowledge-system-in-management-prospects-and-challenges/savita-lathwal
DeepMask Transforming Face Mask Identification for Better Pandemic Control in...ijtsrd
The COVID 19 pandemic has highlighted the crucial need of preventive measures, with widespread use of face masks being a key method for slowing the viruss spread. This research investigates face mask identification using deep learning as a technological solution to be reducing the risk of coronavirus transmission. The proposed method uses state of the art convolutional neural networks CNNs and transfer learning to automatically recognize persons who are not wearing masks in a variety of circumstances. We discuss how this strategy improves public health and safety by providing an efficient manner of enforcing mask wearing standards. The report also discusses the obstacles, ethical concerns, and prospective applications of face mask detection systems in the ongoing fight against the pandemic. Dilip Kumar Sharma | Aaditya Yadav "DeepMask: Transforming Face Mask Identification for Better Pandemic Control in the COVID-19 Era" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd64522.pdf Paper Url: https://www.ijtsrd.com/engineering/electronics-and-communication-engineering/64522/deepmask-transforming-face-mask-identification-for-better-pandemic-control-in-the-covid19-era/dilip-kumar-sharma
Streamlining Data Collection eCRF Design and Machine Learningijtsrd
Efficient and accurate data collection is paramount in clinical trials, and the design of Electronic Case Report Forms eCRFs plays a pivotal role in streamlining this process. This paper explores the integration of machine learning techniques in the design and implementation of eCRFs to enhance data collection efficiency. We delve into the synergies between eCRF design principles and machine learning algorithms, aiming to optimize data quality, reduce errors, and expedite the overall data collection process. The application of machine learning in eCRF design brings forth innovative approaches to data validation, anomaly detection, and real time adaptability. This paper discusses the benefits, challenges, and future prospects of leveraging machine learning in eCRF design for streamlined and advanced data collection in clinical trials. Dhanalakshmi D | Vijaya Lakshmi Kannareddy "Streamlining Data Collection: eCRF Design and Machine Learning" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd63515.pdf Paper Url: https://www.ijtsrd.com/biological-science/biotechnology/63515/streamlining-data-collection-ecrf-design-and-machine-learning/dhanalakshmi-d
International Journal of Trend in Scientific Research and Development (IJTSRD) @ www.ijtsrd.com eISSN: 2456-6470
@ IJTSRD | Unique Paper ID – IJTSRD25255 | Volume – 3 | Issue – 5 | July - August 2019 Page 167
2. TEXT PRE-PROCESSING
Classification is an important task of pattern recognition. A range of classification learning algorithms, such as decision trees, back-propagation neural networks, Bayesian networks, nearest neighbor, support vector machines, and the more recently reported associative classification, have been well developed and successfully applied to many application domains. However, an imbalanced class distribution in a data set poses a serious difficulty for most classifier learning algorithms, which assume a relatively balanced distribution. Imbalanced data is characterized as having many more instances of certain classes than of others. Because rare instances occur infrequently, classification rules that predict the small classes tend to be rare, undiscovered, or ignored; consequently, test samples belonging to the small classes are misclassified more often than those belonging to the prevalent classes. In certain applications, the correct classification of samples in the small classes has greater value than the contrary case. For example, in a disease diagnostic problem where disease cases are quite rare compared with the normal population, the recognition goal is to detect people with the disease. Hence, a favorable classification model is one that provides a higher identification rate on the disease category. The imbalanced or skewed class distribution problem is therefore also referred to as the small or rare class learning problem.
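The pitfall described above can be seen with a tiny sketch: on a hypothetical 95:5 class split (counts invented for illustration), a degenerate classifier that always predicts the majority class reaches 95% accuracy while detecting no minority (disease) cases at all.

```python
# Hypothetical 100-sample dataset: 95 "normal" (0) and 5 "disease" (1).
labels = [0] * 95 + [1] * 5

# A degenerate classifier that always predicts the majority class.
preds = [0] * 100

accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)
tp = sum(p == 1 and y == 1 for p, y in zip(preds, labels))
recall = tp / sum(y == 1 for y in labels)  # identification rate on the disease class

print(accuracy)  # 0.95, despite the classifier being useless
print(recall)    # 0.0: no disease case is ever detected
```

This is why the metrics used later (sensitivity, specificity, PPV, NPV, and their harmonic mean) are preferred over plain accuracy for imbalanced data.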
3. LITERATURE SURVEY
2017 [1]: There have been many attempts to classify imbalanced data, since this classification is critical in a wide variety of applications related to the detection of anomalies, failures, and risks. Many conventional methods, which can be categorized into sampling, cost-sensitive, or ensemble approaches, include heuristic and task-dependent processes. In order to achieve better classification performance through a formulation without heuristics and task dependence, the authors propose confusion-matrix-based kernel logistic regression (CM-KLOGR).
2017 [2]: Big Data applications have been emerging during the last years, and researchers from many disciplines are aware of the high advantages of knowledge extraction from this type of problem. However, traditional learning approaches cannot be applied directly due to scalability issues. To overcome this, the MapReduce framework has arisen as a "de facto" solution. Basically, it carries out a "divide-and-conquer" distributed procedure in a fault-tolerant way, adapted to commodity hardware. The discipline being still recent, little research has been conducted on imbalanced classification for Big Data.
2014 [3]: Classification of data collected from students of a polytechnic institute is discussed. The data is pre-processed to remove unwanted and less meaningful attributes. The students are then classified into categories such as brilliant, average, and weak using decision tree and naïve Bayesian algorithms. The processing is done using the WEKA data mining tool. The paper also compares the classification results with respect to different performance parameters.
2014 [4]: Rough set theory provides a useful mathematical concept for drawing decisions from real-life data involving vagueness, uncertainty, and impreciseness, and is therefore applied successfully in the fields of pattern recognition, machine learning, and knowledge discovery.
2012 [5]: Re-sampling is a popular and effective technique for imbalanced learning. However, most re-sampling methods ignore data density information and may lead to overfitting. A novel adaptive over-sampling technique based on data density (ASMOBD) is proposed. Compared with existing re-sampling algorithms, ASMOBD can adaptively synthesize a different number of new samples around each minority sample according to its level of learning difficulty.
2012 [6]: Classifier learning with data sets that suffer from imbalanced class distributions is a challenging problem in the data mining community. This issue occurs when the number of examples representing one class is much lower than that of the other classes. Its presence in many real-world applications has brought along growing attention from researchers.
4. PROBLEM IDENTIFICATION AND OBJECTIVES
The problems identified in the current research work are as follows:
1. Unrelated data are classified for a specific dataset.
2. Inconsistency exists during the classification process.
3. Low sampling produces an irregular sampling rate.
4. The classification is uncertain.
The desired objectives of the proposed work are as follows:
1. To improve precision and recall so that related data are classified.
2. To improve the positive predictive value in order to reduce inconsistency during the classification process.
3. To improve the F1-measure for a regular sampling rate.
4. To improve the harmonic mean for certainty of classification.
5. METHODOLOGY
The RBF-NN algorithm is performed in three phases.
Phase 1: Particle Swarm Optimization
Step 1: Initialize
// initialize all particles
Step 2: Repeat
for each particle i in S do
// update the particle's best solution
if f(xi) < f(pbi) then
pbi = xi
end if
// update the global best solution
if f(pbi) < f(gb) then
gb = pbi
end if
end for
Step 3:
// update each particle's velocity and position
for each particle i in S do
for each dimension d in D do
vi,d = vi,d + C1 * Rnd(0,1) * [pbi,d - xi,d] + C2 * Rnd(0,1) * [gbd - xi,d]
xi,d = xi,d + vi,d
end for
end for
Step 4:
// advance iteration
it = it + 1
until it > MAX_ITERATIONS
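The Phase 1 update rules can be sketched in Python as a minimal minimization loop. The inertia weight `w` (a common stabilizing extension not shown in the pseudocode above), the values of `c1`, `c2`, and the search bounds are illustrative assumptions, not parameters given in the paper.

```python
import random

def pso(f, dim, n_particles=20, max_iters=200,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimize f over `dim` dimensions with a basic PSO loop."""
    lo, hi = bounds
    # initialize positions, velocities, and personal/global bests
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pb = [xi[:] for xi in x]              # personal best positions
    pb_val = [f(xi) for xi in x]          # personal best values
    g = min(range(n_particles), key=lambda i: pb_val[i])
    gb, gb_val = pb[g][:], pb_val[g]      # global best

    for _ in range(max_iters):
        for i in range(n_particles):
            # velocity and position update (Step 3 in the pseudocode)
            for d in range(dim):
                v[i][d] = (w * v[i][d]
                           + c1 * random.random() * (pb[i][d] - x[i][d])
                           + c2 * random.random() * (gb[d] - x[i][d]))
                x[i][d] += v[i][d]
            # personal- and global-best updates (Step 2)
            val = f(x[i])
            if val < pb_val[i]:
                pb[i], pb_val[i] = x[i][:], val
                if val < gb_val:
                    gb, gb_val = x[i][:], val
    return gb, gb_val
```

For example, `pso(lambda x: sum(t * t for t in x), dim=2)` drives the sphere function close to its minimum at the origin.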
Phase 2: Cuckoo Search Optimization
Step 5: Objective function f(x), x = (x1, x2, ..., xd)T
Step 6: Generate an initial population of n host nests xi (i = 1, 2, ..., n)
Step 7: while (t < MaxGeneration)
Generate a new solution (say, i) randomly
Evaluate its quality/fitness Fi
Choose a nest among the n (say, j) randomly
if (Fi > Fj)
Replace j by the new solution
end
A fraction (pa) of the worse nests is replaced by new random solutions
Keep the best solutions
Rank the solutions and find the current best
Pass the current best solution to the next generation
end while
return the best nest
end
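Phase 2 follows the outline of standard cuckoo search; a minimal minimization sketch is below. The Lévy-flight step generator, the step scale 0.01, and the search bounds are illustrative assumptions commonly used with this algorithm, not values from the paper.

```python
import math
import random

def levy_step(beta=1.5):
    """Draw a Levy-distributed step via Mantegna's algorithm."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta
                * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(f, dim, n_nests=15, max_gen=200, pa=0.25, bounds=(-5.0, 5.0)):
    """Minimize f with a basic cuckoo-search loop."""
    lo, hi = bounds
    nests = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_nests)]
    fit = [f(x) for x in nests]
    for _ in range(max_gen):
        # generate a new solution from a random nest by a Levy flight
        i = random.randrange(n_nests)
        new = [xi + 0.01 * levy_step() for xi in nests[i]]
        fn = f(new)
        # choose a nest among n randomly; replace it if the new solution is better
        j = random.randrange(n_nests)
        if fn < fit[j]:
            nests[j], fit[j] = new, fn
        # a fraction pa of the worse nests is replaced by new random solutions
        order = sorted(range(n_nests), key=lambda k: fit[k], reverse=True)
        for k in order[:int(pa * n_nests)]:
            nests[k] = [random.uniform(lo, hi) for _ in range(dim)]
            fit[k] = f(nests[k])
    # keep and return the best nest
    best = min(range(n_nests), key=lambda k: fit[k])
    return nests[best], fit[best]
```

The abandonment step replaces only the worst fraction of nests, so the best solutions survive from one generation to the next, as the pseudocode requires.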
Phase 3: Neural Network
Step 8: Assign random weights to all the linkages.
Step 9: Using the inputs and linkages, find the activation rate of the hidden nodes.
Step 10: Using the activation rate of the hidden nodes and the linkages to the output, find the activation rate of the output nodes.
Step 11: Find the error rate of the output nodes and recalibrate all the linkages between the hidden nodes and output nodes.
Step 12: Using the weights and the error found at the output nodes, cascade the error down to the hidden nodes.
Step 13: Recalibrate the weights between the hidden nodes and input nodes.
Step 14: Repeat the process until the convergence criterion is met.
Step 15: Using the final linkage weights, score the activation rate of the output nodes.
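Steps 8-15 amount to standard backpropagation on a one-hidden-layer network. A minimal sketch follows; the sigmoid activation, learning rate, hidden-layer size, and fixed epoch count (in place of a convergence criterion) are illustrative assumptions.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_mlp(data, n_hidden=4, lr=0.5, epochs=2000):
    """Steps 8-15: a one-hidden-layer network trained with backpropagation."""
    n_in = len(data[0][0])
    # Step 8: assign random weights to all linkages (last column is the bias)
    w1 = [[random.uniform(-1, 1) for _ in range(n_in + 1)] for _ in range(n_hidden)]
    w2 = [random.uniform(-1, 1) for _ in range(n_hidden + 1)]
    for _ in range(epochs):                      # Step 14: repeat
        for x, t in data:
            xi = list(x) + [1.0]                 # input plus bias term
            # Step 9: activation rate of the hidden nodes
            h = [sigmoid(sum(w * v for w, v in zip(row, xi))) for row in w1]
            hb = h + [1.0]
            # Step 10: activation rate of the output node
            y = sigmoid(sum(w * v for w, v in zip(w2, hb)))
            # Step 11: output error (delta for a sigmoid output)
            dy = (y - t) * y * (1 - y)
            # Step 12: cascade the error down to the hidden nodes
            dh = [dy * w2[j] * h[j] * (1 - h[j]) for j in range(n_hidden)]
            # recalibrate hidden-to-output linkages (Step 11)
            for j in range(n_hidden + 1):
                w2[j] -= lr * dy * hb[j]
            # Step 13: recalibrate input-to-hidden linkages
            for j in range(n_hidden):
                for k in range(n_in + 1):
                    w1[j][k] -= lr * dh[j] * xi[k]
    # Step 15: score with the final linkage weights
    def predict(x):
        xi = list(x) + [1.0]
        h = [sigmoid(sum(w * v for w, v in zip(row, xi))) for row in w1] + [1.0]
        return sigmoid(sum(w * v for w, v in zip(w2, h)))
    return predict
```

For example, training on the four input/output pairs of the logical OR function yields a predictor that scores near 0 for (0, 0) and near 1 for the other inputs.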
6. RESULTS AND ANALYSIS
It is difficult to evaluate classification performance precisely for imbalanced data, especially when the data set is small. Because of the imbalance and the small number of instances, the distribution of instances often differs between the training, validation, and test sets. This difference in distribution makes performance evaluation imprecise and consequently sometimes leads to improper settings. To address this, and for a fair comparison of the classifiers, the following processes were designed: dividing and feeding the datasets, setting the hyper-parameters, parameters, and cutoff, and estimating the performance.
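For reference, the criteria reported in Tables 1-5 (sensitivity, specificity, PPV, NPV, and their harmonic mean HM) all derive from a single binary confusion matrix. The sketch below uses equal weights in the harmonic mean, whereas CM-KLOGR and RBF-NN allow weighted priorities.

```python
def criteria(tp, fn, fp, tn):
    """Evaluation criteria derived from a binary confusion matrix."""
    sensitivity = tp / (tp + fn)   # recall on the positive (minority) class
    specificity = tn / (tn + fp)   # recall on the negative class
    ppv = tp / (tp + fp)           # positive predictive value (precision)
    npv = tn / (tn + fn)           # negative predictive value
    vals = [sensitivity, specificity, ppv, npv]
    hm = len(vals) / sum(1.0 / v for v in vals)   # harmonic mean of the criteria
    return sensitivity, specificity, ppv, npv, hm
```

Because the harmonic mean is dominated by its smallest argument, a classifier cannot score a high HM by sacrificing the minority class, which is why it is used as the balancing objective here.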
Table 1: Comparative analysis of sensitivity between CM-KLOGR [1] and RBF-NN (Proposed)
Dataset         CM-KLOGR [1]   RBF-NN (Proposed)
Breast          95.83          97.24
Haberman        75.00          78.18
Ecoli-pp        100.00         100.00
Ecoli-imu       50.00          53.21
Pop failures    100.00         100.00
Yeast-1 vs 7    100.00         100.00
Figure 1: Graphical analysis of sensitivity between CM-KLOGR [1] and RBF-NN (Proposed)
In Table 1, the recall (sensitivity) of the proposed work improves over CM-KLOGR [1] for all considered datasets. Concretely speaking, RBF-NN performed best under low imbalance, tied for first place with CM-KLOGR under moderate imbalance, and performed best under high imbalance.
Table 2: Comparative analysis of specificity between CM-KLOGR [1] and RBF-NN (Proposed)
Dataset         CM-KLOGR [1]   RBF-NN (Proposed)
Breast          95.65          97.21
Haberman        82.61          85.72
Ecoli-pp        93.10          95.41
Ecoli-imu       96.67          98.96
Pop failures    95.92          97.13
Yeast-1 vs 7    88.37          91.08
Figure 2: Graphical analysis of specificity between CM-KLOGR [1] and RBF-NN (Proposed)
In Table 2, the specificity of the proposed work improves over CM-KLOGR [1] for all considered datasets. Concretely speaking, RBF-NN best identifies the relevant items in the predefined datasets and performed best under both moderate and high imbalance.
Table 3: Comparative analysis of PPV between CM-KLOGR [1] and RBF-NN (Proposed)
Dataset         CM-KLOGR [1]   RBF-NN (Proposed)
Breast          92.00          93.76
Haberman        60.00          62.13
Ecoli-pp        71.43          73.66
Ecoli-imu       66.67          69.82
Pop failures    71.43          73.64
Yeast-1 vs 7    37.50          39.83
Figure 3: Graphical analysis of PPV between CM-KLOGR [1] and RBF-NN (Proposed)
In Table 3, the PPV of the proposed work improves over CM-KLOGR [1] for all considered datasets.
Table 4: Comparative analysis of NPV between CM-KLOGR [1] and RBF-NN (Proposed)
Dataset         CM-KLOGR [1]   RBF-NN (Proposed)
Breast          97.78          98.81
Haberman        90.48          93.08
Ecoli-pp        100.00         100.00
Ecoli-imu       93.55          95.14
Pop failures    100.00         100.00
Yeast-1 vs 7    100.00         100.00
Figure 4: Graphical analysis of NPV between CM-KLOGR [1] and RBF-NN (Proposed)
In Table 4, the NPV of the proposed work improves over CM-KLOGR [1] for all considered datasets. Concretely speaking, RBF-NN best identifies the negativity of unrelated items in the predefined datasets, tied for first place with CM-KLOGR under moderate imbalance, and performed best under high imbalance.
Table 5: Comparative analysis of HM between CM-KLOGR [1] and RBF-NN (Proposed)
Dataset         CM-KLOGR [1]   RBF-NN (Proposed)
Breast          95.3061        96.06147
Haberman        75.2728        78.0031
Ecoli-pp        89.42544       90.8059
Ecoli-imu       71.4158        74.4601
Pop failures    90.0698        91.19927
Yeast-1 vs 7    69.0012        71.3393
Figure 5: Graphical analysis of HM between CM-KLOGR [1] and RBF-NN (Proposed)
7. CONCLUSIONS
In the experiments, RBF-NN outperformed CM-KLOGR and KLOGR, with or without sampling, for several datasets under different settings of the weights on the evaluation criteria. It was confirmed that RBF-NN can increase the values of the evaluation criteria in a well-balanced way, adaptively to their priorities.
1. Precision is improved by 1.9% and recall is improved by 1.47%, so related data are classified.
2. Predictive positivity is improved, reducing inconsistency during the classification process.
3. F1-measure is improved by 1.2% for a regular sampling rate.
To compute the density of each sample, the reachability-distance and core-distance must be computed first, which consumes much more time than RBF-NN itself. Reducing this computational complexity is another task we need to address in future work.
8. REFERENCES
[1] Miho Ohsaki, Peng Wang, Kenji Matsuda, Shigeru
Katagiri, Hideyuki Watanabe, and Anca Ralescu,
“Confusion-matrix-based Kernel Logistic Regression
for Imbalanced Data Classification”, IEEE Transactions
on Knowledge and Data Engineering, 2017.
[2] Alberto Fernández, Sara del Río, Nitesh V. Chawla, Francisco Herrera, “An insight into imbalanced Big Data classification: outcomes and challenges”, Springer journal of big data, 2017.
[3] Vaibhav P. Vasani, Rajendra D. Gawali, “Classification and performance evaluation using data mining algorithms”, International Journal of Innovative Research in Science, Engineering and Technology, 2014.
[4] Kaile Su, Huijing Huang, Xindong Wu, Shichao Zhang, “Rough Sets for Feature Selection and Classification: An Overview with Applications”, International Journal of Recent Technology and Engineering (IJRTE), ISSN: 2277-3878, Volume-3, Issue-5, November 2014.
[5] Senzhang Wang, Zhoujun Li, Wenhan Chao and Qinghua Cao, “Applying Adaptive Over-sampling Technique Based on Data Density and Cost-Sensitive SVM to Imbalanced Learning”, IEEE World Congress on Computational Intelligence, June 2012.
[6] Mikel Galar, Alberto Fernandez, Edurne Barrenechea, Humberto Bustince and Francisco Herrera, “A Review on Ensembles for the Class Imbalance Problem: Bagging, Boosting, and Hybrid-Based Approaches”, IEEE Transactions on Systems, Man and Cybernetics—Part C: Applications and Reviews, Vol. 42, No. 4, July 2012.
[7] Nada M. A. Al Salami, “Mining High Speed Data
Streams”. UbiCC Journal, 2011.
[8] Dian Palupi Rini, Siti Mariyam Shamsuddin and Siti
Sophiyati, “Particle Swarm Optimization: Technique,
System and Challenges”, International Journal of
Computer Applications (0975 – 8887) Volume 14–
No.1, January 2011.
[9] Amit Saxena, Leeladhar Kumar Gavel, Madan Madhaw
Shrivas, “Online Streaming Feature Selection”, 27th
International Conference on Machine Learning, 2010.
[10] Yuchun Tang, Yan-Qing Zhang, Nitesh V. Chawla and Sven Krasser, “SVMs Modeling for Highly Imbalanced Classification”, IEEE Transactions on Systems, Man and Cybernetics, Vol. 39, No. 1, Feb 2009.
[11] Haibo He and Edwardo A. Garcia, “Learning from
Imbalanced Data”, IEEE Transactions on Knowledge
and Data Engineering, September 2009.
[12] Thair Nu Phyu, “Survey of Classification Techniques in
Data Mining”, International Multi Conference of
Engineers and Computer Scientists, IMECS 2009,
March, 2009.
[13] Haibo He, Yang Bai, Edwardo A. Garcia and Shutao Li,
“ADASYN: Adaptive Synthetic Sampling Approach for
Imbalanced Learning”, IEEE Transaction of Data
Mining, 2009.
[14] Swagatam Das, Ajith Abraham and Amit Konar,
“Particle Swarm Optimization and Differential
Evolution Algorithms: TechnicalAnalysis, Applications
and Hybridization Perspectives”, Springer journal on
knowledge engineering, 2008.
[15] “A logical framework for identifyingqualityknowledge
from different data sources”, International Conference
on Decision Support Systems, 2006.
[16] “Database classification for multi-database mining”,
International Conferenceon DecisionSupportSystems,
2005.
[17] Volker Roth, “Probabilistic Discriminative Kernel
Classifiers for Multi-class Problems”, Springer-Verlag
journal, 2001.
[18] R. Chen, K. Sivakumar and H. Kargupta “Collective
Mining of Bayesian Networks from Distributed
Heterogeneous Data”, Kluwer Academic Publishers,
2001.
[19] Shigeru Katagiri, Biing-Hwang Juang and Chin-Hui Lee, “Pattern Recognition Using a Family of Design Algorithms Based Upon the Generalized Probabilistic Descent Method”, IEEE Journal of Data Mining, 1998.
[20] I. Katakis, G. Tsoumakas, and I. Vlahavas. Tracking
recurring contexts using ensemble classifiers: an
application to email filtering. Knowledge and
Information Systems, Pp 371–391, 2010.
[21] J. Kolter and M. Maloof. Using additive expert
ensembles to cope with concept drift. In Proc. ICML,Pp
449–456, 2005.
[22] D. D. Lewis, Y. Yang, T. Rose, and F. Li. Rcv1: A new
benchmark collection for text categorization research.
Journal of Machine Learning Research, Pp 361–397,
2004.
[23] X. Li, P. S. Yu, B. Liu, and S.-K. Ng. Positive unlabeled
learning for data stream classification. In Proc.SDM,Pp
257–268, 2009.
[24] M. M. Masud, Q. Chen, J. Gao, L. Khan, J. Han, and B. M. Thuraisingham. Classification and novel class detection of data streams in a dynamic feature space. In Proc. ECML PKDD, volume II, Pp 337–352, 2010.
[25] P. Zhang, X. Zhu, J. Tan, and L. Guo, “Classifier and
Cluster Ensembles for Mining Concept Drifting Data
Streams,” Proc. 10th Int’l Conf. Data Mining, 2010.