Abstract — Nowadays, data mining plays a crucial role in disease prediction in the healthcare field. Data mining is the process of extracting useful information from very large data sets. Medical data are extremely voluminous, which makes disease prediction challenging for investigators. To overcome this, researchers use data mining techniques such as classification, clustering, and association rule mining. The main objective of this work is to predict liver disease from common attributes (intake of alcohol, smoking, obesity, diabetes, consumption of contaminated food, and family history of liver disease) using classification algorithms. The algorithms employed in this work are J48 and Naive Bayes, compared on the performance factors of accuracy and execution time. The experimental results identify the better classifier for predicting liver disease.
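As a sketch of the kind of comparison described (accuracy and execution time of a classifier), the snippet below hand-rolls a minimal Gaussian Naive Bayes on an invented two-attribute liver dataset. The attribute names, values, and labels are illustrative assumptions, not the paper's data; the study itself used Weka's J48 and Naive Bayes implementations.

```python
# Minimal Gaussian Naive Bayes, sketching one of the two classifiers compared
# above. Dataset, attribute names, and labels are invented for illustration.
import math
import time

def fit_gaussian_nb(X, y):
    """X: list of feature tuples, y: list of 0/1 labels."""
    stats = {}
    for label in set(y):
        rows = [x for x, lab in zip(X, y) if lab == label]
        prior = len(rows) / len(X)
        params = []
        for col in zip(*rows):
            mu = sum(col) / len(col)
            var = sum((v - mu) ** 2 for v in col) / len(col) + 1e-9
            params.append((mu, var))
        stats[label] = (prior, params)

    def predict(x):
        def log_posterior(label):
            prior, params = stats[label]
            lp = math.log(prior)
            for v, (mu, var) in zip(x, params):
                lp -= 0.5 * math.log(2 * math.pi * var) + (v - mu) ** 2 / (2 * var)
            return lp
        return max(stats, key=log_posterior)

    return predict

# Toy attributes: (alcohol units per week, BMI) -> liver-disease label
X = [(2, 22), (3, 24), (1, 21), (20, 31), (25, 33), (18, 29)]
y = [0, 0, 0, 1, 1, 1]
test = [((2, 23), 0), ((22, 30), 1)]

start = time.perf_counter()
model = fit_gaussian_nb(X, y)
accuracy = sum(model(x) == lab for x, lab in test) / len(test)
elapsed = time.perf_counter() - start
print(f"accuracy={accuracy:.2f}, time={elapsed:.6f}s")
```

The same harness pattern (train, score, stop the clock) covers both of the paper's performance factors.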
LIVER DISEASE PREDICTION BY USING DIFFERENT DECISION TREE TECHNIQUES (IJDKP)
Early prediction of liver disease is very important to save human life and to take proper steps to control the disease. Decision tree algorithms have been applied successfully in many fields, especially in medical science. This research work explores the early prediction of liver disease using various decision tree techniques. The liver disease dataset selected for this study consists of attributes such as total bilirubin, direct bilirubin, age, gender, total proteins, and the albumin and globulin ratio. The main purpose of this work is to measure and compare the performance of various decision tree techniques. The techniques used in this study are J48, LMT, Random Forest, Random Tree, REPTree, Decision Stump, and Hoeffding Tree. The analysis shows that Decision Stump provides higher accuracy than the other techniques.
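A Decision Stump, as compared above, is just a one-level decision tree: a single threshold on a single attribute. A minimal sketch follows; the bilirubin values and labels are toy data invented here, and the study itself used Weka's implementation.

```python
# Illustrative decision stump (depth-1 tree): pick the single threshold on one
# feature that minimizes training error. Data is made up for demonstration.
def fit_stump(xs, ys):
    best = None  # (error, threshold, left_label, right_label)
    for t in sorted(set(xs)):
        for left, right in [(0, 1), (1, 0)]:
            preds = [left if x <= t else right for x in xs]
            err = sum(p != y for p, y in zip(preds, ys))
            if best is None or err < best[0]:
                best = (err, t, left, right)
    _, t, left, right = best
    return lambda x: left if x <= t else right

# Toy data: total bilirubin level -> liver disease label (illustrative only)
xs = [0.5, 0.7, 0.9, 1.8, 2.5, 3.1]
ys = [0,   0,   0,   1,   1,   1]
stump = fit_stump(xs, ys)
print(stump(0.6), stump(2.0))  # prints: 0 1
```

Despite its simplicity, a stump can win on datasets where one attribute dominates, which is consistent with the paper's finding.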
An Approach for Disease Data Classification Using Fuzzy Support Vector Machine (IOSRJECE)
Data mining has great scope in the field of medicine. In this article we introduce a new fuzzy approach for the prediction of hepatitis disease. Many researchers have proposed the use of K-nearest neighbor (KNN) for diabetes prediction; some have proposed a different approach that uses K-means clustering for preprocessing and then KNN for classification. In our approach, a Naive Bayes classifier is used to clean the data, and the classification is then done using a fuzzy SVM algorithm. A hepatitis disease dataset is used to test the method, and we obtain a model more precise than the others available in the literature. The fuzzy SVM approach produced better results than KNN with fuzzy c-means and fuzzy KNN with fuzzy c-means. The introduction of the fuzzy support vector machine algorithm has a clear positive effect on hepatitis prediction, and the model led to a remarkable increase in classification accuracy.
AN IMPROVED MODEL FOR CLINICAL DECISION SUPPORT SYSTEM (ijaia)
Misguided information in health care has caused much havoc, leading to the deaths of millions of people as a result of misclassification and inconsistent health care records; hence the objective of this paper is to develop an improved clinical decision support system. The system incorporates a hybrid of non-knowledge-based and knowledge-based decision support for the diagnosis of diseases and proper health care delivery records, using prostate cancer and diabetes datasets to train and validate the model. The min-max method was adopted to normalize the datasets, while a genetic algorithm was deployed to initialize the training weights of the MLP. The results reported are a classification accuracy of 98%, sensitivity of 0.98, and specificity of 1.00 for prostate cancer, and accuracy of 94%, sensitivity of 0.94, and specificity of 0.67 for diabetes.
PREDICTION OF DIABETES MELLITUS USING MACHINE LEARNING TECHNIQUES (IAEME Publication)
Diabetes mellitus is a common disease caused by a set of metabolic disorders in which blood sugar levels remain very high over a prolonged period. It affects diverse organs of the human body and can therefore damage many of the body's systems, in particular the blood vessels and nerves. Early and accurate prediction of the disease can save human lives. To that end, this research work investigates the numerous factors associated with the disease using machine learning techniques. Machine learning methods can effectively extract knowledge by building predictive models from diagnostic medical datasets collected from diabetic patients, and mining knowledge from such data can be valuable for predicting which patients will develop diabetes. In this research, six widely used machine learning techniques, namely Random Forest (RF), Logistic Regression (LR), Naive Bayes (NB), C4.5 Decision Tree (DT), K-Nearest Neighbor (KNN), and Support Vector Machine (SVM), are compared in order to identify the best technique for forecasting diabetes mellitus. The results show that SVM achieved higher accuracy than the other techniques.
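The comparison protocol behind such multi-classifier studies is typically k-fold cross-validation. A stdlib-only sketch of that protocol follows, with two toy stand-in classifiers; the actual study compared RF, LR, NB, C4.5, KNN, and SVM, and the data here is invented.

```python
# Sketch of the evaluation protocol: k-fold cross-validation over several
# classifiers, reporting mean accuracy. Classifiers and data are toy stand-ins.
def k_fold_accuracy(fit, xs, ys, k=3):
    n = len(xs)
    folds = [list(range(i, n, k)) for i in range(k)]
    accs = []
    for test_idx in folds:
        train_idx = [i for i in range(n) if i not in test_idx]
        model = fit([xs[i] for i in train_idx], [ys[i] for i in train_idx])
        correct = sum(model(xs[i]) == ys[i] for i in test_idx)
        accs.append(correct / len(test_idx))
    return sum(accs) / k

def fit_threshold(xs, ys):
    """Predict 1 when the feature exceeds the training mean."""
    t = sum(xs) / len(xs)
    return lambda x: int(x > t)

def fit_constant(xs, ys):
    """Always predict the majority training label."""
    label = int(sum(ys) * 2 >= len(ys))
    return lambda x: label

xs = [1, 2, 3, 7, 8, 9]
ys = [0, 0, 0, 1, 1, 1]
for name, fit in [("threshold", fit_threshold), ("constant", fit_constant)]:
    print(name, round(k_fold_accuracy(fit, xs, ys), 2))
```

Each candidate classifier is scored by the same folds, so the reported accuracies are directly comparable.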
AN ALGORITHM FOR PREDICTIVE DATA MINING APPROACH IN MEDICAL DIAGNOSIS (ijcsit)
The healthcare industry contains big and complex data that can be mined to discover interesting disease patterns and to support effective decisions with the help of different machine learning techniques. Advanced data mining techniques are used to discover knowledge in databases and for medical research. This paper analyzes prediction systems for diabetes, kidney, and liver disease using a larger number of input attributes. The data mining classification techniques Support Vector Machine (SVM) and Random Forest (RF) are analyzed on diabetes, kidney, and liver disease databases, and their performance is compared on precision, recall, accuracy, F-measure, and time. The proposed algorithm, designed using SVM and RF, achieves experimental accuracies of 99.35%, 99.37%, and 99.14% on the diabetes, kidney, and liver disease datasets respectively.
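The performance measures named above (precision, recall, accuracy, F-measure) all derive from the confusion matrix. A quick sketch, with invented counts rather than the paper's results:

```python
# Classification metrics computed from confusion-matrix counts.
# The counts below are invented for illustration.
def metrics(tp, fp, fn, tn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, accuracy, f_measure

p, r, a, f = metrics(tp=45, fp=5, fn=5, tn=45)
print(f"precision={p:.2f} recall={r:.2f} accuracy={a:.2f} f_measure={f:.2f}")
```

F-measure is the harmonic mean of precision and recall, so it penalizes a classifier that trades one for the other.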
Patterns discovered from the collected molecular profiles of patient tumour samples, together with clinical metadata, could be used to provide personalized cancer treatment to patients with similar molecular subtypes. Computational algorithms for cancer diagnosis, prognosis, and therapeutics are needed that can recognize specific functions and support classifiers built on the plethora of publicly accessible cancer research outcomes. According to the literature, machine learning, a branch of artificial intelligence, has a great deal of potential for problem solving in cryptic cancer datasets. In this study we focus on the current state of machine learning applications in cancer research, illustrating trends and analysing major accomplishments, roadblocks, and challenges along the way to clinical implementation. In the context of noninvasive cancer treatment using diet-based and natural biomarkers, we propose a novel machine learning algorithm.
Application of Support Vector Machine and Fuzzy Logic for Detecting and Ident... (iosrjce)
Diabetes Prediction by Supervised and Unsupervised Approaches with Feature Se... (IJARIIT)
Two approaches to building models for predicting the onset of diabetes mellitus in juvenile subjects were examined. A set of tests performed immediately before diagnosis was used to build classifiers to predict whether the subject would be diagnosed with juvenile diabetes; a modified training set, consisting of differences between test results taken at different times, was also used to build such classifiers. Supervised approaches (decision trees) were compared with unsupervised approaches, and both types of classifiers were evaluated. The system selects the test most likely to confirm a diagnosis based on the pre-test probability computed from the patient's information, including symptoms and the results of previous tests. If the patient's post-test disease probability is higher than the treatment threshold, a diagnostic decision is made; otherwise the patient needs more tests, and the system recommends the next optimal test and repeats the process. This thesis determines which approach performs better on the diabetes dataset in the Weka framework, and also uses feature selection techniques to reduce the number of features and the complexity of the process.
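The pre-test/post-test probability reasoning described above can be sketched with a likelihood-ratio update. The sensitivity, specificity, and treatment threshold below are hypothetical numbers, not values from the thesis:

```python
# Bayesian pre-test -> post-test probability update via likelihood ratios.
# Sensitivity, specificity, and the threshold are hypothetical values.
def post_test_probability(pre, sensitivity, specificity, positive=True):
    if positive:
        lr = sensitivity / (1 - specificity)        # likelihood ratio (+)
    else:
        lr = (1 - sensitivity) / specificity        # likelihood ratio (-)
    odds = pre / (1 - pre)
    post_odds = odds * lr
    return post_odds / (1 + post_odds)

pre = 0.20                       # pre-test probability from symptoms/history
post = post_test_probability(pre, sensitivity=0.9, specificity=0.8)
treatment_threshold = 0.5        # hypothetical decision threshold
print(round(post, 3), post > treatment_threshold)
```

When the post-test probability stays below the threshold, the loop described above would recommend another test and update again.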
1 springer format chronic changed edit iqbal qc (IAES IJEECS)
In the present generation, many people are affected by kidney diseases. Among them, chronic kidney disease is the most common life-threatening condition, and it can be prevented by early detection. Histological grade in chronic kidney disease provides clinically important prognostic information. Machine learning techniques are therefore applied to information collected from previously diagnosed patients in order to discover the knowledge and patterns needed for precise predictions. A large number of features exist in the raw data, some of which carry little information or introduce error; hence feature selection techniques can be used to retrieve a useful subset of features and to improve computational performance. In this manuscript we use a set of filter and wrapper methods, followed by bagging and boosting models with parameter tuning, to classify chronic kidney disease. The capabilities of the bagging and boosting classifiers are compared, and the ensemble classifier that attains high stability with the most promising results is identified.
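A filter method, as mentioned above, scores features independently of any classifier. The sketch below ranks features by absolute Pearson correlation with the class label; the feature names, values, and labels are invented for illustration.

```python
# Minimal filter-style feature selection: score each feature by absolute
# Pearson correlation with the label and keep the top k. Toy data only.
def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb) if sa and sb else 0.0

def select_k_best(X, y, k):
    cols = list(zip(*X))
    scores = [abs(pearson(list(col), y)) for col in cols]
    return sorted(range(len(cols)), key=lambda i: scores[i], reverse=True)[:k]

# Rows: (serum_creatinine, age, random_noise); label: CKD yes/no (toy data)
X = [(1.2, 60, 5), (4.5, 62, 9), (0.9, 35, 2), (5.1, 70, 7), (1.0, 41, 8)]
y = [0, 1, 0, 1, 0]
print(select_k_best(X, y, k=2))   # indices of the two most label-correlated features
```

A wrapper method would instead re-train the classifier on candidate subsets; the filter step shown here is the cheap first pass.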
Metabolic associated fatty liver disease and continuous time Markov chains (iman773407)
This is a dataset of patients suffering from non-alcoholic fatty liver disease. The data are artificial, constructed to illustrate the design of the longitudinal study and the statistical analysis of the results.
Application of Ancient Indian Agricultural Practices in Cloud Computing Envir... (MangaiK4)
Abstract - About 70% of the people of India live in rural areas and still depend on agriculture, so transforming agriculture through technology is very important in today's Indian agricultural system. In olden days farmers in India depended on the clouds for rain; today they are looking towards cloud computing, through which they get knowledge for cultivating better crops, expert advice on crop diseases and their causes, interactive learning services, dynamic storage, real-time question answering, and much more at a nominal cost. With the evolution of cloud computing and its subsequent popularity, service providers are coming up with easy and affordable solutions for farmers: a farmer simply registers with a service provider such as CloudCow and pays for the agri-services received. In this study we propose an agri-cloud model for rural Indian farmers, so that they can adopt digital farming through cloud computing for better productivity of their agricultural products and to improve their lives.
A Study on Sparse Representation and Optimal Algorithms in Intelligent Comput... (MangaiK4)
Abstract - Computer vision is a dynamic research field that involves analyzing, modifying, and achieving a high-level understanding of images. Its goal is to determine what is happening in front of a camera and to use that understanding to control a computer or a robot, or to provide users with new images that are more informative or aesthetically pleasing than the original camera images. It uses many advanced image representation techniques to achieve computational efficiency. Sparse signal representation techniques have had a significant impact in computer vision, where the goal is to obtain a compact, high-fidelity representation of the input signal and to extract meaningful information. Segmentation and optimal parallel processing algorithms are expected to further improve efficiency and processing speed.
Similar to A Tentative analysis of Liver Disorder using Data Mining Algorithms J48, Decision Table and Naive Bayes
High Speed Low-Power Viterbi Decoder Using Trellis Code Modulation (MangaiK4)
Abstract - High-speed, low-power Viterbi decoders for trellis-coded modulation are well known for their delay cost in underwater communication. In transmission systems, wireless communication is the transfer of information between two or more points that are not connected by an electrical conductor. WiMAX is a wireless communication standard designed to provide data rates of 30 to 40 megabits per second. As a standards-based technology, WiMAX enables the delivery of last-mile wireless broadband access as an alternative to cable and DSL, and can provide home or mobile internet access across whole cities or countries. Address generation in WiMAX is carried out by the interleaver and deinterleaver. Interleaving is used to overcome correlated channel noise such as burst errors or fading: the interleaver/deinterleaver rearranges the input data so that consecutive data are spaced apart, and interleaved memory improves the speed of memory access. The Viterbi technique reduces the bit error rate and the delay in WiMAX.
Relation Extraction using Hybrid Approach and an Ensemble Algorithm (MangaiK4)
Abstract: The amount of information stored in digital form has grown gigantically in this electronic era. Extracting heterogeneous entities from biomedical documents is a big challenge, as achieving a high F-score remains a problem. Text mining plays a vital role in extracting different types of entities and in finding relationships between entities in huge collections of biomedical documents. We propose a hybrid approach that combines a dictionary-based approach and a rule-based approach to reduce the number of false negatives, together with the Random Forest algorithm to improve the F-score.
Saturation Throughput and Delay Aware Asynchronous Noc Using Fuzzy Logic (MangaiK4)
Abstract - Due to scaling trends in modern semiconductor design, critical effects such as voltage spikes, temperature, and the interconnect between IC circuits must be handled in the design. Saturation throughput and average message delay are used as the performance metrics. We introduce a fuzzy-logic-based asynchronous NoC that attains high throughput and reduces the delay between logic circuits by selecting less congested paths, producing load balance in the network especially under realistic traffic loads. Our fuzzy-logic-based routing algorithm is adaptive, low cost, and scalable, and it outperforms other adaptive routing algorithms in average delay and saturation throughput for various traffic patterns. STDR can achieve average message delays 12%–32% lower than those of other routing algorithms, and the proposed scheme improves saturation throughput by 11%–82% compared with other adaptive routing algorithms.
Abstract - A password is a sequence of characters used to determine whether a user is authenticated. Nowadays most passwords are text-based. Since text-based passwords are hard to remember, people tend to use simple, memorable passwords such as pet names or phone numbers, which are easy for intruders to break. The main idea behind this paper is to replace text-based passwords with image-based passwords encrypted using the RSA algorithm. Our experimental results show that image passwords are easy to remember and better than text passwords.
A Framework for Desktop Virtual Reality Application for Education (MangaiK4)
Abstract: Contemporary practice in education encourages students to gain exposure to the real world through field visits to sites, in addition to conventional textbooks and lectures. This exposure helps students experience real-world situations and integrate the experience into the knowledge learned in class, which is important for students in disciplines such as engineering, architecture, and transportation. Students, however, have limited on-site access due to safety concerns, cost, and effort. To address these issues, Virtual Reality (VR) applications have been developed and implemented. With the growth in the number of VR applications, there is currently a lack of information about the design issues of VR applications from the standpoint of integrating the different types of information associated with the real world. This paper aims to bridge that gap by evaluating VR applications with respect to these issues and highlighting the lessons learned from the evaluations. The results demonstrate that a VR application that links different sources of information (as developed in this paper) promotes better learning than conventional printed materials, and that students perceived it positively as a valuable complement to a physical field visit. Design recommendations for the development of similar VR learning applications are also discussed.
Smart Security For Data Sharing In Cloud Computing (MangaiK4)
Abstract - In today's computational world, security is a major problem, because the number of hackers is growing day by day and attackers mostly strike while data is being shared or sent. Here we use PaaS, i.e. Windows Azure, as the storage, which is one of the services in the cloud. To secure the data we apply the RSA (encryption and decryption) algorithm, one of the security mechanisms for protecting private data in the cloud.
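A textbook-scale sketch of the RSA encrypt/decrypt idea mentioned above, using tiny primes for readability; real deployments use keys of 2048+ bits with padding, and nothing here reflects the paper's actual implementation.

```python
# Textbook RSA with tiny primes, only to illustrate the encrypt/decrypt idea.
# Requires Python 3.8+ for the modular-inverse form of pow().
p, q = 61, 53
n = p * q                     # modulus: 3233
phi = (p - 1) * (q - 1)       # Euler totient: 3120
e = 17                        # public exponent, coprime with phi
d = pow(e, -1, phi)           # private exponent: modular inverse of e mod phi

message = 65                  # a message encoded as an integer < n
cipher = pow(message, e, n)   # encrypt with the public key (e, n)
plain = pow(cipher, d, n)     # decrypt with the private key (d, n)
print(cipher, plain)
```

Only (d, n) can undo the encryption, which is what lets the public key (e, n) be shared with the cloud provider.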
BugLoc: Bug Localization in Multi Threaded Application via Graph Mining Approach (MangaiK4)
Abstract - Detecting software bugs, their occurrences, and their root causes is a very difficult process in large multi-threaded applications. A software developer or organization must identify the bugs in their applications and remove or overcome them, so that the application is protected from malfunctioning. Many compilers and integrated development environments effectively identify errors and bugs while an application is compiling or running, but they fail to detect the actual cause of bugs in running applications, forcing the developer to reframe or recreate the package without the bugs, which wastes time and effort in the software development life cycle. Graph mining techniques can be used to detect software bugs, but they bring their own problems: managing large graph data, processing nodes with links, and processing subgraphs. This paper presents a novel algorithm named BugLoc that detects bugs in multi-threaded software applications. BugLoc uses an object template to store graph data, which reduces graph management complexity, and a substring analysis method to detect frequent subgraphs; it then analyses the frequent subgraphs to locate the software bugs exactly. The experimental results show that the algorithm is efficient, accurate, and scalable for large graph datasets.
Motion Object Detection Using BGS TechniqueMangaiK4
Abstract--- The detection of moving object is an important in many applications such as a vehicle identification in a traffic monitoring system,human detection in a crime branch.In this paper we identify a vehicle in a video sequence.This paper briefly explain the detection of moving vehicle in a video.We introduce a new algorithm BGS for idntifying vehicle in a video sequence.First, we differentiate the foreground from background in frames by learning the background.Then, the image is divided into many small nonoverlapped frames. The candidates of the vehicle part can be found from the frames if there is some change in gray level between the current image and the background.The extracted background subtraction method is used in subsequent analysis to detect a vehicle and classify moving vehicle
Encroachment in Data Processing using Big Data TechnologyMangaiK4
Abstract—The nature of big data is now growing and information is present all around us in different kind of forms. The big data information plays crucial role and it provides business value for the firms and its benefits sectors by accumulating knowledge. This growth of big data around all the concerns is high and challenge in data processing technique because it contains variety of data in enormous volume. The tools which are built on the data mining algorithm provides efficient data processing mechanisms, but not fulfill the pattern of heterogeneous, so the emerging tools such like Hadoop MapReduce, Pig, SPARK, Cloudera, Impala and Enterprise RTQ, IBM Netezza and Apache Giraphe as computing tools and HBase, Hive, Neo4j and Apache Cassendra as storage tools useful in classifying, clustering and discovering the knowledge. This study will focused on the comparative study of different data processing tools, in big data analytics and their benefits will be tabulated.
Performance Evaluation of Hybrid Method for Securing and Compressing ImagesMangaiK4
Abstract—Security is a most important field of research work for sending and receiving of data in secret way over the network.Cryptographyis a method for securing transformation like image, audio, video, text without any hacking problem.Encryption and Decryption are two methods used to secure the data. Image compression technique used to reducing the size of an image for effective data communication. There are variety of algorithms has been proposed in the literature for securing images using encryption/decryption techniques and reduce the size of images using image compression techniques. These techniques still need improvement to overcome issues, challenges and its limitations. Hence in this research work a hybrid method which combines securing image using RSA, hill cipher and 2bit rotation and compressing of images using lossless compression algorithm has been proposed. This method compared to execution time of existing method. This method secures the image and reduces the size of the image for data communication over the internet. This method is suitable for various applications uses images like remote sensing, medical and Spatio-temporal.
A Review on - Data Hiding using Cryptography and SteganographyMangaiK4
Abstract - Security and privacy for a data transmission become a major concern due to rise of internet usage. Many developers are working continuously to make an internet safe environment, but the intruders are very smart to hack the information. For that, two entities communicating need to communicate in a way which is not susceptible to listen in or interception. So every organization uses many data encryption techniques to secure their communication. There are two security mechanisms called, Cryptography and Steganography are being applied. By merging these techniques, two level of information security is achieved. This paper discuss about the way of working Cryptography and Steganography and their different approaches.
Modeling Originators for Event Forecasting Multi-Task Learning in Mil AlgorithmMangaiK4
Abstract - Multiple-instance learning (MIL) is a speculation of supervised learning which tends to the order of bags. Like customary administered adapting, the greater part of the current MIL work is proposed in light of the suspicion that a delegate preparing set is accessible for a legitimate learning of the classifier. To manage this issue, we propose a novel Sphere-Description-Based approach for Multiple-Instance Learning (SDB-MIL). SDB-MIL takes in an ideal circle by deciding a substantial edge among the examples, and in the meantime guaranteeing that every positive sack has no less than one occasion inside the circle and every negative bags are outside the circle. In genuine MIL applications, the negative information in the preparation set may not adequately speak to the dispersion of negative information in the testing set. Thus, how to take in a proper MIL classifier when a delegate preparing set isn't accessible turns into a key test for genuine MIL applications. From the viewpoint of human examiners and approach producers, determining calculations must influence precise expectations as well as give to sup porting proof, e.g., the causal components identified with the occasion of intrigue. We build up a novel different example learning based approach that mutually handles the issue of recognizing proof based originators and conjectures occasions into what's to come. In particular, given a gathering of spilling news articles from different sources we build up a settled various occurrence learning way to deal with figure noteworthy societal occasions, for example, protests. Substantial investigates the benchmark and true MIL datasets demonstrate that SDB-MIL gets measurably preferable arrangement execution over the MIL strategies thought about.
Abstract: The common challenge for machine learning and data mining tasks is the curse of High Dimensionality. Feature selection reduces the dimensionality by selecting the relevant and optimal features from the huge dataset. In this research work, a clustering and genetic algorithm based feature selection (CLUST-GA-FS) is proposed that has three stages namely irrelevant feature removal, redundant feature removal, and optimal feature generation. The performance of the feature selection algorithms are analyzed using the parameters like classification accuracy, precision, recall and error rate. Recently, an increasing attention is given to the stability of feature selection algorithms which is an indicator that requires that similar subsets of features are selected every time the algorithm is executed on the same dataset. This work analyzes the stability of the algorithm on four publicly available dataset using stability measurements Average normal hamming distance(ANHD), Dices coefficient, Tanimoto distance, Jaccards index and Kuncheva index.
The Relationship of Mathematics to the Performance of SHCT it Students in Pro...MangaiK4
Abstract—This correlational study investigates whether Mathematics has a role in the performance of the students in their programming courses. Students’ grade points from their basic mathematics and programming courses were collected and correlated, specifically the grade points of the exiting Advanced-Diploma students from Semester 1 of Academic year 2016-2017 of Shinas College of Technology in Oman. Determining the relationship was analyzed through the use of Pearson r. Hypothesis is tested and the results of the correlation were established.
Marine Object Recognition using Blob AnalysisMangaiK4
Abstract: In this paper, a new method of marine object recognition using blob analysis has been proposed, which is suitable to general objects recognition. A powerful foreground blob analysis is proposed to classify frontal areas. Conventionally, the main focus of the objects is recognized by prepared researchers through towed nets and human perception, which make much cost and hazard administrators and animals. Specific marine objects, box jellyfish and ocean snake, are effectively recognized in this work. Experiments conducted on picture datasets gathered by the Australian Institute of Marine Science (AIMS) demonstrate the adequacy of the proposed strategy.
Implementation of High Speed OFDM Transceiver using FPGAMangaiK4
Abstract - Proficient, multi mode and re-configurable architecture of interleaver/de-interleaver for multiple standards, like DVB, OFDM and WLAN is presented. Interleaver plays vital role in 4G technologies to recover symbols from burst errors. The aim of our work is to design a reconfigurable modulation technique called Adaptive modulation scheme uses QAM, QPSK and BPSK modulation that adapt themselves based on channel Signal to Noise ratio. Subcarrier allocation algorithm specifically used to focus on utilizing channels with high gains. Our proposed model can achieves a data rate of min 2.5 Gbps as per 3GPP standard by adaptive modulation technique using QAM, BPSK and QPSK.
Renewable Energy Based on Current Fed Switched Inverter for Smart Grid Applic...MangaiK4
Abstract - Renewable energy is used in the current fed switched inverter for high power production. High voltage support, wide yield ranges of operation, shoot-through resistance are a portion of the desired properties of an inverter for a reliable, versatile and less ripple AC inversion. This paper proposes a single stage, high boost inverter with buck-boost capacity which has a few particular advantages over traditional voltage source inverters (VSI) like better EMI noise, wide input and output voltage range of operation, and so on. The proposed inverter is named as Current-Fed Switched Inverter (CFSI). A renewable energy based converter structure of CFSI has been created which supplies both AC and DC loads, at the same time, from a single DC supply which makes it reasonable for DC smart grid application. This paper proposes the operation and control of a CFSI based converter which directs the AC and DC conversion voltages at their reference. The advancement of the proposed converter from essential current fed DC/DC topology is explained. The closed loop controller is verified by using the MATLAB/ Simulink environment.
Detection of Nutrient Deficiencies in Plant Leaves using Image ProcessingMangaiK4
Abstract—In this paper, we suggest a model for the automatic detection and classification of nutrient deficiencies in plant leaves. In an agricultural country like India, farmers are facing lot of problems in detecting the causes for the diseases in plants. Only when the causes are sorted out, the solution can be found to treat them. With naked-eye observation it is difficult to classify the deficiency present in leaves. So with the help of image processing algorithms, we have proposed a model to detect the type of deficiencies in the leaves. The color and texture features are used to recognize and classify the deficiencies. The combinations of features prove to be very effective in deficiency detection. This paper presents an effective method for detection of nutrient deficiencies in leaves using color-texture analysis and k-means clustering.
Instructions for Submissions thorugh G- Classroom.pptxJheel Barad
This presentation provides a briefing on how to upload submissions and documents in Google Classroom. It was prepared as part of an orientation for new Sainik School in-service teacher trainees. As a training officer, my goal is to ensure that you are comfortable and proficient with this essential tool for managing assignments and fostering student engagement.
Operation “Blue Star” is the only event in the history of Independent India where the state went into war with its own people. Even after about 40 years it is not clear if it was culmination of states anger over people of the region, a political game of power or start of dictatorial chapter in the democratic setup.
The people of Punjab felt alienated from main stream due to denial of their just demands during a long democratic struggle since independence. As it happen all over the word, it led to militant struggle with great loss of lives of military, police and civilian personnel. Killing of Indira Gandhi and massacre of innocent Sikhs in Delhi and other India cities was also associated with this movement.
The French Revolution, which began in 1789, was a period of radical social and political upheaval in France. It marked the decline of absolute monarchies, the rise of secular and democratic republics, and the eventual rise of Napoleon Bonaparte. This revolutionary period is crucial in understanding the transition from feudalism to modernity in Europe.
For more information, visit-www.vavaclasses.com
How to Make a Field invisible in Odoo 17Celine George
It is possible to hide or invisible some fields in odoo. Commonly using “invisible” attribute in the field definition to invisible the fields. This slide will show how to make a field invisible in odoo 17.
Welcome to TechSoup New Member Orientation and Q&A (May 2024).pdfTechSoup
In this webinar you will learn how your organization can access TechSoup's wide variety of product discount and donation programs. From hardware to software, we'll give you a tour of the tools available to help your nonprofit with productivity, collaboration, financial management, donor tracking, security, and more.
The Art Pastor's Guide to Sabbath | Steve ThomasonSteve Thomason
What is the purpose of the Sabbath Law in the Torah. It is interesting to compare how the context of the law shifts from Exodus to Deuteronomy. Who gets to rest, and why?
How to Create Map Views in the Odoo 17 ERPCeline George
The map views are useful for providing a geographical representation of data. They allow users to visualize and analyze the data in a more intuitive manner.
We all have good and bad thoughts from time to time and situation to situation. We are bombarded daily with spiraling thoughts(both negative and positive) creating all-consuming feel , making us difficult to manage with associated suffering. Good thoughts are like our Mob Signal (Positive thought) amidst noise(negative thought) in the atmosphere. Negative thoughts like noise outweigh positive thoughts. These thoughts often create unwanted confusion, trouble, stress and frustration in our mind as well as chaos in our physical world. Negative thoughts are also known as “distorted thinking”.
How to Split Bills in the Odoo 17 POS ModuleCeline George
Bills have a main role in point of sale procedure. It will help to track sales, handling payments and giving receipts to customers. Bill splitting also has an important role in POS. For example, If some friends come together for dinner and if they want to divide the bill then it is possible by POS bill splitting. This slide will show how to split bills in odoo 17 POS.
Chapter 3 - Islamic Banking Products and Services.pptx
A Tentative analysis of Liver Disorder using Data Mining Algorithms J48, Decision Table and Naive Bayes
1. Integrated Intelligent Research (IIR) International Journal of Computing Algorithm
Volume: 06 Issue: 01 June 2017 Page No.37-40
ISSN: 2278-2397
A Tentative Analysis of Liver Disorder using Data Mining Algorithms J48, Decision Table and Naive Bayes
P. Kuppan¹, N. Manoharan²
¹Assistant Professor, Dept of Computer Science, Thiruvalluvar University College of Arts and Science, Thennangur, Vandavasi.
²Head and Assistant Professor, Dept of Computer Science, Thiruvalluvar University College of Arts and Science, Thennangur, Vandavasi.
email: kuppan.anj@gmail.com, mano_n@rediffmail.com
Abstract — Nowadays, data mining plays a crucial role in the healthcare field for disease prediction. Data mining is the process of extracting useful information from huge data sets. Medical data is extremely voluminous, which makes disease prediction challenging for the investigator. To overcome this, researchers use data mining techniques such as classification, clustering, and association rules. The main objective of this research work is to predict liver disease based on common attributes, namely intake of alcohol, smoking, obesity, diabetes, consumption of contaminated food, and family history of liver disease, using classification algorithms. The algorithms employed in this work are J48, Decision Table, and Naive Bayes. These classification algorithms are compared based on the performance factors of accuracy and execution time, and the experimental results identify the better classifier for predicting liver disease.
Keywords — Data mining; classification; supervised classification; liver disorder; J48; Naïve Bayes
I. INTRODUCTION
The liver is the largest internal organ of the human body. It performs more than 500 different functions, including digestion, metabolism, storing nutrients, and removing toxins and waste products, e.g. the decomposition of red blood cells [7]. There are many disorders of the liver that need clinical care by a physician or other healthcare professional, and the records of patients' clinical tests and other attributes are available electronically. The purpose of finding patterns in a medical database is to identify those patients who share common attributes: intake of alcohol, smoking, obesity, diabetes, consumption of contaminated food, and family history of liver disease. In data mining, classification techniques are very popular for medical diagnosis and disease prediction [1]. In this research work, Naïve Bayes, Decision Table, and J48 classifier algorithms are employed for liver disease prediction. There are several liver disorders that require the clinical care of a physician [3]. The main objective of this research work is to predict liver diseases such as cirrhosis, bile duct disease, chronic hepatitis, liver cancer, and acute hepatitis from a liver function test (LFT) dataset using the above classification algorithms.
II. LITERATURE REVIEW
Dhamodharan et al. [3] predicted three major liver diseases, namely liver cancer, cirrhosis, and hepatitis, with the help of distinct symptoms. They used Naïve Bayes and FT tree algorithms for disease prediction, and compared the two algorithms based on classification accuracy. From the experimental results they concluded that Naïve Bayes was the better algorithm, predicting diseases with higher classification accuracy than the other algorithm. Rosalina et al. [13] predicted hepatitis prognosis using a support vector machine (SVM) and a wrapper method. Before the classification process, the wrapper method was used to remove noisy features; feature selection was implemented to minimize noisy or irrelevant data and improve accuracy. From the experimental results they observed an increased accuracy rate together with reduced clinical lab test cost and minimum execution time, achieved by combining the wrapper method and SVM techniques.
Omar S. Solaiman et al. [10] proposed a hybrid classification system for HCV diagnosis using a modified particle swarm optimization (PSO) algorithm and a least squares support vector machine (LS-SVM). Feature vectors were extracted using the principal component analysis (PCA) algorithm. As the LS-SVM algorithm is sensitive to changes in the values of its parameters, the modified PSO algorithm was used to search for the optimal values of the LS-SVM parameters in fewer iterations. The proposed system was implemented and evaluated on the benchmark HCV dataset from the UCI machine learning repository and compared with another classification system that utilized PCA and LS-SVM. The experimental results show that the proposed system obtained higher classification accuracy than the other systems. Karthick et al. [7] applied a soft computing technique for intelligent diagnosis of liver disease, implementing classification and type detection in three phases. In the first phase, they classified liver disease using an Artificial Neural Network (ANN) classification algorithm. In the second phase, they generated classification rules with rough set rule induction using the learn-by-example (LEM) algorithm. In the third phase, fuzzy rules were applied to identify the types of liver disease. Chaitrali S. Dangare et al. [2] analyzed a prediction system for heart disease using a larger number of input attributes. The data mining classification techniques Decision Tree, Naïve Bayes, and Neural Networks were analyzed on a heart disease database, and their performance was compared based on accuracy. The analysis shows that of these three classification models, Neural Networks predicted heart disease with the highest accuracy.
III. ANALYSIS OF LIVER DISORDER
A. Risk Factors
Although alcohol is the most common cause of liver disease, it is not the only factor that can damage the liver. The following factors can also increase the risk of liver damage.
1) Diabetes: Having diabetes increases the risk of liver disease by 50 percent. People who have diabetes as a result of insulin resistance have high levels of insulin in their blood, which promotes abdominal weight gain. This causes the liver to store fat internally, leading to fatty liver disease.
2) Salt intake: High salt intake is known to cause cardiovascular disease, but it can also contribute to fatty liver disease by building up fluid within the liver (water retention) and swelling it.
3) Smoking: Although cigarette smoke has no direct effect on liver function, harmful chemicals in cigarette smoke increase the oxidative stress of the system once they reach the liver, causing irreversible damage to liver cells.
4) Use of nutritional supplements: Dietary or nutritional supplements can increase the production of certain liver enzymes when taken in excess amounts, causing damage.
5) Pesticides and heavy metals: Exposure to chemicals in pesticides, and to heavy metals through vegetables, fruits, and adulterated foods, can also harm the liver. These toxins accumulate in the liver over a long time and cause liver damage.
IV. EXTRACTION OF LIVER DISEASE DATA WAREHOUSE
The liver disorder data warehouse provides the screening information of liver disorder patients. Initially, the data warehouse is pre-processed to make the mining process more efficient. In this paper, the WEKA tool is used to compare the accuracy of data mining algorithms on the liver disease dataset. The pre-processed data warehouse is then classified using WEKA, and the results of supervised machine learning algorithms such as Naive Bayes, Decision Table, and J48 are compared. WEKA is a collection of machine learning algorithms for data mining tasks; an algorithm can be applied directly to a dataset. WEKA contains tools for data classification, association, clustering, and visualization, and it is also suitable for developing new machine learning schemes. This paper concentrates on the classification algorithms Naïve Bayes, Decision Table, and J48.
1. Classification:
Classification here relies on supervised algorithms, which are applied to the input data to learn how the data should be classified. The Classify tab presents the set of available machine learning algorithms. A classification algorithm is generally run multiple times, manipulating its parameters or the input data weights, to improve the accuracy of the classifier. Two learning performance evaluators are included with WEKA: the first simply splits a dataset into training and test data, while the second performs cross-validation using folds. Evaluation is normally described by the accuracy, and the run information is displayed for a quick inspection of how well a classifier works.
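The first evaluator described above can be illustrated with a plain-Python sketch of the percentage-split mode (this is not the WEKA implementation; the dataset and the toy majority-label classifier are invented for the example):

```python
import random

def split_evaluate(data, train_classifier, train_frac=0.66, seed=1):
    """Shuffle the data, split it into training and test portions,
    train a classifier, and report its accuracy on the held-out test set."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    train, test = shuffled[:cut], shuffled[cut:]
    model = train_classifier(train)   # returns a prediction function
    correct = sum(1 for features, label in test if model(features) == label)
    return correct / len(test)

def majority_classifier(train):
    """Toy learner: always predict the most common training label."""
    labels = [label for _, label in train]
    majority = max(set(labels), key=labels.count)
    return lambda features: majority

# Invented toy dataset: a feature tuple and a 0/1 label.
data = [((i,), i % 2) for i in range(100)]
acc = split_evaluate(data, majority_classifier)
print(f"percentage-split accuracy: {acc:.2f}")
```

Cross-validation, the second evaluator, repeats the same split-train-test cycle over k folds and averages the resulting accuracies.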
2. Multiple machine learning algorithms:
The key motivation for using several supervised machine learning algorithms is accuracy improvement. Different algorithms use different rules for generalization and different representations of the knowledge; therefore, they tend to err on different parts of the instance space. The combined use of different algorithms can thus correct the individual, uncorrelated errors. Accordingly, the error rate and the time taken by each algorithm are compared across the algorithms.
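The combination of algorithms with uncorrelated errors is often realised as simple majority voting. The paper compares its classifiers separately, but a hypothetical sketch of voting (with toy threshold classifiers invented for illustration) shows the idea:

```python
from collections import Counter

def majority_vote(classifiers, x):
    """Combine several prediction functions by taking the most common label."""
    votes = [clf(x) for clf in classifiers]
    return Counter(votes).most_common(1)[0][0]

# Three toy classifiers that err on different parts of the input space;
# the true rule is "label 1 when x > 5".
clf_a = lambda x: 1 if x > 3 else 0   # wrong for x in (3, 5]
clf_b = lambda x: 1 if x > 5 else 0   # the correct rule
clf_c = lambda x: 1 if x > 7 else 0   # wrong for x in (5, 7]

print(majority_vote([clf_a, clf_b, clf_c], 6))  # two of three vote 1 -> 1
print(majority_vote([clf_a, clf_b, clf_c], 4))  # two of three vote 0 -> 0
```

Because the two faulty classifiers err on disjoint intervals, the vote recovers the correct label wherever at least two of the three agree with the true rule.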
V. RESULTS AND DISCUSSION
1. Classify
The user can apply a variety of algorithms to the data set; the idea is to build a representation of the data that makes observation easier. It is difficult to identify in advance which of the options will produce the best output for the experiment, so the most effective approach is usually to independently apply a combination of the available choices and see which yields something close to the desired results. The Classify tab is where the user selects the classifier options. Within the Classify tab there are again several choices. The test options give the user the choice of four different test modes on the data set: 1. Use training set, 2. Supplied test set, 3. Cross-validation, 4. Percentage split.
2. Split percentage
Any or all of these modes can be applied to produce results that can be compared by the user. Additionally, within the test-options box there is a dropdown menu from which the user can choose various output options, such as saving the results to a file or specifying the random seed value used for the classification.
3. Cluster
The Cluster tab opens the methods used to determine commonalities, or clusters of occurrences, within the data set and to produce information for the user to analyse. The cluster window offers a few options similar to those in the Classify tab: use training set, supplied test set, and percentage split. The fourth option is classes-to-clusters evaluation, which compares how well the clusters match a pre-assigned class within the data. This is often helpful for noticing specific attributes that push the results out of range, and for large data sets.
4. Associate
The Associate tab opens a screen for choosing the options for finding associations within the data set. The user selects one of several choices, and the process begins to yield the results.
5. Select Attributes
This tab can be used to choose the particular attributes used in the calculation. By default, every available attribute is used in the analysis of the data set. If the user wishes to exclude certain parts of the data, they can deselect the particular choice from the list in the window. This is helpful when some attributes are of a different type, for instance alphanumeric data that may distort the results. The application searches through the chosen attributes to decide which of them best fit the required calculation. To execute this, the user has to choose two options: an attribute evaluator and a search method. Once this is done, the program evaluates the data on the basis of the subgroup of attributes and then performs the necessary search for commonality within the data.
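Attribute evaluators of this kind commonly rank attributes by information gain, the same criterion that drives J48's splits. A minimal sketch over an invented toy dataset:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def info_gain(rows, attr_index):
    """Entropy reduction obtained by splitting on a discrete attribute."""
    base = entropy([label for _, label in rows])
    by_value = {}
    for features, label in rows:
        by_value.setdefault(features[attr_index], []).append(label)
    remainder = sum(len(g) / len(rows) * entropy(g) for g in by_value.values())
    return base - remainder

# Toy rows: (alcohol habit, gender) -> disorder label.
rows = [(("yes", "m"), 1), (("yes", "f"), 1), (("no", "m"), 0), (("no", "f"), 0)]
print(info_gain(rows, 0))  # 1.0: attribute 0 perfectly separates the labels
print(info_gain(rows, 1))  # 0.0: attribute 1 carries no information
```

An evaluator then keeps the highest-gain attributes and discards the uninformative ones before classification.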
6. Visualization
The last tab in the window is the Visualization tab. By this point, calculations and comparisons have been performed on the data set, and the choices of attributes and methods of manipulation have been made. The final piece of the puzzle is looking at the information that has been derived, and the user can now begin to see the fruit of those efforts in a two-dimensional representation of the data. The first screen shown when the visualization option is selected is a matrix of plots of each attribute in the data set plotted against the other attributes; if necessary, a scroll bar allows all of the generated plots to be viewed. The user can select a particular plot from the matrix to inspect its contents for analysis. The grid pattern of plots allows the user to arrange the attribute positioning to their liking for better understanding. Once a specific plot has been selected, the user can switch the attributes from one view to another, providing flexibility.
VI. EXPERIMENTAL SETUP
Naïve Bayes
A Naïve Bayes classifier is a simple probabilistic classifier based on applying Bayes' theorem with a strong independence assumption; a more descriptive term for the underlying probability model would be the independent feature model. In basic terms, a Naïve Bayes classifier assumes that the presence of a particular feature of a class is unrelated to the presence of any other feature [11]. The Naïve Bayes classifier performs reasonably well even when the underlying assumption is not true.
An advantage of the Naïve Bayes classifier is that it requires only a small amount of training data to estimate the means and variances of the variables necessary for classification: because the variables are assumed independent, only the variance of each variable for each label needs to be determined, not the entire covariance matrix. In contrast to the basic Naïve Bayes operator, the Naïve Bayes (Kernel) operator can be applied to numerical attributes in a straightforward fashion using kernel density estimation and Bayes' theorem.
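As a minimal sketch of that idea (plain Python with invented toy data, not the WEKA operator), the mean and variance of each attribute are estimated per class and combined through Bayes' theorem:

```python
import math
from collections import defaultdict

def train_gaussian_nb(rows):
    """Estimate class priors plus per-class means and variances per attribute."""
    by_class = defaultdict(list)
    for features, label in rows:
        by_class[label].append(features)
    total = sum(len(v) for v in by_class.values())
    model = {}
    for label, vecs in by_class.items():
        cols = list(zip(*vecs))
        means = [sum(c) / len(c) for c in cols]
        varis = [sum((v - m) ** 2 for v in c) / len(c) + 1e-9  # smoothing
                 for c, m in zip(cols, means)]
        model[label] = (len(vecs) / total, means, varis)
    return model

def predict(model, x):
    """Choose the class maximising log prior + sum of log Gaussian likelihoods."""
    def log_score(prior, means, varis):
        s = math.log(prior)
        for xi, m, v in zip(x, means, varis):
            s += -0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
        return s
    return max(model, key=lambda label: log_score(*model[label]))

# Invented toy data: (total bilirubin, direct bilirubin) -> class.
rows = [((1.0, 0.2), "normal"), ((1.2, 0.3), "normal"),
        ((6.0, 3.0), "disorder"), ((5.8, 3.2), "disorder")]
model = train_gaussian_nb(rows)
print(predict(model, (1.1, 0.25)))  # -> normal
```

Working in log space avoids numerical underflow when many attribute likelihoods are multiplied together.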
Fig. 1. Output of the Naïve Bayes algorithm.
Fig. 2. Output of the J48 algorithm.
Fig. 3. Output of the Decision Table algorithm.
Table 1. Results for Naïve Bayes, Decision Table and J48

Algorithm       | Accuracy
Naïve Bayes     | 97.5%
Decision Table  | 98.5%
J48             | 98.5%

The results identify the most appropriate classifier for prediction: J48 and Decision Table both give 98.5% prediction accuracy, which is higher than that of the Naïve Bayes algorithm.
Fig. 4. Comparative analysis of the liver disorder results.
VII. CONCLUSION
This research demonstrates data mining based approaches that can be used to assess factors in the liver disorder dataset, and presents an intelligent and effective liver disorder prediction system using classification. Sample data were collected, through liver function testing, from hospital patients with liver disorders. The pre-processed data was classified with the classification algorithms using the WEKA tool, carrying out the main phases of learning, experimentation, and operation. The system thus provides a comparative view of the various disorders and identifies which liver disorder has the highest incidence in the sample. The results show that liver disorder mostly affects people who smoke continuously and have a high intake of alcohol, and that more male patients have liver disorders than female patients. Overall, the survey shows that people between 35 and 65 years of age are the most affected. Among the samples, 26% of people with an alcohol habit, 22% of people with a smoking habit, 4% of obese patients, and 5% of diabetic patients have a liver disorder.