This document describes a proposed system for automatically filling patient forms using term-based personalization of feature selection. The system aims to reduce manual work in hospitals by filling forms automatically during patient handovers between shifts. It does this with a feature selection model that chooses the most relevant features for each term, rather than applying the same features to all terms. This personalized approach mitigates the negative impact of noisy information that arises when features are not tailored to specific terms, streamlining how hospitals acquire and manage patient information.
Improving Prediction Accuracy Results by Using Q-Statistic Algorithm in High ... | rahulmonikasharma
Classification problems in high-dimensional data with a small number of observations have become increasingly common, especially in microarray data. The growing amount of text data on websites also affects cluster analysis: text clustering is an unsupervised technique for partitioning large amounts of data into clusters, and the main drawback affecting it is the presence of uninformative and sparse features in text documents. A broad class of boosting algorithms can be seen as performing coordinate-wise gradient descent to minimize some potential function of the margins of a data set. This paper proposes a novel evaluation measure, Q-statistic, that incorporates the stability of the selected feature set in addition to prediction accuracy, and then proposes Booster, which wraps an FS algorithm and enhances the Q-statistic value of the algorithm it is applied to.
Automatic Query Expansion Using Word Embedding Based on Fuzzy Graph Connectiv... | YogeshIJTSRD
The aim of information retrieval systems is to retrieve information relevant to the query provided. Queries are often vague and uncertain. To improve the system, we propose an Automatic Query Expansion technique that expands the query by adding new terms to the user's initial query, minimizing query mismatch and thereby improving retrieval performance. Most existing query expansion techniques do not take into account the degree of semantic relationship among words. In this paper, the query is expanded by exploring terms that are semantically similar to the initial query terms while also considering the degree of relationship, that is, the "fuzzy membership" between them. The terms that appear most relevant are used in the expanded query and improve the information retrieval process. Experiments conducted on the query set show that the proposed Automatic Query Expansion approach gave higher precision, recall, and F-measure than non-fuzzy edge weights. Tarun Goyal | Ms. Shalini Bhadola | Ms. Kirti Bhatia, "Automatic Query Expansion Using Word Embedding Based on Fuzzy Graph Connectivity Measures", published in International Journal of Trend in Scientific Research and Development (IJTSRD), ISSN: 2456-6470, Volume-5, Issue-5, August 2021. URL: https://www.ijtsrd.com/papers/ijtsrd45074.pdf Paper URL: https://www.ijtsrd.com/computer-science/artificial-intelligence/45074/automatic-query-expansion-using-word-embedding-based-on-fuzzy-graph-connectivity-measures/tarun-goyal
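A minimal sketch of the idea in this abstract: treat cosine similarity between word embeddings as a fuzzy membership degree and expand the query with terms above a threshold. The three-dimensional vectors and the 0.9 threshold are invented for illustration, not taken from the paper.

```python
import math

# made-up toy embeddings; a real system would load trained word vectors
vectors = {
    "car":     [0.9, 0.1, 0.0],
    "vehicle": [0.8, 0.2, 0.1],
    "banana":  [0.0, 0.1, 0.9],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def expand(query_terms, threshold=0.9):
    # cosine similarity plays the role of the fuzzy membership degree
    expanded = list(query_terms)
    for term in query_terms:
        for cand, vec in vectors.items():
            if cand not in expanded and cosine(vectors[term], vec) >= threshold:
                expanded.append(cand)
    return expanded

print(expand(["car"]))  # → ['car', 'vehicle']
```

Only "vehicle" clears the membership threshold for "car", so the expanded query gains one semantically related term while unrelated vocabulary stays out.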
Propose a Enhanced Framework for Prediction of Heart Disease | IJERA Editor
Heart disease diagnosis is a complex task that requires considerable experience. Traditionally, the doctor prescribes a number of medical tests, such as heart MRI, ECG, and stress tests, to examine for heart disease. Today, the healthcare industry holds a huge amount of healthcare data containing hidden information, and effective decisions can be made from it; for appropriate results, advanced data mining techniques are applied to this computer-based information. Artificial neural networks (ANNs) are mathematical techniques used for inference and categorisation in the empirical sciences and can also model real neural networks; mental phenomena such as acting, wanting, knowing, remembering, perceiving, thinking, and inferring can be understood using ANN theory. In this paper, classification techniques, namely the Naive Bayes algorithm and artificial neural networks, are used to classify the attributes in the given data set, while attribute-filtering techniques, PCA (Principal Component Analysis) and Information Gain Attribute Subset Evaluation, perform feature selection on the data set for predicting heart disease symptoms. A new framework is proposed based on these techniques: the input dataset is fed into a feature-selection block, which picks whichever technique yields the fewest attributes; classification is then done with the two algorithms, and the attributes selected by both classification tasks are used for the prediction of heart disease.
This framework reduces the time needed to predict heart disease symptoms and lets the user see which attributes are important under the proposed framework.
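The framework's selection step can be sketched as follows. The selector functions and attribute names below are illustrative stubs, not the paper's actual PCA or information-gain implementations; the point is only the "keep whichever method returns fewer attributes" rule.

```python
def pca_like_selector(attrs):
    return attrs[:4]          # stub: pretend PCA keeps 4 components

def info_gain_selector(attrs):
    return attrs[:6]          # stub: pretend info-gain keeps 6 attributes

def choose_subset(attrs, selectors):
    # the framework picks the method yielding the fewest attributes
    return min((s(attrs) for s in selectors), key=len)

# hypothetical heart-disease attribute names
attrs = ["age", "sex", "cp", "trestbps", "chol", "fbs", "restecg", "thalach"]
subset = choose_subset(attrs, [pca_like_selector, info_gain_selector])
print(subset)  # → ['age', 'sex', 'cp', 'trestbps']
```

The chosen subset would then be handed to both classifiers (Naive Bayes and the ANN) for the final prediction step.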
Identification of important features and data mining classification technique... | IJECEIAES
Employee absenteeism at work costs organizations billions a year. Predicting employees' absenteeism and the reasons behind their absence helps organizations reduce expenses and increase productivity. Data mining turns the vast volume of human resources data into information that can support decision-making and prediction. Although feature selection is a critical step in data mining for improving the quality of the final prediction, it is not yet known which feature selection method is best. This paper therefore compares the performance of three well-known feature selection methods in absenteeism prediction: relief-based, correlation-based, and information-gain feature selection. It also aims to find the best combination of feature selection method and data mining technique for improving absenteeism prediction accuracy. Seven classification techniques were used as prediction models, and cross-validation was applied to make the assessment more realistic and reliable. The dataset was built at a courier company in Brazil from records of absenteeism at work. In the experiments, correlation-based feature selection surpassed the other methods on the performance measurements, and the bagging classifier was the best-performing data mining technique when features were selected with correlation-based feature selection, with an accuracy rate of 92%.
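As a rough illustration of correlation-based selection (a simplified stand-in for the CFS method named in the abstract, not its exact merit function), one can rank features by absolute Pearson correlation with the target and keep the top k:

```python
import math

def pearson(xs, ys):
    # plain Pearson correlation coefficient
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def select_top_k(rows, target, k):
    # score each feature column by |correlation with target|, keep top k
    n_features = len(rows[0])
    scores = []
    for j in range(n_features):
        col = [r[j] for r in rows]
        scores.append((abs(pearson(col, target)), j))
    scores.sort(reverse=True)
    return [j for _, j in scores[:k]]

# toy data: feature 0 tracks the target, feature 1 is noise
rows = [[1, 5], [2, 3], [3, 9], [4, 1], [5, 7]]
target = [1, 2, 3, 4, 5]
print(select_top_k(rows, target, 1))  # → [0]
```

Full CFS also penalizes redundancy between the selected features; the ranking above captures only the relevance half of that trade-off.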
Controlling informative features for improved accuracy and faster predictions... | Damian R. Mingle, MBA
Identification of suitable biomarkers for accurate prediction of phenotypic outcomes is a goal for personalized medicine. However, current machine learning approaches are either too complex or perform poorly.
For more information:
http://societyofdatascientists.com/controlling-informative-features-for-improved-accuracy-and-faster-predictions-in-omentum-cancer-models/?src=slideshare
Classification problems in high-dimensional data with a small number of observations are becoming common, particularly in microarray data. Over the last two decades, many efficient classification models and Feature Selection (FS) algorithms, also referred to as FS techniques, have been proposed to achieve higher prediction accuracy. However, in high-dimensional data, the prediction accuracy of an FS algorithm can be unstable over variations in the training set. In this paper we present a new evaluation measure, Q-statistic, that incorporates the stability of the selected feature subset in addition to prediction accuracy. We then propose Booster of an FS algorithm, which boosts the value of the Q-statistic of the algorithm it is applied to. A study on synthetic data and 14 microarray data sets shows that Booster improves not only the value of the Q-statistic but also the prediction accuracy of the applied algorithm.
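The paper's Q-statistic is not reproduced here, but the notion of selection stability it builds on can be illustrated simply: average the pairwise Jaccard similarity between the feature subsets an FS algorithm picks on different resamples of the training data. The gene names and subsets below are made up.

```python
from itertools import combinations

def stability(subsets):
    # mean pairwise Jaccard similarity of the selected feature subsets;
    # 1.0 means the algorithm picked identical subsets on every resample
    pairs = list(combinations(subsets, 2))
    if not pairs:
        return 1.0
    sims = [len(a & b) / len(a | b) for a, b in pairs]
    return sum(sims) / len(sims)

# subsets selected on three hypothetical resamples of a microarray data set
runs = [{"g1", "g2", "g3"}, {"g1", "g2", "g4"}, {"g1", "g2", "g3"}]
print(round(stability(runs), 3))  # → 0.667
```

A measure like the Q-statistic combines this kind of stability signal with the algorithm's prediction accuracy, so an FS method is rewarded for being both accurate and consistent.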
Petri Net Based Reliable Work Flow Framework for Nephrology Unit in Hospital ... | rahulmonikasharma
The 21st century has witnessed a revolution in Biology and Medicine that has radically changed the way health, diagnosis, prognosis, etc., of a disease is monitored nowadays. Accordingly, hospital redesign, workforce planning and scheduling, patient flow, performance management, disease monitoring, and health care technology assessment need to be modeled efficiently. Mathematical modeling and computer simulation techniques have been shown to be increasingly valuable in providing useful information to aid planning and management. Petri Net (PN) is considered as a powerful model since it combines well-defined mathematical theory with a graphical representation which reflects the dynamic behavior of systems of interest. Due to dynamic characteristics, it is found to be more suitable for modeling Hospital Management System (HMS). In this paper, a Petri net model-based reliable workflow framework for Nephrology unit in hospital environment is proposed to track the movement of patients in the unit. The key objective of the proposed reliable workflow framework is to provide a well-organized health care unit to reduce the waiting time of the resource/patient. The performance of the proposed Petri net model-based reliable workflow framework is simulated and validated through reachability graph using HPSim tool. The proposed Petri net workflow framework for the Nephrology unit can be used to deliver highly efficient and reliable healthcare services.
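A toy illustration of the Petri-net firing rule that such workflow models are built on (this is not the paper's HPSim model; the place and transition names are invented): a transition is enabled when every input place holds a token, and firing it moves tokens from the inputs to the outputs.

```python
def fire(marking, transition):
    # marking: place -> token count; transition: (input places, output places)
    inputs, outputs = transition
    if all(marking.get(p, 0) >= 1 for p in inputs):
        new = dict(marking)
        for p in inputs:
            new[p] -= 1
        for p in outputs:
            new[p] = new.get(p, 0) + 1
        return new
    return None  # transition not enabled in this marking

# a patient moves from the waiting room to consultation when a doctor is free
t_start = (["waiting", "doctor_free"], ["in_consult"])
m0 = {"waiting": 1, "doctor_free": 1, "in_consult": 0}
m1 = fire(m0, t_start)
print(m1)  # → {'waiting': 0, 'doctor_free': 0, 'in_consult': 1}
```

Enumerating all markings reachable by repeatedly applying this rule yields the reachability graph the paper uses to validate the nephrology workflow.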
Biometric Identification and Authentication Providence using Fingerprint for ... | IJECEIAES
The rise in recent security incidents in cloud computing makes securing data a challenge. To address this, this paper presents mobile biometric authentication in cloud computing, integrating mobile devices with the cloud. Since mobile cloud computing is popular among mobile users, biometric authentication is used to enhance security. The paper examines how mobile cloud computing (MCC) handles this security issue with a fingerprint biometric authentication model. From the fingerprint biometric, a secret code is generated using an entropy value, which enables a person to request access to the data on the desktop computer. When the person requests access from the authorized user via Bluetooth on a mobile device, the authorized user grants access through the fingerprint secret code. Finally, the fingerprint is verified against the database on the desktop computer; if it matches, the requester can access the computer.
A comprehensive study on disease risk predictions in machine learning | IJECEIAES
Over recent years, multiple disease risk prediction models have been developed. These models use various patient characteristics to estimate the probability of outcomes over a certain period of time and hold the potential to improve decision making and individualize care. Discovering hidden patterns and interactions in medical databases, alongside growing evaluation of disease prediction models, has become crucial; traditional clinical findings require many trials, which can complicate disease prediction. A comprehensive study of the different strategies used to predict disease is presented in this paper. Applying these techniques to healthcare data has improved risk prediction models for identifying the patients who would benefit from disease management programs, reducing hospital readmission and healthcare cost, but the results of these endeavors have been mixed.
Topic: Critical review of an ERP post-implementation Article (Grade Mark: Distinction of 79%)
Module: Research Principles and Practices
Sheffield Hallam University
For the agriculture sector, detecting and identifying plant diseases at an early stage is extremely important and still very challenging. Machine learning, an application of AI, helps us achieve this effectively: it uses a group of algorithms to analyze and interpret data, learn from it, and make smart decisions. For this project, a dataset containing healthy and diseased plant leaf images is used; image processing then extracts the features of each image. We model this dataset with different machine learning algorithms such as Random Forest, Support Vector Machine, and Naïve Bayes. The aim is to carry out a comparative study to identify which of these algorithms can predict diseases with the utmost accuracy. We compare factors such as precision, accuracy, error rate, and prediction time across the algorithms, and from these comparisons draw conclusions for the project.
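The comparison loop described above can be sketched as follows. Random Forest, SVM, and Naive Bayes need a real ML library, so the two classifiers here are deliberately simple stand-ins (majority class and 1-nearest-neighbour); the point is measuring accuracy and prediction time per model on a held-out split.

```python
import time
from collections import Counter

def majority_class(train_X, train_y, test_X):
    # baseline: always predict the most frequent training label
    pred = Counter(train_y).most_common(1)[0][0]
    return [pred] * len(test_X)

def nearest_neighbour(train_X, train_y, test_X):
    # 1-NN by squared Euclidean distance
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [train_y[min(range(len(train_X)), key=lambda i: dist(train_X[i], x))]
            for x in test_X]

def evaluate(clf, train_X, train_y, test_X, test_y):
    start = time.perf_counter()
    preds = clf(train_X, train_y, test_X)
    elapsed = time.perf_counter() - start
    acc = sum(p == t for p, t in zip(preds, test_y)) / len(test_y)
    return acc, elapsed

# tiny made-up "leaf feature" data
train_X = [[0, 0], [0, 1], [1, 0], [1, 1]]
train_y = ["healthy", "healthy", "diseased", "diseased"]
test_X, test_y = [[0, 0.2], [1, 0.8]], ["healthy", "diseased"]

for clf in (majority_class, nearest_neighbour):
    acc, sec = evaluate(clf, train_X, train_y, test_X, test_y)
    print(clf.__name__, acc)
```

In the real project, each stand-in would be replaced by a trained Random Forest, SVM, or Naive Bayes model, with the same accuracy-and-timing harness around them.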
International Journal of Computational Engineering Research (IJCER) | ijceronline
International Journal of Computational Engineering Research (IJCER) is an international, monthly online journal published in English. The journal publishes original research work that contributes significantly to scientific knowledge in engineering and technology.
MACHINE LEARNING ALGORITHMS FOR HETEROGENEOUS DATA: A COMPARATIVE STUDY | IAEME Publication
In the present digital era, a massive amount of data is continuously generated at exceptional and increasing scales. This data has become an important and indispensable part of every economy, industry, organization, business, and individual. Handling these large datasets is one of the major challenges, owing to the heterogeneity of their formats. Efficient data processing techniques are needed to handle heterogeneous data and to meet the computational requirements of processing such huge volumes. The objective of this paper is to review, describe, and reflect on heterogeneous data and the complexity of processing it, as well as the use of machine learning algorithms, which play a major role in data analytics.
ARTIFICIAL INTELLIGENCE BASED DATA GOVERNANCE FOR CHINESE ELECTRONIC HEALTH R... | IJDKP
Electronic health record (EHR) analysis can leverage great insights to improve the quality of human healthcare. However, the low data quality problems of missing values, inconsistency, and errors in the data set severely hinder building robust machine learning models for data analysis. In this paper, we develop a methodology of artificial intelligence (AI)-based data governance to predict the missing values or verify whether the existing values are correct and what they should be when they are wrong. We demonstrate the performance of this methodology through a case study of patient gender prediction and verification. Experimental results show that the deep learning algorithm of convolutional neural network (CNN) works very well according to the testing performance measured by the quantitative metric of F1-score, and it outperforms the support vector machine (SVM) models with different vector representations for documents.
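The F1-score used above to compare the CNN and SVM models combines precision and recall into a single number; here is a minimal computation for a binary label, matching the gender-prediction case study. The label values and predictions are invented for illustration, not the paper's data.

```python
def f1_score(y_true, y_pred, positive="F"):
    # count true positives, false positives, false negatives for one class
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    # harmonic mean of precision and recall
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

y_true = ["F", "M", "F", "F", "M"]
y_pred = ["F", "M", "M", "F", "M"]
print(round(f1_score(y_true, y_pred), 3))  # → 0.8
```

Because it is a harmonic mean, F1 punishes a model that buys precision at the cost of recall (or vice versa), which is why it is preferred over plain accuracy when classes are imbalanced.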
ICU PATIENT DETERIORATION PREDICTION: A DATA-MINING APPROACH | cscpconf
A huge amount of medical data is generated every day, which presents a challenge in analysing
these data. The obvious solution to this challenge is to reduce the amount of data without
information loss. Dimension reduction is considered the most popular approach for reducing
data size and also to reduce noise and redundancies in data. In this paper, we investigate the
effect of feature selection in improving the prediction of patient deterioration in ICUs. We
consider lab tests as features. Thus, choosing a subset of features would mean choosing the
most important lab tests to perform. If the number of tests can be reduced by identifying the
most important tests, then we could also identify the redundant tests. By omitting the redundant
tests, observation time could be reduced and early treatment could be provided to avoid the risk.
Additionally, unnecessary monetary cost would be avoided. Our approach uses state-of-the-art
feature selection for predicting ICU patient deterioration using the medical lab results. We
apply our technique on the publicly available MIMIC-II database and show the effectiveness of
the feature selection. We also provide a detailed analysis of the best features identified by our
approach.
Correlation of artificial neural network classification and nfrs attribute fi...eSAT Journals
Abstract
About 5 to 15% of women of reproductive age face Polycystic Ovarian Syndrome (PCOS), a multifaceted, heterogeneous, and complex disease. Polycystic ovaries, chronic anovulation, and hyperandrogenism lead to long-term consequences such as endometrial hyperplasia, type 2 diabetes mellitus, and coronary disease; insulin resistance together with hypertension, abdominal obesity, dyslipidemia, and hyperinsulinemia constitutes the metabolic syndrome (frequent metabolic traits). These factors underlie the common condition of anovulatory infertility. Computer-based information along with advanced data mining techniques is used to obtain appropriate results. Classification is a classic data mining task, with roots in machine learning. Naïve Bayes, Artificial Neural Networks, Decision Trees, and Support Vector Machines are classification methods in data mining. Feature selection methods involve generation of subsets, evaluation of each subset, criteria for stopping the search, and validation procedures. The characteristics of the search method used are important with respect to the time efficiency of feature selection. PCA (Principal Component Analysis), information gain subset evaluation, fuzzy rough set evaluation, and Correlation-based Feature Selection (CFS) are some of the feature selection techniques; greedy first search, ranker, etc. are search algorithms used in feature selection. In this paper, a new algorithm based on fuzzy neural subset evaluation and an artificial neural network is proposed, which avoids performing classification and feature selection as separate tasks. The algorithm combines neural fuzzy rough subset evaluation and an artificial neural network for better performance than doing the tasks separately.
Keywords: ANN, SVM, PCA, CFS
DATA MINING CLASSIFICATION ALGORITHMS FOR KIDNEY DISEASE PREDICTION IJCI JOURNAL
Data mining is a non-trivial process of identifying valid, novel, potentially useful, and ultimately understandable patterns in data. In short, it can be stated as the extraction of information from a huge database. Data mining plays a vital role in several applications such as business organizations, educational institutions, government sectors, the health care industry, and science and engineering. In the health care industry, data mining is predominantly used for disease prediction. Numerous data mining techniques exist for predicting diseases, namely classification, clustering, association rules, summarization, regression, etc. The main objective of this research work is to predict kidney diseases using classification algorithms such as Naïve Bayes and Support Vector Machine. This research work mainly focused on finding the best classification algorithm based on the performance factors of classification accuracy and execution time. From the experimental results, it is observed that the performance of the SVM is better than the Naïve Bayes classifier algorithm.
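As a minimal sketch of one of the two classifiers compared above, here is a Gaussian Naive Bayes classifier in plain Python. The two measurements and the toy patient rows are hypothetical illustrations, not the paper's dataset or implementation:

```python
import math
from collections import defaultdict

def fit_gaussian_nb(rows, labels):
    """Estimate per-class mean/variance for each feature plus class priors."""
    by_class = defaultdict(list)
    for row, y in zip(rows, labels):
        by_class[y].append(row)
    model = {}
    for y, group in by_class.items():
        stats = []
        for j in range(len(group[0])):
            col = [r[j] for r in group]
            mu = sum(col) / len(col)
            var = sum((v - mu) ** 2 for v in col) / len(col) + 1e-9  # smoothing
            stats.append((mu, var))
        model[y] = (len(group) / len(rows), stats)
    return model

def predict(model, row):
    """Pick the class with the highest log-posterior under a Gaussian likelihood."""
    best, best_lp = None, -math.inf
    for y, (prior, stats) in model.items():
        lp = math.log(prior)
        for v, (mu, var) in zip(row, stats):
            lp += -0.5 * math.log(2 * math.pi * var) - (v - mu) ** 2 / (2 * var)
        if lp > best_lp:
            best, best_lp = y, lp
    return best

# Toy data: two hypothetical kidney-related measurements per patient.
rows = [[1.2, 140], [1.4, 150], [3.9, 210], [4.1, 200]]
labels = [0, 0, 1, 1]   # 1 = disease present
model = fit_gaussian_nb(rows, labels)
print(predict(model, [4.0, 205]))   # → 1
```

The execution-time comparison in the paper matters because Naive Bayes trains in a single pass over the data, whereas SVM training typically involves an iterative optimization.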
Exercises in Measurement and validity For this assignment, you.docxSANSKAR20
Exercises in Measurement and validity
For this assignment, you will be working through questions regarding measurement and validity. Your answers should be written in complete sentences. Some of the answers may require you to show your work.
1. You have just started a new diet program. To mark your progress, you start weighing yourself three times a day. You also notice that each time you weigh yourself in a given day, the number of pounds is different. Based on the rules regarding the scales of measurement, why is it wrong to weigh yourself more than once a day?
2. Your hospital administration has received several phone complaints from patients about rude behavior from registration staff and long wait times to register in both the Dermatology and Audiology Outpatient Clinics. A decision is made to send a patient satisfaction survey to all Outpatient Clinic patients to determine overall patient satisfaction in the hospital’s Clinic setting. The survey developed uses this type of scoring: 1 = strongly disagree and 5 = strongly agree. What type of scale of measurement is this?
3. Your hospital wants to study patients readmitted within 30-days. What measures (e.g. Medicare patients only) would you recommend be included in the study (identify at least 3)? Where would you locate the data elements (e.g. admission records)?
4. Your hospital’s Pharmacy and Therapeutics Committee undertook a quality review of Medication forms from discharges in the first quarter of the year, identified the errors by 5 general categories, and then calculated the percentage of the total errors by category. The results were: Dosage Form 6%, Name Confusion 13%, Communication 19%, Labeling 20%, and Human Factors 42%. As the HIM Director, you are a member of the P&T Committee; the Chair asks you to prepare a graphic display of the error results for Medical Staff review. What is the best choice of a graphic display to present this data to the Medical Staff? And why?
a. Line Graph
b. Bar Graph
c. Pie chart
d. Data Table
5. Provide a definition and example for the following terms:
a. Content validity
b. Construct validity
c. Criterion validity
Business and User Requirements Document Draft
Thanks for your draft report on the EHR project and requirements. There are 3 main parts to cover. Sources of information, departments affected: provide more information about the clinical departments. HIM is not the "most important" department for this system. Clean up some possible writing errors or misunderstandings, too. 5/7. Methods to gather information: glad you mentioned interviews, focus groups, and questionnaires and explained all three. 7/7.
Requirements statements: 3/6. You are not quite understanding what requirements are yet. They are what the system must do. We will get to project implementation tasks later on in the class, such ...
Analysing the power of deep learning techniques over the traditional methods using medicare utilisation and provider data dessiechisomjj4
ARTICLE
Analysing the power of deep learning techniques over the
traditional methods using medicare utilisation and provider data
Varadraj P. Gurupur^a, Shrirang A. Kulkarni^b, Xinliang Liu^a, Usha Desai^c and Ayan Nasir^d
^a Department of Health Management and Informatics, University of Central Florida, Orlando, FL, USA; ^b School of Computer Science and Engineering, Vellore Institute of Technology, Vellore, India; ^c Department of Electronics and Communication Engineering, Nitte Mahalinga Adyanthaya Memorial Institute of Technology, Nitte, Udupi, India; ^d UCF School of Medicine, University of Central Florida, Orlando, FL, USA
ABSTRACT
Deep Learning Technique (DLT) is the sub-branch of Machine Learning (ML) which helps to learn data at multiple levels of representation and abstraction and shows impressive performance on many Artificial Intelligence (AI) tasks. This paper presents a new method to analyse healthcare data using DLT algorithms and associated mathematical formulations. In this study, we first developed a DLT to programme two types of deep learning neural networks, namely: (a) a two-hidden-layer network, and (b) a three-hidden-layer network. The data was analysed for predictability in both of these networks. Additionally, a comparison was also made with simple and multiple Linear Regression (LR). The successful application of this method is demonstrated using a dataset constructed from the 2014 Medicare Provider Utilization and Payment Data. The results indicate a stronger case for using DLTs compared to traditional techniques like LR. Furthermore, it was identified that adding more hidden layers to the neural network constructed for the deep learning analysis did not have much impact on predictability for the dataset considered in this study. Therefore, the experimentation described in this article sets up a case for using DLTs over traditional predictive analytics. The investigators assume that the algorithms described for deep learning are repeatable and can be applied to other types of predictive analysis on healthcare data. The observed results indicate that the DLT was 40% more accurate than the traditional multivariate LR analysis.
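The two-hidden-layer architecture the abstract describes can be sketched as a plain forward pass. This is an illustrative toy (hypothetical tiny weights, 2 units per layer), not the authors' network or training code:

```python
def relu(v):
    """Element-wise rectified linear activation."""
    return [max(0.0, x) for x in v]

def dense(v, weights, bias):
    """One fully connected layer: out[i] = sum_j weights[i][j] * v[j] + bias[i]."""
    return [sum(w * x for w, x in zip(row, v)) + b
            for row, b in zip(weights, bias)]

def two_hidden_layer_net(x, params):
    """Forward pass: input -> hidden1 (ReLU) -> hidden2 (ReLU) -> linear output."""
    (w1, b1), (w2, b2), (w3, b3) = params
    h1 = relu(dense(x, w1, b1))
    h2 = relu(dense(h1, w2, b2))
    return dense(h2, w3, b3)

# Tiny illustrative network: 2 inputs, two hidden layers of 2 units, 1 output.
params = [
    ([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]),   # hidden layer 1
    ([[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0]),    # hidden layer 2 (identity here)
    ([[1.0, 1.0]], [0.0]),                      # linear output layer
]
print(two_hidden_layer_net([2.0, 1.0], params))   # → [2.5]
```

A three-hidden-layer variant simply adds one more `dense` + `relu` step; the paper's finding was that this extra depth did not materially change predictability on their dataset.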
ARTICLE HISTORY: Received 16 April 2018; Accepted 30 August 2018
KEYWORDS: Deep Learning Technique (DLT); Medicare data; Machine Learning (ML); Linear Regression (LR); Confusion Matrix (CM)
Introduction
Methods involving Artificial Intelligence (AI) associated with Deep Learning Technique (DLT) and Machine Learning (ML) are slowly but surely being used in medical and health informatics. Traditionally, techniques such as Linear Regression (LR) (Nimon & Oeswald, 2013), Analysis of Variance (ANOVA) (Kim, 2014), and Multivariate Analysis of Variance (MANOVA) (Xu, 2014) (Malehi et al., 2015) have been used for predicting outcomes in healthcare. However, in recent years the methods of analysis applied are changing towards the aforementi ...
Factors Affecting the Adoption of Electronic Health Records by Nursepaperpublications3
Abstract: The Electronic Health Record has the potential to improve patient care by managing patients' medical and personal information efficiently and effectively. It is easier to maintain patient information electronically than in paper-based records. Many studies have been done in other countries on the effective use of Electronic Health Records, but only a small number of studies exist for the Indian situation. This study is a step in that direction. It was conducted to examine the use of electronic health records among nurses in private medium-sized hospitals in Tamil Nadu, India. The objective of the study is to explore the use of Electronic Health Records among nurses and the barriers to using them. The study also analyzes the factors affecting nurses' adoption of electronic health records. Only a third of the nurses (33%) use electronic health records. Lack of training is the major hindrance to the use of electronic health records among nurses.
Cosmetic shop management system project report.pdfKamal Acharya
Buying new cosmetic products is difficult. It can even be scary for those who have sensitive skin and are prone to skin trouble. The information needed to alleviate this problem is on the back of each product, but it's tough to interpret those ingredient lists unless you have a background in chemistry.
Instead of buying and hoping for the best, we can use data science to help us predict which products may be good fits for us. The system includes various functions to carry out the tasks mentioned above.
Data file handling has been used effectively in the program.
The automated cosmetic shop management system should handle the automation of the general workflow and administration processes of the shop. The main processes of the system focus on customer requests, where the system is able to search for the most appropriate products and deliver them to the customers. It should help the employees to quickly identify the list of cosmetic products that have reached the minimum quantity and also keep track of the expiry date for each cosmetic product. It should help the employees find the rack number in which a product is placed. It is also a faster and more efficient way of working.
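One simple way to realize the "predict which products may be good fits" idea above is ingredient-set similarity: recommend the catalog product whose ingredient list best overlaps a product the customer already likes. A minimal sketch with hypothetical product names and ingredient lists:

```python
def jaccard(a, b):
    """Similarity of two ingredient sets: |intersection| / |union|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def most_similar(liked, catalog):
    """Return the catalog product whose ingredient list best matches `liked`."""
    return max(catalog, key=lambda name: jaccard(liked, catalog[name]))

# Hypothetical products and ingredient lists.
catalog = {
    "cream_a": ["water", "glycerin", "shea butter"],
    "cream_b": ["water", "alcohol", "fragrance"],
}
liked = ["water", "glycerin", "aloe"]
print(most_similar(liked, catalog))   # → cream_a
```

A real system would also penalize ingredients the customer has reacted to, but the set-overlap idea is the core of this kind of content-based matching.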
Student information management system project report ii.pdfKamal Acharya
Our project deals with student management. It covers the various actions related to student details and makes adding, editing, and deleting student details easy. It also provides a less time-consuming process for viewing, adding, editing, and deleting students' marks.
Hybrid optimization of pumped hydro system and solar- Engr. Abdul-Azeez.pdffxintegritypublishin
Advancements in technology unveil a myriad of electrical and electronic breakthroughs geared towards efficiently harnessing limited resources to meet human energy demands. The optimization of hybrid solar PV panels and pumped hydro energy supply systems plays a pivotal role in utilizing natural resources effectively. This initiative not only benefits humanity but also fosters environmental sustainability. The study investigated the design optimization of these hybrid systems, focusing on understanding solar radiation patterns, identifying geographical influences on solar radiation, formulating a mathematical model for system optimization, and determining the optimal configuration of PV panels and pumped hydro storage. Through a comparative analysis approach and eight weeks of data collection, the study addressed key research questions related to solar radiation patterns and optimal system design. The findings highlighted regions with heightened solar radiation levels, showcasing substantial potential for power generation and emphasizing the system's efficiency. Optimizing system design significantly boosted power generation, promoted renewable energy utilization, and enhanced energy storage capacity. The study underscored the benefits of optimizing hybrid solar PV panels and pumped hydro energy supply systems for sustainable energy usage. Optimizing the design of solar PV panels and pumped hydro energy supply systems as examined across diverse climatic conditions in a developing country, not only enhances power generation but also improves the integration of renewable energy sources and boosts energy storage capacities, particularly beneficial for less economically prosperous regions. Additionally, the study provides valuable insights for advancing energy research in economically viable areas. 
Recommendations included conducting site-specific assessments, utilizing advanced modeling tools, implementing regular maintenance protocols, and enhancing communication among system components.
Saudi Arabia stands as a titan in the global energy landscape, renowned for its abundant oil and gas resources. It's the largest exporter of petroleum and holds some of the world's most significant reserves. Let's delve into the top 10 oil and gas projects shaping Saudi Arabia's energy future in 2024.
Immunizing Image Classifiers Against Localized Adversary Attacksgerogepatton
This paper addresses the vulnerability of deep learning models, particularly convolutional neural networks
(CNN)s, to adversarial attacks and presents a proactive training technique designed to counter them. We
introduce a novel volumization algorithm, which transforms 2D images into 3D volumetric representations.
When combined with 3D convolution and deep curriculum learning optimization (CLO), it significantly improves
the immunity of models against localized universal attacks by up to 40%. We evaluate our proposed approach
using contemporary CNN architectures and the modified Canadian Institute for Advanced Research (CIFAR-10
and CIFAR-100) and ImageNet Large Scale Visual Recognition Challenge (ILSVRC12) datasets, showcasing
accuracy improvements over previous techniques. The results indicate that the combination of the volumetric
input and curriculum learning holds significant promise for mitigating adversarial attacks without necessitating
adversary training.
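The abstract does not specify how its volumization algorithm builds the 3D representation; one plausible illustration of the general idea (lifting a 2D image into a volume of transformed copies, so a localized 2D patch no longer hits every slice the same way) is stacking cyclically row-shifted copies along a new depth axis. This is a hypothetical sketch, not the paper's algorithm:

```python
def volumize(image, depth):
    """Illustrative 2D -> 3D lift: stack `depth` cyclically row-shifted copies
    of the image along a new axis, giving a volume of shape (depth, H, W)."""
    h = len(image)
    return [[image[(r + d) % h] for r in range(h)] for d in range(depth)]

# 3x3 toy "image" of pixel intensities.
img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
vol = volumize(img, 2)
print(len(vol), len(vol[0]), len(vol[0][0]))   # → 2 3 3
```

A 3D convolution over such a volume then sees each spatial location in several shifted contexts, which is the kind of redundancy that can blunt a localized adversarial patch.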
Final project report on grocery store management system..pdfKamal Acharya
In today’s fast-changing business environment, it’s extremely important to be able to respond to client needs in the most effective and timely manner. Customers increasingly expect to find your business online and to have instant access to your products or services.
Online Grocery Store is an e-commerce website, which retails various grocery products. This project allows viewing various products available enables registered users to purchase desired products instantly using Paytm, UPI payment processor (Instant Pay) and also can place order by using Cash on Delivery (Pay Later) option. This project provides an easy access to Administrators and Managers to view orders placed using Pay Later and Instant Pay options.
In order to develop an e-commerce website, a number of technologies must be studied and understood. These include multi-tiered architecture, server- and client-side scripting techniques, implementation technologies, programming languages (such as PHP, HTML, CSS, JavaScript), and MySQL relational databases. The objective of this project is to develop a basic shopping cart website for consumers and to learn about the technologies used to develop such a website.
This document will discuss each of the underlying technologies used to create and implement an e-commerce website.