Abstract: Early detection of patients at elevated risk of developing diabetes mellitus is critical to the improved prevention and overall clinical management of these patients. In the existing system, the Apriori algorithm is used to find the itemsets for association rules, but it is inefficient at itemset discovery and uses only four association rules to assess the risk of diabetes mellitus, so it has low precision. In this paper we apply association rule mining to electronic medical records (EMR) to detect sets of risk factors and their corresponding subpopulations that indicate patients at especially high risk of developing diabetes. Because of the high dimensionality of EMRs, association rule mining produces a very large set of rules that must be summarized. We propose a system, as an extension, that incorporates the risk of diabetes into the search for a suitable summary; it uses ten association rules and a reorder algorithm for finding the itemsets and rules. To identify risk, we considered four association rule set summarization techniques and organized a comparative evaluation to provide guidance regarding their applicability, merits and demerits, and to suggest measures that reduce the risk of diabetes. Each of the four methods has its own strength, but the Bottom-Up Summarization (BUS) algorithm produced the most acceptable summary.
Abstract: Nowadays, detection of patients at elevated risk of diabetes mellitus is becoming critical to the improved prevention and overall health management of these patients. We aim to apply association rule mining to electronic medical records (EMR) to discover sets of risk factors and their corresponding subpopulations that represent patients at high risk of developing diabetes. Given the high dimensionality of EMRs, association rule mining generates a very large set of rules, which we need to summarize for easy clinical use. We reviewed four association rule set summarization techniques and conducted a comparative evaluation to provide guidance regarding their applicability, advantages and drawbacks. We proposed extensions to incorporate the risk of diabetes into the process of finding an optimal summary. We evaluated these modified techniques on a real-world borderline-diabetic patient cohort. We found that all four methods produced summaries that described subpopulations at high risk of diabetes, with every method having its clear strength. Among them, the extension to the Bottom-Up Summarization (BUS) algorithm produced the most suitable summary. The subpopulations identified by this summary covered most high-risk patients, had low overlap, and were at very high risk of diabetes.
Keywords: Agile model, Association rules, Association rule summarization, Data mining, Survival analysis, Fuzzy Clustering.
Title: Diabetes Mellitus Prediction System Using Data Mining
Author: Yamini Amrale, Arti Shedge, Sonal Singh, Anjum Shaikh
ISSN 2350-1022
International Journal of Recent Research in Mathematics Computer Science and Information Technology
Paper Publications
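The association rule mining described above can be sketched in a few lines. Below is a minimal, self-contained Apriori-style search for frequent itemsets and high-confidence rules; the EMR-like transactions and the support/confidence thresholds are hypothetical, only to illustrate the mechanics.

```python
from itertools import combinations

# Toy EMR-style transactions: hypothetical risk factors recorded per patient.
transactions = [
    {"obesity", "hypertension", "high_glucose"},
    {"obesity", "high_glucose"},
    {"hypertension", "high_glucose"},
    {"obesity", "hypertension", "high_glucose", "smoking"},
    {"smoking", "hypertension"},
]

def support(itemset):
    """Fraction of transactions that contain every item of the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def frequent_itemsets(min_support=0.4):
    """Level-wise Apriori search: grow itemsets only from frequent ones."""
    items = sorted({i for t in transactions for i in t})
    frequent = {}
    candidates = [frozenset([i]) for i in items]
    k = 1
    while candidates:
        kept = {c: support(c) for c in candidates if support(c) >= min_support}
        frequent.update(kept)
        k += 1
        # Join step: combine surviving itemsets into size-k candidates.
        candidates = {a | b for a in kept for b in kept if len(a | b) == k}
    return frequent

def rules(min_confidence=0.8):
    """Emit rules antecedent -> consequent whose confidence clears the bar."""
    out = []
    for itemset, sup in frequent_itemsets().items():
        if len(itemset) < 2:
            continue
        for r in range(1, len(itemset)):
            for antecedent in map(frozenset, combinations(itemset, r)):
                conf = sup / support(antecedent)
                if conf >= min_confidence:
                    out.append((set(antecedent), set(itemset - antecedent), conf))
    return out

for a, c, conf in rules():
    print(a, "->", c, round(conf, 2))
```

On a real EMR the rule set produced this way grows combinatorially, which is exactly why the summarization techniques compared in the paper are needed.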
Ascendable Clarification for Coronary Illness Prediction using Classification...ijtsrd
Coronary disease is predicted by classification techniques. The data mining tool WEKA has been used to implement a Naïve Bayes classifier. The proposed work is designed to enhance model performance: to improve classification accuracy, Naïve Bayes is combined with Bagging and Attribute Selection. Experimental results demonstrated a significant improvement over the existing Naïve Bayes classifier. This approach enhances classification accuracy and reduces computational time. D. Haripriya | Dr. M. Lovelin Ponn Felciah "Ascendable Clarification for Coronary Illness Prediction using Classification Mining and Feature Selection Performances" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3 | Issue-5, August 2019, URL: https://www.ijtsrd.com/papers/ijtsrd26690.pdf Paper URL: https://www.ijtsrd.com/computer-science/data-miining/26690/ascendable-clarification-for-coronary-illness-prediction-using-classification-mining-and-feature-selection-performances/d-haripriya
Allometry Scaling in Drug Development by Murugesh Kandasamy in Advancements in Bioequivalence & Bioavailability
Allometry is the study of body size and its consequences; the term means 'by a different measure', and in an allometric system the proportions change in a regular fashion [1]. Allometry, the oldest of these approaches and still widely applied in biology, is concerned with the relationship between the size and function of components of the body and the growth or size of the whole body [2]. Alternatively, it studies how a specific factor changes across species in correlation with differences in species size. Allometry centers on prediction by considering the physiological, anatomical and biochemical parallels among animals, which can be expressed in mathematical models. It is now an established fact that many physiological processes and organ sizes exhibit a power-law relationship with the body weight of the species. This relationship is the scientific basis of allometric scaling [3,4].
https://crimsonpublishers.com/abb/fulltext/ABB.000512.php
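The power-law relationship described above can be written as Y = a · W^b, where W is body weight. The short sketch below uses illustrative values for the coefficient a and exponent b (0.75 is a commonly cited metabolic-rate exponent, not a value from this article) to show the characteristic behavior of the model.

```python
# Allometric power law: Y = a * W**b, where W is body weight.
# The coefficient a and exponent b here are illustrative, not from the article.
def allometric(weight_kg, a=10.0, b=0.75):
    """Predict a physiological quantity from body weight via Y = a * W^b."""
    return a * weight_kg ** b

# Doubling body weight scales the quantity by 2**b, not by 2,
# which is the hallmark of allometric (non-isometric) scaling:
ratio = allometric(20.0) / allometric(10.0)
print(round(ratio, 3))  # 2**0.75 ≈ 1.682
```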
This paper helps predict diabetes by applying data mining techniques. The discovery of knowledge from clinical datasets is important for making effective medical decisions. The aim of data mining is to extract information from data stored in a dataset and produce a clear and understandable description of patterns. Diabetes is a chronic disease and a significant public health challenge worldwide. Using data mining techniques on HbA1c test data to help predict diabetes has gained significant popularity. In this paper, six classification models are used to classify diabetic and non-diabetic patients, both male and female. The dataset was collected from the Diagnostics and Research Laboratory of Liaquat University of Medical and Health Sciences, Jamshoro, which gathers data on patients with and without diabetes by taking a blood sample and performing an HbA1c test. We used the Weka tool for the diabetic/non-diabetic analysis. Out of six classification algorithms, four achieved one hundred percent accuracy on the train and test data.
KEY WORDS: Data mining, Diabetes, HbA1c, Classification models, Weka.
Statistical multivariate analysis to infer the presence breast cancerFahad B. Mostafa
The primary aim of this multivariate analysis is to demonstrate the statistical significance of several statistical techniques for analyzing multivariate data. We start with an exploratory study to develop and assess a prediction model that can potentially be used as a biomarker of breast cancer, based on anthropometric data and parameters gathered in the routine blood analysis of 116 women. To conduct this process, we plot the sample data and show the type of distribution it follows. A main aim of this research is to reduce dimensionality using eigendecomposition of the data matrix, for which we use PCA. Finally, we perform hypothesis tests for the normality assumption, tests of equal means and covariances, and simultaneous confidence intervals for our data sets. Moreover, to predict breast cancer we use a logistic regression model, and a confusion matrix to show how the model performs.
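The confusion matrix mentioned in the abstract reduces to four counts. The sketch below computes it in pure Python for a binary cancer/no-cancer classifier; the label vectors are hypothetical, purely to illustrate the computation.

```python
# Build a 2x2 confusion matrix for a binary classifier (1 = cancer present).
# The label vectors below are hypothetical examples, not study data.
def confusion_matrix(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # hits
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))  # correct rejections
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false alarms
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # misses
    return tp, tn, fp, fn

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
tp, tn, fp, fn = confusion_matrix(y_true, y_pred)
accuracy = (tp + tn) / len(y_true)
print(tp, tn, fp, fn, accuracy)  # 3 3 1 1 0.75
```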
Cost-effectiveness of electroconvulsive therapy compared to repetitive transc...Pydesalud
Poster on the cost-effectiveness of electroconvulsive therapy versus transcranial magnetic stimulation in treatment-resistant depression. It was presented by Laura Vallejo (SESCS technician) at the XXXIV Jornadas de Economía de la Salud, organized by the Asociación de Economía de la Salud (AES). Pamplona, 27-30 May 2014.
Cancer prognosis prediction using balanced stratified samplingijscai
High accuracy in cancer prediction is important to improve the quality of treatment and the survival rate of patients. As data volume increases rapidly in healthcare research, the analytical challenge grows with it. The use of an effective sampling technique in classification algorithms yields good prediction accuracy. The SEER public-use cancer database provides several prominent class labels for prognosis prediction. The main objective of this paper is to study the effect of sampling techniques on classifying the prognosis variable and to propose an ideal sampling method based on the outcome of the experiments. In the first phase of this work, traditional random sampling and stratified sampling were used. At the next level, balanced stratified sampling with variations according to the choice of prognosis class labels was tested. Much of the initial effort was spent on pre-processing the SEER data set. The classification model for the experiments was built using the breast cancer, respiratory cancer and mixed cancer data sets with three traditional classifiers, namely Decision Tree, Naïve Bayes and K-Nearest Neighbour. The three prognosis factors survival, stage and metastasis were used as class labels for experimental comparison. The results show a steady increase in the prediction accuracy of the balanced stratified model as the sample size increases, whereas the traditional approach fluctuates before reaching optimum results.
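The balanced stratified sampling the abstract evaluates amounts to drawing an equal number of records from each prognosis class. A minimal sketch, with hypothetical records and class labels:

```python
import random

# Hypothetical prognosis-labeled records, deliberately imbalanced (90/10).
records = [("r%d" % i, "survived") for i in range(90)] + \
          [("r%d" % i, "died") for i in range(90, 100)]

def balanced_stratified_sample(data, n_per_class, seed=0):
    """Draw the same number of records from every class label (the stratum)."""
    rng = random.Random(seed)
    by_class = {}
    for rec in data:
        by_class.setdefault(rec[1], []).append(rec)
    sample = []
    for label, group in sorted(by_class.items()):
        sample.extend(rng.sample(group, n_per_class))
    return sample

sample = balanced_stratified_sample(records, n_per_class=5)
print(len(sample))  # 10: five per class despite the 90/10 imbalance
```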
An Experimental Study of Diabetes Disease Prediction System Using Classificat...IOSRjournaljce
Data mining refers to the process of collecting, searching through, and analyzing a large amount of data in a database. Classification is one of the well-known data mining techniques; here it is used to analyze the performance of the Naive Bayes, Random Forest, and Naïve Bayes tree (NB-Tree) classifiers in terms of precision, recall, f-measure, and accuracy. These three algorithms are useful and efficient, and have been tested on a medical dataset for diabetes disease to solve a classification problem in data mining. In this paper, we compare the three algorithms, and the results indicate that Naive Bayes achieves a high accuracy rate along with the minimum error rate compared to the other algorithms.
The correlation between pretreatment serum lactate dehydrogenase (LDH) levels...chaichana14
Objective: This study aimed to examine the relationship between pretreatment serum LDH levels and factors in advanced solid tumor to find out information for clinical use.
Materials and Methods: This is a cross-sectional study. Data on pretreatment LDH levels in 35 patients with advanced solid tumors at the Cancer Clinic, Division of Medical Oncology, Department of Internal Medicine, Buddhasothorn Hospital, were collected, and each patient was followed up for 6 months.
Results: The results showed that pretreatment serum LDH levels did not correlate with factors including age, ECOG performance status, body mass index (BMI), tumor burden, site of metastasis, resection of the primary tumor, receipt of systemic treatment, and 6-month mortality. However, high LDH levels were correlated with liver metastasis and with being untreated by systemic treatment, with statistical significance (2-tailed, p = 0.001).
Conclusion: Pretreatment serum LDH levels were not found to correlate with the above-mentioned factors; nevertheless, a high pretreatment serum LDH level was found to correlate with liver metastasis and with being untreated by systemic treatment. The data still had limitations. However, this research can be extended in the future to find a marker that helps evaluate and follow up cancer patients.
Keywords: Lactate Dehydrogenase (LDH), Advanced Solid Tumor, Correlation
Performance Analysis of Data Mining Methods for Sexually Transmitted Disease ...IJECEIAES
According to health reports from Malang city, many people are exposed to sexually transmitted diseases, and most sufferers are not aware of the symptoms. As Malang is known as a city of education, its population increases every year, which raises the risk of spreading sexually transmitted disease viruses. Solving this problem is important so that sufferers can be treated earlier, reducing the burden of patient spending. In this research, the authors apply data mining methods to classify sexually transmitted diseases. The experimental results show that K-NN is the best method for this problem, with 90% accuracy.
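The K-NN classifier the study found best is simple enough to sketch directly: classify a point by majority vote among its k nearest training neighbors. The feature vectors below are hypothetical symptom scores, not the study's data.

```python
from collections import Counter
import math

# Toy training set: (feature vector, class label); values are hypothetical.
train = [((0.1, 0.2), "negative"), ((0.2, 0.1), "negative"),
         ((0.9, 0.8), "positive"), ((0.8, 0.9), "positive")]

def knn_predict(x, k=3):
    """Majority vote among the k nearest training points (Euclidean distance)."""
    nearest = sorted(train, key=lambda p: math.dist(x, p[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

print(knn_predict((0.85, 0.85)))  # positive
```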
Hybrid Genetic Algorithm for Optimization of Food Composition on Hypertensive...IJECEIAES
Healthy food with attention to salt level is one of the measures for the healthy living of hypertensive patients, and is important for reducing the probability of hypertension developing into a dangerous disease. In this study, a food composition is built with attention to nutrition amount, salt level, and minimum cost. The proposed method is a hybrid of a Genetic Algorithm (GA) and Variable Neighborhood Search (VNS). Three scenarios of hybrid GA-VNS were developed in this study. Although hybrid GA-VNS takes more time than pure GA or pure VNS, the proposed method gives better-quality solutions. VNS successfully helps GA avoid premature convergence and improves the solution. The shortcomings of GA in local exploitation and premature convergence are solved by VNS, whereas the shortcoming of VNS, its weaker capability in global exploration, is solved by using GA, which has the advantage in global exploration.
BLOOD TUMOR PREDICTION USING DATA MINING TECHNIQUEShiij
Healthcare systems generate huge amounts of data collected from medical tests. Data mining is the computing process of discovering patterns in large data sets such as medical examinations. Blood diseases are no exception; much test data can be collected from their patients. In this paper, we applied data mining techniques to discover the relations between blood test characteristics and blood tumors in order to predict the disease at an early stage, which can be used to enhance curability. We conducted experiments on our blood test dataset using three different data mining techniques: association rules, rule induction and deep learning. The goal of our experiments is to generate models that can distinguish patients with ordinary blood disease from patients who have a blood tumor. We evaluated our results using different metrics applied to real data collected from the Gaza European Hospital in Palestine. The final results showed that association rules could reveal the relationship between blood test characteristics and blood tumors. They also demonstrated that deep learning classifiers have the best ability to predict tumor types of blood diseases, with an accuracy of 79.45%. Rule induction additionally gave us an explanation of rules that describe both blood tumors and normal hematology.
Estimating the Survival Function of HIV AIDS Patients using Weibull Modelijtsrd
This work provides information on the survival times of a cohort of infected individuals. The mean survival time was obtained as 22.579 months from the resultant estimates of the shape parameter = 1.156 and scale parameter = 0.0256 from a Weibull simulation of n = 500. Confidence intervals were also obtained for the two parameters at α = 0.05, and it was found that the estimates are highly reliable. R. A. Adeleke | O. D. Ogunwale "Estimating the Survival Function of HIV/AIDS Patients using Weibull Model" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4 | Issue-4, June 2020, URL: https://www.ijtsrd.com/papers/ijtsrd30636.pdf Paper URL: https://www.ijtsrd.com/mathemetics/statistics/30636/estimating-the-survival-function-of-hivaids-patients-using-weibull-model/r-a-adeleke
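A Weibull survival function with the abstract's reported shape and scale values can be evaluated as below. Note the abstract does not state its parameterization; the rate form S(t) = exp(-(λt)^k) is assumed here, and under it the implied mean (Γ(1 + 1/k)/λ ≈ 37 months) differs from the reported 22.579 months, so the authors likely used a different parameterization.

```python
import math

# Weibull survival function in the rate parameterization, an assumption:
#   S(t) = exp(-(lam * t)**k)
# k and lam follow the abstract's reported shape and scale values.
k, lam = 1.156, 0.0256

def survival(t):
    """Probability of surviving beyond time t (months)."""
    return math.exp(-((lam * t) ** k))

def mean_survival():
    """E[T] = Gamma(1 + 1/k) / lam under this parameterization."""
    return math.gamma(1 + 1 / k) / lam

print(round(survival(12.0), 3), round(mean_survival(), 1))
```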
ABSTRACT
Objective: Stroke is one of the leading causes of death and disability worldwide. Cost-effectiveness analysis helps identify neglected opportunities by highlighting interventions that are relatively inexpensive, yet have the potential to reduce the disease burden substantially. In India, there are wide social and economic disparities. The socioeconomic environment influences the occupation, lifestyle, and nutrition of social classes, which in turn influence the prevalence and profile of stroke. Reducing delays in access to hospital and improving the provision of affordable treatments can reduce morbidity and mortality in patients with stroke in India. This study is designed to measure and compare the costs (resources consumed) and consequences (clinical, economic, and humanistic) of pharmaceutical products and services and their impact on individuals, healthcare systems and society.
Methods: The purpose of this study is to conduct a cost-effectiveness analysis for the treatment of stroke in Guntur City hospitals. The patients were treated with either aspirin or clopidogrel. The health outcomes were measured using the Modified Rankin Scale, a prominent risk assessment scale for stroke. The pharmacoeconomic data were computed from the patient data collection forms.
Result: The incremental cost-effectiveness ratio of aspirin versus clopidogrel was calculated to be Rs. 8046.2/year.
Conclusion: The study concludes that aspirin has the greater socioeconomic impact compared to clopidogrel; the earlier therapy supported discharge and home-based rehabilitation along with reduced hospital stay, and is hence preferable.
Keywords: Stroke, Pharmacoeconomics, Cost-effectiveness analysis, Aspirin, Clopidogrel, Incremental cost-effectiveness ratio.
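The incremental cost-effectiveness ratio reported above is a simple quotient: the difference in cost divided by the difference in health effect between two therapies. A sketch with placeholder numbers (not the study's actual data):

```python
# Incremental cost-effectiveness ratio: ICER = (ΔCost) / (ΔEffect).
# The inputs below are illustrative placeholders, not figures from the study.
def icer(cost_new, cost_old, effect_new, effect_old):
    """Extra cost paid per extra unit of health effect gained."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# e.g. a hypothetical comparison in rupees per QALY gained:
print(icer(cost_new=12000, cost_old=4000, effect_new=1.5, effect_old=0.5))
```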
Large amounts of heterogeneous medical data have become available in various healthcare organizations (payers, providers, pharmaceuticals). Those data could be an enabling resource for deriving insights for improving care delivery and reducing waste. The enormity and complexity of these datasets present great challenges in analyses and subsequent applications to a practical clinical environment. More details are available here http://dmkd.cs.wayne.edu/TUTORIAL/Healthcare/
Machine learning and operations research to find diabetics at risk for readmission.
A team of researchers was able to apply machine learning to reduce readmissions for diabetics; see "Identifying diabetic patients with high risk of readmission" (Bhuvan, Kumar, Zafar, and Kishore, 2016).
Supervised Feature Selection for Diagnosis of Coronary Artery Disease Based o...cscpconf
Feature Selection (FS) has become the focus of much research on decision support systems areas for which datasets with tremendous number of variables are analyzed. In this paper we
present a new method for the diagnosis of Coronary Artery Diseases (CAD) founded on Genetic Algorithm (GA) wrapped Bayes Naïve (BN) based FS. Basically, CAD dataset contains two classes defined with 13 features. In GA–BN algorithm, GA
generates in each iteration a subset of attributes that will be evaluated using the BN in the second step of the selection procedure. The final set of attribute contains the most relevant feature model that increases the accuracy. The algorithm in this case produces 85.50% classification accuracy in the diagnosis of CAD. Thus, the asset of the Algorithm is then compared with the use of Support Vector Machine (SVM), Multi-Layer erceptron (MLP) and C4.5 decision tree Algorithm. The result of classification accuracy for those algorithms are respectively 83.5%, 83.16% and 80.85%. Consequently, the GA wrapped BN Algorithm is correspondingly compared with other FS algorithms. The Obtained results have shown very promising outcomes for the diagnosis of CAD.
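A GA wrapper of this kind evolves bitmasks over the 13 features, scoring each mask with the wrapped classifier. The sketch below keeps the GA loop faithful (truncation selection, one-point crossover, bit-flip mutation) but replaces the Naïve Bayes evaluation with a hypothetical stand-in fitness that rewards a fixed "relevant" feature set, since the dataset and classifier are not available here.

```python
import random

rng = random.Random(42)
N_FEATURES = 13

# Stand-in for the wrapper's Naive Bayes accuracy: reward selecting the
# (hypothetical) relevant features, penalize selecting irrelevant ones.
RELEVANT = {0, 2, 3, 7, 11}

def fitness(mask):
    chosen = {i for i, bit in enumerate(mask) if bit}
    return len(chosen & RELEVANT) - 0.5 * len(chosen - RELEVANT)

def evolve(pop_size=30, generations=40, p_mut=0.05):
    pop = [[rng.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, N_FEATURES)  # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (rng.random() < p_mut) for bit in child]  # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print({i for i, bit in enumerate(best) if bit})
```

In the real method, `fitness` would train and score the Naïve Bayes classifier on the attribute subset encoded by the mask.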
Women who test positive for one of the two breast cancer susceptibility genes, BRCA1 and BRCA2, increase their risk by 45-55 percent. Currently, there are no specific physical activity recommendations for these women. However, research supports the positive effect of exercise on reducing breast cancer risk by reducing BMI, adipose tissue, and damage caused by lipid peroxidation.
Statistical multivariate analysis to infer the presence breast cancerFahad B. Mostafa
The primary aim of this multivariate analysis is to show statistical significance of many statistical technique to analysis multivariate data. To do this we start with exploratory study to develop and assess a prediction model which can potentially be used as a biomarker of breast cancer, based on anthropometric data and parameters which can be gathered in routine blood analysis of 116 women. To conduct this process, we will plot the sample data and show the type of distribution it follows. Main aim of this research is to reduce dimensionality using eigen decomposition of data matrix. To perform it we use the most useful PCA method. Finally, we want to find some hypothesis tests for finding the normality assumption, equal mean and covariance test, as well as simultaneous confidence interval for our data sets. Moreover, to predict breast cancer we used logistic regression model as well as confusion matrix to show how confuse our model.
Cost-effectiveness of electroconvulsive therapy compared to repetitive transc...Pydesalud
Póster sobre el coste-efectividad de la terapia electroconvulsiva frente a la estimulación magnética transcraneal en depresión resistente. Fue presentado por Laura Vallejo (técnica del SESCS) en la XXXIV edición de las Jornadas de Economía de la Salud organizadas por la Asociación de Economía de la Salud (AES). Pamplona, 27-30 mayo de 2014.
Cancer prognosis prediction using balanced stratified samplingijscai
High accuracy in cancer prediction is important to improve the quality of the treatment and to improve the
rate of survivability of patients. As the data volume is increasing rapidly in the healthcare research, the
analytical challenge exists in double. The use of effective sampling technique in classification algorithms
always yields good prediction accuracy. The SEER public use cancer database provides various prominent
class labels for prognosis prediction. The main objective of this paper is to find the effect of sampling
techniques in classifying the prognosis variable and propose an ideal sampling method based on the
outcome of the experimentation. In the first phase of this work the traditional random sampling and
stratified sampling techniques have been used. At the next level the balanced stratified sampling with
variations as per the choice of the prognosis class labels have been tested. Much of the initial time has been
focused on performing the pre-processing of the SEER data set. The classification model for
experimentation has been built using the breast cancer, respiratory cancer and mixed cancer data sets with
three traditional classifiers namely Decision Tree, Naïve Bayes and K-Nearest Neighbour. The three
prognosis factors survival, stage and metastasis have been used as class labels for experimental
comparisons. The results shows a steady increase in the prediction accuracy of balanced stratified model
as the sample size increases, but the traditional approach fluctuates before the optimum results.
An Experimental Study of Diabetes Disease Prediction System Using Classificat...IOSRjournaljce
Data mining means to the process of collecting, searching through, and analyzing a large amount of data in a database. Classification in one of the well-known data mining techniques for analyzing the performance of Naive Bayes, Random Forest, and Naïve Bayes tree (NB-Tree) classifier during the classification to improve precision, recall, f-measure, and accuracy. These three algorithms, of Naive Bayes, Random Forest, and NB-Tree are useful and efficient, has been tested in the medical dataset for diabetes disease and solving classification problem in data mining. In this paper, we compare the three different algorithms, and results indicate the Naive Bayes algorithms are able to achieve high accuracy rate along with minimum error rate when compared to other algorithms.
The correlation between pretreatment serum lactate dehydrogenase (LDH) levels...chaichana14
Objective: This study aimed to examine the relationship between pretreatment serum LDH levels and factors in advanced solid tumor to find out information for clinical use.
Materials and Methods: This is a cross-sectional study. Data of pretreatment LDH levels in 35 patients with advanced solid tumor at Cancer Clinic, Division of Medical oncology , Department of Internal Medicine, Buddhasothorn Hospital, were collected. And each patient was followed up for 6 months.
Results: The results showed that the pretreatment serum LDH levels did not correlate with factors including age, ECOG performance status, body mass index (BMI), tumor burden, site of metastasis, resection of the primary tumor, received systemic treatment, and 6-month mortality. However, High LDH levels were correlated with liver metastasis and being untreated by systemic treatment with statistical significance.(2-tailed significance, p = 0.001)
Conclusion: Pretreatment serum LDH levels were not found to correlate with the above mentioned factors; nevertheless, High Pretreatment serum LDH level was found to correlate with liver metastasis and correlate with and being untreated by systemic treatment. Data yet had limitations. However, the benefits of this research can be further studied in the future to find a marker that can help to evaluate and follow-up cancer patients.
Keywords: Lactate Dehydrogenase(LDH), Advanced Solid Tumor, Correlation
Performance Analysis of Data Mining Methods for Sexually Transmitted Disease ...IJECEIAES
According to health reports of Malang city, many people are exposed to sexually transmitted diseases and most sufferers are not aware of the symptoms. Because Malang is known as a city of education, its population increases every year, which raises the risk of spreading sexually transmitted disease viruses. Solving this problem is important for treating sufferers earlier and reducing the burden of patient spending. In this research, the authors apply data mining methods to classify sexually transmitted diseases. The experimental results show that K-NN is the best method for this problem, with 90% accuracy.
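K-NN itself is simple enough to sketch from scratch: classify a query by majority vote among its k nearest training points. The screening features and labels below are invented for illustration and are not the Malang data:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k nearest training points.
    `train` is a list of (feature_vector, label) pairs."""
    nearest = sorted(train, key=lambda pair: math.dist(pair[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical screening records: (symptom score, lab value) -> diagnosis.
train = [((1.0, 1.0), "negative"), ((1.2, 0.8), "negative"),
         ((0.9, 1.1), "negative"), ((4.0, 4.2), "positive"),
         ((4.5, 3.8), "positive"), ((3.9, 4.0), "positive")]
print(knn_predict(train, (4.1, 4.1)))  # prints "positive"
```

In practice features should be scaled to a common range first, since Euclidean distance is dominated by large-valued features.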
Hybrid Genetic Algorithm for Optimization of Food Composition on Hypertensive...IJECEIAES
Healthy food with attention to salt content is one of the efforts toward healthy living for hypertensive patients, and it is important for reducing the probability that hypertension develops into a dangerous disease. In this study, a food composition is built with attention to nutrition amount, salt content, and minimum cost. The proposed method is a hybrid of a Genetic Algorithm (GA) and Variable Neighborhood Search (VNS); three scenarios of the hybrid GA-VNS were developed. Although the hybrid GA-VNS takes more time than pure GA or pure VNS, it gives better-quality solutions. VNS successfully helps GA avoid premature convergence and reach better solutions: the shortcomings of GA in local exploitation and premature convergence are addressed by VNS, whereas the weaker global exploration of VNS is compensated by GA's strength in global exploration.
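The VNS half of the hybrid can be sketched on a toy menu-planning problem: pick integer serving counts that minimise cost under nutrition and salt constraints, handled here via penalties. The food table, penalty weights, and neighbourhood moves below are assumptions for illustration, not the paper's actual formulation:

```python
import random

# Toy food table: (cost, nutrition units, salt units) per serving; illustrative only.
FOODS = [(2.0, 3.0, 1.0), (1.5, 2.0, 2.0), (3.0, 5.0, 0.5)]
NUTRITION_MIN, SALT_MAX = 20.0, 8.0

def objective(x):
    cost = sum(n * f[0] for n, f in zip(x, FOODS))
    nutrition = sum(n * f[1] for n, f in zip(x, FOODS))
    salt = sum(n * f[2] for n, f in zip(x, FOODS))
    # Penalise constraint violations so infeasible menus score badly.
    return cost + 10 * max(0, NUTRITION_MIN - nutrition) + 10 * max(0, salt - SALT_MAX)

def shake(x, k):
    """Perturb k randomly chosen serving counts by +/-1 (neighbourhood of size k)."""
    x = list(x)
    for _ in range(k):
        i = random.randrange(len(x))
        x[i] = max(0, x[i] + random.choice((-1, 1)))
    return x

def local_search(x):
    """Greedy first-improvement over single +/-1 serving moves."""
    improved = True
    while improved:
        improved = False
        for i in range(len(x)):
            for d in (-1, 1):
                y = list(x)
                y[i] = max(0, y[i] + d)
                if objective(y) < objective(x):
                    x, improved = y, True
    return x

def vns(max_iter=200, k_max=3):
    random.seed(0)
    best = local_search([0] * len(FOODS))
    for _ in range(max_iter):
        k = 1
        while k <= k_max:
            cand = local_search(shake(best, k))
            if objective(cand) < objective(best):
                best, k = cand, 1   # move and restart neighbourhoods
            else:
                k += 1              # widen the neighbourhood
    return best, objective(best)

best_menu, best_cost = vns()
print(best_menu, round(best_cost, 2))
```

In the hybrid described by the abstract, a loop like `local_search`/`shake` would refine GA offspring each generation instead of running stand-alone.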
BLOOD TUMOR PREDICTION USING DATA MINING TECHNIQUEShiij
Healthcare systems generate huge amounts of data collected from medical tests. Data mining is the computing
process of discovering patterns in large data sets such as medical examinations. Blood diseases are no
exception; much test data can be collected from patients. In this paper, we applied data
mining techniques to discover the relations between blood test characteristics and blood tumors in order to
predict the disease at an early stage, which can be used to enhance the chance of a cure. We conducted
experiments on our blood test dataset using three different data mining techniques: association
rules, rule induction, and deep learning. The goal of our experiments is to generate models that can
distinguish patients with ordinary blood diseases from patients who have a blood tumor. We evaluated our
results using different metrics applied to real data collected from the Gaza European Hospital in Palestine.
The final results showed that association rules can reveal the relationship between blood test
characteristics and blood tumors. They also demonstrated that deep learning classifiers have the best ability to
predict tumor types of blood diseases, with an accuracy of 79.45%. In addition, rule induction gave an
explanation of rules describing both blood tumors and normal hematology.
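The association-rule step can be sketched with a brute-force miner: count itemset supports, then keep rules whose support and confidence clear the thresholds. The coded blood-test findings and thresholds below are hypothetical:

```python
from itertools import combinations

def mine_rules(transactions, min_support=0.4, min_confidence=0.7):
    """Return rules (antecedent, consequent, support, confidence) whose support
    and confidence clear the thresholds. Brute force; fine for small data."""
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})
    support = {}
    for size in range(1, len(items) + 1):
        for combo in combinations(items, size):
            count = sum(1 for t in transactions if set(combo) <= t)
            if count / n >= min_support:
                support[frozenset(combo)] = count / n
    rules = []
    for itemset, sup in support.items():
        if len(itemset) < 2:
            continue
        for r in range(1, len(itemset)):
            for antecedent in combinations(sorted(itemset), r):
                a = frozenset(antecedent)
                if a in support and sup / support[a] >= min_confidence:
                    rules.append((set(a), set(itemset - a), sup, sup / support[a]))
    return rules

# Hypothetical blood-test records coded as categorical findings.
records = [{"high_wbc", "low_hgb", "tumor"},
           {"high_wbc", "low_hgb", "tumor"},
           {"high_wbc", "normal"},
           {"low_hgb", "tumor"},
           {"high_wbc", "low_hgb", "tumor"}]
for ante, cons, sup, conf in mine_rules(records):
    print(ante, "->", cons, f"support={sup:.2f} confidence={conf:.2f}")
```

Real miners such as Apriori or FP-Growth prune the itemset lattice instead of enumerating it, but produce the same rules.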
Estimating the Survival Function of HIV AIDS Patients using Weibull Modelijtsrd
This work provides information on the survival times of a cohort of infected individuals. The mean survival time was obtained as 22.579 months from the resultant estimates of the shape parameter k = 1.156 and scale parameter λ = 0.0256 from a Weibull simulation of n = 500. Confidence intervals were also obtained for the two parameters at α = 0.05, and it was found that the estimates are highly reliable. R. A. Adeleke | O. D. Ogunwale, "Estimating the Survival Function of HIV/AIDS Patients using Weibull Model", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4, Issue-4, June 2020. URL: https://www.ijtsrd.com/papers/ijtsrd30636.pdf Paper URL: https://www.ijtsrd.com/mathemetics/statistics/30636/estimating-the-survival-function-of-hivaids-patients-using-weibull-model/r-a-adeleke
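The reported mean is consistent with the Weibull density f(t) = kλt^(k−1)e^(−λt^k), whose mean is λ^(−1/k)·Γ(1 + 1/k). This identification of which parameter is the shape and which the rate is an inference from the numbers, since the symbols were lost from the abstract:

```python
import math

def weibull_mean(shape, rate):
    """Mean of a Weibull distribution parameterised as
    f(t) = shape * rate * t**(shape - 1) * exp(-rate * t**shape),
    i.e. E[T] = rate**(-1/shape) * Gamma(1 + 1/shape)."""
    return rate ** (-1.0 / shape) * math.gamma(1.0 + 1.0 / shape)

# Close to the reported 22.579 months (small differences are rounding).
print(weibull_mean(1.156, 0.0256))
```

With the alternative "scale" parameterisation f(t) = (k/λ)(t/λ)^(k−1)e^(−(t/λ)^k), the mean would instead be λ·Γ(1 + 1/k), which does not reproduce the reported figure, supporting the rate reading above.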
ABSTRACT
Objective: Stroke is one of the leading causes of death and disabilities worldwide. Cost-effectiveness analysis helps identify neglected opportunities
by highlighting interventions that are relatively inexpensive, yet have the potential to reduce the disease burden substantially. In India, there are
wide social and economic disparities. Socioeconomic environment influences occupation, lifestyle, and nutrition of social classes which in turn would
influence the prevalence and profile of stroke. Reducing delays in access to hospital care and improving the provision of affordable treatments can
reduce morbidity and mortality in patients with stroke in India. This study is designed to measure and compare the costs (resources consumed) and
consequences (clinical, economic, and humanistic) of pharmaceutical products and services and their impact on individuals, healthcare systems and
society.
Methods: The purpose of this study is to analyze and conduct a cost-effectiveness analysis for the treatment of stroke in Guntur City Hospitals.
The patients were treated either with aspirin or clopidogrel. The health outcomes were measured using the Modified Rankin Scale, a prominent risk
assessment scale for stroke. The pharmacoeconomic data were computed from the patient data collection forms.
Result: The incremental cost-effectiveness ratio of aspirin versus clopidogrel was calculated to be Rs. 8046.2/year.
Conclusion: The study concludes that aspirin has a greater socioeconomic impact when compared to clopidogrel; the former
therapy supported early discharge and home-based rehabilitation along with reduced hospital stay, and is hence preferable.
Keywords: Stroke, Pharmacoeconomics, Cost-effectiveness analysis, Aspirin, Clopidogrel, Incremental cost-effectiveness ratio.
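The incremental cost-effectiveness ratio is simply the cost difference divided by the effect difference between the two treatments. The figures below are invented for illustration and are not the study's inputs:

```python
def icer(cost_a, effect_a, cost_b, effect_b):
    """Incremental cost-effectiveness ratio of treatment A over comparator B:
    extra cost incurred per extra unit of health effect gained."""
    return (cost_a - cost_b) / (effect_a - effect_b)

# Hypothetical yearly costs (Rs.) and effects (adverse events avoided).
print(icer(cost_a=30000, effect_a=5, cost_b=14000, effect_b=3))  # 8000.0
```

A treatment is then judged cost-effective when this ratio falls below the payer's willingness-to-pay threshold per effect unit.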
Large amounts of heterogeneous medical data have become available in various healthcare organizations (payers, providers, pharmaceuticals). Those data could be an enabling resource for deriving insights for improving care delivery and reducing waste. The enormity and complexity of these datasets present great challenges in analyses and subsequent applications to a practical clinical environment. More details are available here http://dmkd.cs.wayne.edu/TUTORIAL/Healthcare/
Machine learning and operations research to find diabetics at risk for readmission.
A team of researchers was able to apply machine learning to reduce readmissions for diabetics; see "Identifying diabetic patients with high risk of readmission" (Bhuvan, Kumar, Zafar, and Kishore, 2016).
Supervised Feature Selection for Diagnosis of Coronary Artery Disease Based o...cscpconf
Feature Selection (FS) has become the focus of much research on decision support systems areas for which datasets with tremendous number of variables are analyzed. In this paper we
present a new method for the diagnosis of Coronary Artery Diseases (CAD) founded on Genetic Algorithm (GA) wrapped Bayes Naïve (BN) based FS. Basically, CAD dataset contains two classes defined with 13 features. In GA–BN algorithm, GA
generates in each iteration a subset of attributes that will be evaluated using the BN in the second step of the selection procedure. The final set of attribute contains the most relevant feature model that increases the accuracy. The algorithm in this case produces 85.50% classification accuracy in the diagnosis of CAD. Thus, the asset of the Algorithm is then compared with the use of Support Vector Machine (SVM), Multi-Layer erceptron (MLP) and C4.5 decision tree Algorithm. The result of classification accuracy for those algorithms are respectively 83.5%, 83.16% and 80.85%. Consequently, the GA wrapped BN Algorithm is correspondingly compared with other FS algorithms. The Obtained results have shown very promising outcomes for the diagnosis of CAD.
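The wrapper loop can be sketched as a small GA over bit-masks of the 13 features. Note that in the paper the fitness is the Naive Bayes cross-validated accuracy on the CAD data; here it is replaced by a stand-in score over an invented set of "informative" features, so the sketch stays self-contained:

```python
import random

N_FEATURES = 13
INFORMATIVE = {0, 2, 3, 7, 11}   # hypothetical ground truth for the toy fitness

def fitness(mask):
    """Stand-in for Naive Bayes accuracy: reward informative features,
    penalise subset size (a parsimony term)."""
    hits = sum(1 for i, bit in enumerate(mask) if bit and i in INFORMATIVE)
    return hits - 0.1 * sum(mask)

def crossover(a, b):
    cut = random.randrange(1, N_FEATURES)
    return a[:cut] + b[cut:]

def mutate(mask, rate=0.05):
    return [bit ^ (random.random() < rate) for bit in mask]

def ga_select(pop_size=30, generations=60):
    random.seed(1)
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # truncation selection
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    best = max(pop, key=fitness)
    return {i for i, bit in enumerate(best) if bit}

print(ga_select())
```

Swapping `fitness` for a function that trains and cross-validates a Naive Bayes classifier on the masked columns recovers the wrapper scheme the abstract describes.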
Women who test positive for one of the two breast cancer susceptibility genes, BRCA1 and BRCA2, increase their risk by 45-55 percent. Currently, there are no specific physical activity recommendations for these women. However, research supports the positive effect of exercise on reducing breast cancer risk by reducing BMI, adipose tissue, and damage caused by lipid peroxidation.
Machine learning approach for predicting heart and diabetes diseases using da...IAESIJAI
Environmental changes and food habits affect people's health with numerous diseases in today's life. Machine learning is a technique that plays a vital role in predicting diseases from collected data. The health sector has plenty of electronic medical data, which helps this technique diagnose various diseases quickly and accurately. Accuracy in medical data analysis has improved as data continues to grow in the medical field, whereas doctors may have a hard time predicting outcomes from symptoms accurately. This proposed work utilized Kaggle data to predict and diagnose heart disease and diabetes, which are among the foremost causes of death. The dataset contains target features for the diagnosis of heart disease; the target variable for diabetes is derived by comparing the patient's blood sugar to normal levels. Blood pressure, body mass index (BMI), and other factors are used to diagnose these diseases and disorders. This work employs the filter method and principal component analysis for feature selection and extraction. The main aim of this work is to implement three ensemble techniques (Adaptive Boosting, Extreme Gradient Boosting, and Gradient Boosting), with emphasis placed on the accuracy of the results.
DIAGNOSIS OF OBESITY LEVEL BASED ON BAGGING ENSEMBLE CLASSIFIER AND FEATURE S...ijaia
In the current era, the amount of data generated from various device sources and business transactions is
rising exponentially, and the current machine learning techniques are not feasible for handling the massive
volume of data. Two commonly adopted schemes exist to solve such issues: scaling up the data mining
algorithms, or data reduction. Scaling the data mining algorithms is not the best way, but data reduction
is feasible. There are two approaches to reducing a dataset: selecting an optimal subset of features from the
initial dataset, or eliminating those features that contribute less information. Overweight and obesity are increasing
worldwide, and forecasting future overweight or obesity could help intervention. Our primary objective is
to find the optimal subset of features to diagnose obesity. This article proposes adapting a bagging
algorithm based on filter-based feature selection to improve the prediction accuracy of obesity with a
minimal number of feature subsets. We utilized several machine learning algorithms for classifying the
obesity classes and several filter feature selection methods to maximize the classifier accuracy. Based on
the results of experiments, Pairwise Consistency and Pairwise Correlation techniques are shown to be
promising tools for feature selection in respect of the quality of obtained feature subset and computation
efficiency. Analyzing the results obtained from the original and modified datasets has improved the
classification accuracy and established a relationship between obesity/overweight and common risk factors
such as weight, age, and physical activity patterns.
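A filter method of the kind used here ranks features by a statistic computed independently of any classifier and keeps the top-scoring ones. The sketch below uses Pearson correlation with the class label; the records are invented for illustration:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def filter_select(rows, labels, k=2):
    """Rank features by |correlation with the class label| and keep the top k."""
    n_features = len(rows[0])
    scores = [abs(pearson([r[j] for r in rows], labels)) for j in range(n_features)]
    return sorted(range(n_features), key=lambda j: scores[j], reverse=True)[:k]

# Hypothetical records: [weight_kg, age_years, shoe_size]; label 1 = obese.
rows = [[95, 45, 42], [60, 30, 43], [88, 50, 41], [58, 28, 44],
        [102, 60, 40], [55, 25, 39]]
labels = [1, 0, 1, 0, 1, 0]
print(filter_select(rows, labels, k=2))  # [0, 1]: weight and age, not shoe size
```

Because the score ignores the downstream classifier, filter selection is cheap and can feed any bagging ensemble afterwards, which is what makes it attractive at scale.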
Metabolic associated fatty liver disease and continuous time Markov chains iman773407
This is a dataset of patients suffering from non-alcoholic fatty liver disease. These are artificial data to illustrate the depiction of the longitudinal study and the statistical analysis of the results.
PREDICTION OF DIABETES MELLITUS USING MACHINE LEARNING TECHNIQUESIAEME Publication
Diabetes mellitus is a common disease caused by a set of metabolic ailments in which blood sugar
levels remain very high over a drawn-out period. It affects diverse organs of the human body and can
therefore harm a large number of the body's systems, in particular the blood vessels and nerves. Early
and accurate prediction of this disease can save human lives. To achieve this goal, this research work
explores numerous factors associated with the disease using machine learning techniques.
Machine learning methods provide effective means to extract knowledge by building
predictive models from diagnostic medical datasets collected from diabetic patients,
and mining knowledge from such data can be valuable in predicting diabetic
patients. In this research, six popular machine learning techniques, namely
Random Forest (RF), Logistic Regression (LR), Naive Bayes (NB), C4.5 Decision
Tree (DT), K-Nearest Neighbor (KNN), and Support Vector Machine (SVM), are
compared in order to identify the outstanding technique for forecasting diabetes
mellitus. Our results show that Support Vector Machine (SVM) achieved
higher accuracy compared to the other machine learning techniques.
Diabetes Prediction by Supervised and Unsupervised Approaches with Feature Se...IJARIIT
Two approaches to building models for predicting the onset of type 1 diabetes mellitus in juvenile subjects were examined. A set of tests performed immediately before diagnosis was used to build classifiers to predict whether the subject would be diagnosed with juvenile diabetes. A modified training set consisting of differences between test results taken at different times was also used to build such classifiers. Supervised approaches (decision trees) were compared with unsupervised approaches for both types of classifiers. In this study, the system recommends the test most likely to confirm a diagnosis based on the pre-test probability computed from the patient's information, including symptoms and the results of previous tests. If the patient's post-test disease probability is higher than the treatment threshold, a diagnostic decision is made, and vice versa; otherwise, the patient needs more tests to help make a decision, and the system recommends the next optimal test and repeats the same process. This thesis determines which approach is better on the diabetes dataset in the Weka framework, and also uses feature selection techniques to reduce the features and the complexity of the process.
A Heart Disease Prediction Model using Logistic Regressionijtsrd
The early prognosis of cardiovascular diseases can aid decisions on lifestyle changes in high-risk patients and in turn reduce their complications. Research has attempted to pinpoint the most influential factors of heart disease as well as accurately predict the overall risk using homogeneous data mining techniques. Recent research has delved into amalgamating these techniques using approaches such as hybrid data mining algorithms. This paper proposes a rule-based model that compares the accuracies of applying rules to the individual results of logistic regression on the Cleveland Heart Disease Database in order to present an accurate model for predicting heart disease. K. Sandhya Rani | M. Sai Manoj | G. Suguna Mani, "A Heart Disease Prediction Model using Logistic Regression", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-2, Issue-3, April 2018. URL: http://www.ijtsrd.com/papers/ijtsrd11401.pdf http://www.ijtsrd.com/computer-science/data-miining/11401/a-heart-disease-prediction-model-using-logistic-regression/k-sandhya-rani
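Logistic regression can be fitted with plain stochastic gradient descent on the log loss. The risk-factor rows below are invented and scaled to [0, 1]; they are not the Cleveland data:

```python
import math

def train_logistic(rows, labels, lr=0.1, epochs=2000):
    """Fit weights w and bias b by stochastic gradient descent on the log loss."""
    w = [0.0] * len(rows[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid
            err = p - y                      # gradient of the log loss w.r.t. z
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else 0

# Hypothetical scaled risk factors: [cholesterol, resting_bp]; 1 = heart disease.
rows = [[0.9, 0.8], [0.8, 0.9], [0.85, 0.7], [0.2, 0.3], [0.1, 0.2], [0.3, 0.1]]
labels = [1, 1, 1, 0, 0, 0]
w, b = train_logistic(rows, labels)
print([predict(w, b, x) for x in rows])  # separable toy data -> perfect fit
```

The rule-based layer the paper describes would then be applied on top of the fitted probabilities rather than the hard 0/1 predictions.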
K-Nearest Neighbours based diagnosis of hyperglycemiaijtsrd
AI, or artificial intelligence, is the simulation of human intelligence processes by machines, especially computer systems. These processes include learning (the acquisition of information and rules for using the information), reasoning (using the rules to reach approximate or definite conclusions), and self-correction. As a result, artificial intelligence is gaining importance in science and engineering fields. The use of artificial intelligence in medical diagnosis is also becoming increasingly common and has been used widely in the diagnosis of cancers, tumors, hepatitis, lung diseases, and more. The main aim of this paper is to build an artificial intelligence system that, after analysis of certain parameters, can predict whether a person is diabetic or not. Diabetes is the name used to describe a metabolic condition of having higher than normal blood sugar levels. Diabetes is becoming increasingly common throughout the world due to increased obesity, which can lead to metabolic syndrome or pre-diabetes and in turn to higher incidences of type 2 diabetes. The authors identified 10 parameters that play an important role in diabetes and prepared a rich database of training data, which serves as the backbone of the prediction algorithm. With this training data, the authors developed a system that uses an artificial neural network algorithm to serve the purpose. Such networks are capable of predicting new observations (on specific variables) from previous observations (on the same or other variables) after executing a process of so-called learning from existing training data (Haykin 1998). The results indicate that the performance of the KNN method, when compared with the medical diagnosis system, was found to be 91%. This system can be used to assist medical programs, especially in geographically remote areas where expert human diagnosis is not possible, with the advantages of minimal expense and faster results.
Abid Sarwar"K-Nearest Neighbours based diagnosis of hyperglycemia" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-2 | Issue-1 , December 2017, URL: http://www.ijtsrd.com/papers/ijtsrd7046.pdf http://www.ijtsrd.com/computer-science/artificial-intelligence/7046/k-nearest-neighbours-based-diagnosis-of-hyperglycemia/abid-sarwar
HEART DISEASE PREDICTION USING MACHINE LEARNING AND DEEP LEARNINGIJDKP
Heart disease is the most common disease currently reported in the United States among both genders, and
according to official statistics about fifty percent of the American population suffers from some form of
cardiovascular disease. This paper performs chi-square tests and linear regression analysis to predict
heart disease based on symptoms like chest pain and dizziness. It will help healthcare sectors
provide better assistance to patients suffering from heart disease by predicting it at an early stage of the
disease. A chi-square test is conducted to identify whether there is a relation between chest pain and heart
disease cases in the United States by analyzing a heart disease dataset from IEEE DataPort. The test results
and analysis show that males in the United States are most likely to develop heart disease with
symptoms like chest pain, dizziness, shortness of breath, fatigue, and nausea. The tests also identify a
weak correlation of 0.5, which shows that people of all ages, including teens, can face
heart disease and that its prevalence increases with age. Furthermore, the tests indicate that 90 percent of the
participants facing severe chest pain suffer from heart disease, that the majority of the
identified heart disease cases are in males, and that only 10 percent of participants are identified as healthy.
The evaluated p-values fall below the statistical threshold of 0.05, which indicates that factors like
sex, exercise angina, cholesterol, oldpeak, ST_Slope, obesity, and blood sugar play a significant role in the
onset of cardiovascular disease. We tested the dataset with a prediction model built on logistic
regression and observed an accuracy of 85.12 percent.
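The chi-square statistic for a 2x2 symptom-by-disease table is computed from observed versus expected counts under independence. The counts below are hypothetical, not from the IEEE DataPort dataset:

```python
def chi_square(table):
    """Pearson chi-square statistic for a contingency table of observed counts,
    e.g. [[a, b], [c, d]] for a 2x2 table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_totals[i] * col_totals[j] / total   # expected count
            stat += (obs - exp) ** 2 / exp
    return stat

# Hypothetical counts: rows = chest pain yes/no, cols = heart disease yes/no.
table = [[90, 10],
         [30, 70]]
stat = chi_square(table)
print(stat, stat > 3.841)  # 3.841 is the 0.05 critical value for 1 df
```

A statistic above the critical value (equivalently, p < 0.05) rejects independence, i.e. it supports an association between the symptom and the disease.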
Hypertension is one of the major diseases affecting Spanish speakers worldwide (Spanish being among the most widely spoken languages, according to the World Economic Forum). It is gratifying to make this document public and to have been part of the team; hopefully the implementations serve many, and the greatest possible number of people are cured. Donations to this group effort are welcome, and the authors hope the paper is shared widely.
Beaglebone Black Webcam Server For SecurityIJTET Journal
A web server with security features, based on the BeagleBone Black (ARM Cortex-A8 processor) and the Linux operating
system, is designed and implemented. In this project the server side consists of a BeagleBone Black running the Ångström OS, interfaced
with a webcam. The client can access the web server with proper authentication. The web server displays web pages
named home, video, upload, settings, and about. The home page describes the functions of the web pages. The video page
displays the videos saved on the server, which the client can view or download. The upload page is used by the client
to upload files to the server. The settings page is used to change the username, password, and date if needed. The about page provides a description of the project.
Biometrics Authentication Using Raspberry PiIJTET Journal
Biometric authentication is one of the most popular and accurate technologies, and nowadays it is used in many real-time
applications. However, recognizing fingerprints on Linux-based embedded computers such as the Raspberry Pi is still a very complex problem.
This entire work was done on the Raspberry Pi: database creation and management using PostgreSQL, web page creation
using PHP, and fingerprint reader access, authentication, and recognition using Python. This paper discusses a
standardized authentication model capable of extracting an individual's fingerprint and storing it in the database, then
matching the final fingerprint against the others present in the database (PostgreSQL) to demonstrate the capability of this model.
Conceal Traffic Pattern Discovery from Revealing Form of Ad Hoc NetworksIJTET Journal
A number of techniques based on packet encryption have been proposed to safeguard
communication in MANETs. STARS works on the statistical characteristics of captured raw traffic and
discovers the relationships of source-to-destination communication. To forestall a STARS attack, a
source-hiding technique is introduced. The scheme aims to derive the source/destination probability distribution,
that is, the probability for each node in the captured traffic to be a message source/destination, and
also the end-to-end link probability distribution, that is, the probability for each pair of nodes to be
an end-to-end communication pair. It hence constructs point-to-point traffic and then derives the
end-to-end traffic with a set of traffic filtering rules; thus actual traffic is protected against disclosure attacks.
Through this protective mechanism, traffic efficiency increased by 95% relative to attacked traffic. As a further
enhancement to avoid overall attacks, the second shortest path is chosen.
Node Failure Prevention by Using Energy Efficient Routing In Wireless Sensor ...IJTET Journal
The most important issue to be solved in designing a data transmission algorithm for
wireless ad hoc networks is how to save node energy while meeting the requirements of applications and
users, because ad hoc nodes are battery-limited. While satisfying the energy-saving requirement, it is
also necessary to achieve quality of service: in case of emergency traffic, it is necessary to deliver the
data on time. To achieve this, a power-efficient, energy-aware routing protocol for wireless ad hoc
networks is proposed that saves energy by efficiently choosing an energy-efficient path in the routing
process. When a source finds routes to a destination, it calculates a value α for every route. The value α is based
on the largest minimum residual energy of the path and the hop count of the path. If a route has a higher α,
that path is chosen for routing the data; α is higher when the minimum residual energy of the path is
larger and the hop count is lower. Once the path is chosen, data is transferred along it. To further
increase energy efficiency, the transmission power of the nodes is also adjusted based on the location
of their neighbours: if the neighbours of a node are closely placed, the transmission range of the node is
reduced, so the node only needs enough transmission power to reach the neighbours within that range.
As a result, the transmission power of the node is cut back, which subsequently reduces its energy
consumption. The proposed work is simulated in Network Simulator (NS-2); the existing AODV and
Max-Min energy routing protocols are also simulated in NS-2 for performance comparison on packet
delivery ratio, energy consumption, and end-to-end
delay.
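The route score α can be sketched as follows. The abstract does not give the exact combining function, so the linear trade-off and unit weights here are assumptions for illustration:

```python
def route_alpha(residual_energies, w_energy=1.0, w_hops=1.0):
    """Score a route by its bottleneck (minimum residual) energy and hop count:
    higher bottleneck energy and fewer hops yield a higher alpha. The linear
    weighting is assumed; the abstract only states the two ingredients."""
    bottleneck = min(residual_energies)
    hops = len(residual_energies) - 1   # links between consecutive nodes
    return w_energy * bottleneck - w_hops * hops

# Candidate routes as residual-energy profiles of their nodes (joules, hypothetical).
routes = [[9.0, 4.0, 8.0],        # bottleneck 4.0, 2 hops -> alpha 2.0
          [7.0, 6.5, 6.0, 7.5],   # bottleneck 6.0, 3 hops -> alpha 3.0
          [3.0, 9.5]]             # bottleneck 3.0, 1 hop  -> alpha 2.0
print(max(routes, key=route_alpha))  # picks the second route
```

Using the bottleneck rather than total energy is what steers traffic away from paths containing any nearly-depleted node.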
Prevention of Malicious Nodes and Attacks in Manets Using Trust worthy MethodIJTET Journal
In a MANET the first requirement is cooperative communication among nodes. Malicious nodes may cause security issues such as gray-hole and collaborative black-hole attacks. To resolve these attacks, a dynamic source routing mechanism referred to as the cooperative bait detection scheme (CBDS), which integrates the advantages of both proactive and reactive defence designs, is used. In black-hole attacks, a node transmits a malicious broadcast informing that it has the shortest path to the destination, with the goal of intercepting messages. In this case, a malicious (black-hole) node can attract all packets by using a forged Route Reply (RREP) packet to incorrectly claim a "fake" shortest route to the destination and then discard these packets without forwarding them. In gray-hole attacks, the malicious node is not initially recognized as such, since it turns malicious only at a later time, preventing a trust-based security solution from detecting its presence in the network; it then selectively discards/forwards the data packets that pass through it. The focus here is on detecting gray-hole/collaborative black-hole attacks using a dynamic source routing (DSR)-based technique.
Effective Pipeline Monitoring Technology in Wireless Sensor NetworksIJTET Journal
Wireless sensor nodes are a promising technology for three-dimensional applications and can
sense accurately both above ground and underground. In a solid underground monitoring system there are
challenges in propagating the signals. A sensor node moves through the underground
pipeline and sends data to a relay node placed above ground; if any relay node
fails, it stops sending data. The monitoring system is therefore specially
designed as a heterogeneous network: every high-power relay node covers at least two low-power relay
nodes. If any relay node fails in the network, the topology changes automatically
based on the heterogeneous structure, and a high-power relay node replaces the failed node and continues sending
the condition of the pipeline. The advantages are that the system is highly distributed, with improved packet delivery.
Raspberry Pi Based Client-Server Synchronization Using GPRSIJTET Journal
A low-cost Internet-based attendance record embedded system for students, which uses wireless technology to
transfer data between the client and server, is designed. The proposed system consists of a Raspberry Pi which acts as a
client and stores the details of the students in a database through a web-based user login system. When the user logs
into the database, the data is sent through GPRS to the server machine, which maintains the student records, and
the attendance is updated in the server database. The GPRS module provides bidirectional real-time data transfer
between the client and server. This system can be implemented to any real time application so as to retrieve information
from a data source of the client system and send a file to the remote server through GPRS. The main aim is to avoid the
limitations in Ethernet connection and design a low cost and efficient attendance record system where the data is
transferred in a secure way from the client database and updated in the server database using GPRS technology
ECG Steganography and Hash Function Based Privacy Protection of Patients Medi...IJTET Journal
Data hiding can conceal sensitive information within signals for covert communication. Most data hiding
techniques distort the signal in order to insert additional messages; the distortion is often small, but the irreversibility is
not admissible for some sensitive applications. In most applications, lossless data hiding is desired, so that both the
embedded data and the original host signal can be recovered. This project proposes an enhanced protection system for secret data
communication through encrypted data concealment in the ECG signals of the patient. The proposed encryption technique
encrypts the confidential data into unreadable form, enhancing the safety of the secret carrier information by
making the information inaccessible to any intruder; for this we use the twelve-square ciphering
technique. A hash function is used to authenticate the communication between the sender and the receiver.
To evaluate the effect of the proposed technique on the ECG wave, two distortion measures
are used: the percentage residual difference (PRD) and the wavelet-weighted PRD (WWPRD). It is shown that the proposed technique provides high security protection for patient data with low distortion.
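The PRD distortion measure has a standard closed form: the energy of the residual between the original and watermarked signals, relative to the energy of the original, expressed as a percentage. The sample values below are invented:

```python
import math

def prd(original, stego):
    """Percentage residual difference between the original ECG samples and the
    watermarked (stego) version: 100 * ||x - y|| / ||x||."""
    num = sum((x - y) ** 2 for x, y in zip(original, stego))
    den = sum(x ** 2 for x in original)
    return 100.0 * math.sqrt(num / den)

# Hypothetical ECG segment before/after embedding a few watermark bits.
x = [1.00, 1.20, 0.80, 1.50, 0.90]
y = [1.01, 1.19, 0.80, 1.52, 0.89]
print(round(prd(x, y), 3))  # small value -> low distortion
```

The wavelet-weighted variant applies the same ratio per wavelet subband with clinically motivated weights, so diagnostically important bands count more.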
An Efficient Decoding Algorithm for Concatenated Turbo-CRC Codes – IJTET Journal
In this paper, a hybrid turbo decoding algorithm is used in which the outer Cyclic Redundancy Check (CRC) code is used not for error detection, as usual, but for error correction and performance improvement. This algorithm effectively combines iterative decoding with Rate-Compatible Insertion Convolutional turbo decoding, where the CRC code and the turbo code are treated as an integrated whole in the decoding process. In addition, we propose an effective error-detection method based on the normalized Euclidean distance to compensate for the loss of the error-detection capability that would otherwise have been provided by the CRC code. Simulation results show that with the proposed approach a 0.5-2 dB performance gain can be achieved for code blocks with short information lengths.
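The compensating error-detection step can be pictured as mapping the decoded bits back to BPSK symbols and measuring how far the received soft values lie from them. The threshold and all names below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def normalized_euclidean_distance(received, decided_bits):
    """RMS distance between received soft values and the BPSK image
    (bit 0 -> +1, bit 1 -> -1) of the decoded block."""
    s = 1.0 - 2.0 * np.asarray(decided_bits, dtype=float)
    r = np.asarray(received, dtype=float)
    return float(np.sqrt(np.mean((r - s) ** 2)))

def block_in_error(received, decided_bits, threshold=0.5):
    """Flag a decoded block as erroneous when the distance is too large
    (the threshold here is an assumed placeholder)."""
    return normalized_euclidean_distance(received, decided_bits) > threshold
```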
Improved Trans-Z-source Inverter for Automobile Application – IJTET Journal
In this paper a new topology is proposed in which the conventional voltage-source/current-source inverter used in automobile applications is replaced by an improved trans-Z-source inverter. The improved trans-Z-source inverter has a high boost inversion capability and continuous input current. The new inverter can also suppress the resonant current at start-up, which could otherwise cause permanent damage to the device. The improved trans-Z-source inverter normally requires a coupled inductor; here a transformer is used instead, and with a sufficient turns ratio its size can be reduced. The turns ratio of the transformer determines the input voltage of the inverter. The operating principle, a comparison with conventional inverters, simulation results for automobile operation, THD analysis, and a hardware implementation using the ATmega328P are included.
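For orientation, the boost factor usually quoted for a trans-Z-source network is B = 1 / (1 - (1 + n) * D_sh), with n the transformer turns ratio and D_sh the shoot-through duty ratio; the improved topology of this paper may modify the expression, so treat the helper below as a textbook sketch only:

```python
def trans_z_boost_factor(d_shoot_through, turns_ratio):
    """Ideal trans-Z-source boost factor B = 1/(1 - (1+n)*D_sh)."""
    denom = 1.0 - (1.0 + turns_ratio) * d_shoot_through
    if denom <= 0.0:
        raise ValueError("operating point outside the valid boost region")
    return 1.0 / denom
```

For example, a turns ratio of 1.5 with a shoot-through duty of 0.2 gives B = 2.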
Wind Energy Conversion System Using PMSG with T-Source Three Phase Matrix Con... – IJTET Journal
This paper presents an analysis of a PMSG wind power system using a T-source three-phase matrix converter. A PMSG with a T-source three-phase matrix converter has the advantage that it can provide any desired AC output voltage regardless of the DC input by regulating the shoot-through time. In this control system the T-source capacitor voltage can be kept stable under variations in the shoot-through time, allowing maximum power to be delivered from the wind turbine. In addition, as a new feature, the converter employs a safe-commutation strategy to maintain a continuous current flow, which eliminates voltage spikes on the switches without the need for a snubber circuit. With the matrix converter, the need for a rectifier circuit and for passive energy-storage components is reduced. A MATLAB/Simulink model of the overall system is developed, theoretical output-load-voltage calculations for the wind energy conversion system are made, and the feasibility of the new topology is verified: the converter produces the required output voltage and current. The proposed method offers greater efficiency and lower cost.
Comprehensive Path Quality Measurement in Wireless Sensor Networks – IJTET Journal
A wireless sensor network mostly relies on multi-hop transmission to deliver a data packet. It is therefore essential to measure the quality of multi-hop paths, and such information should be utilized in designing efficient routing strategies. Existing metrics such as ETF and ETX mainly quantify the link performance between nodes while overlooking the forwarding capability inside the sensor nodes. We propose QoF (Quality of Forwarding), a new metric that explores the performance in the "gray zone" inside a node, left unattended in previous studies. By combining the QoF measurements within a node and over a link, we are able to comprehensively measure the quality of an entire path and use it in designing efficient multi-hop routing protocols. We implement QoF and build a modified Collection Tree Protocol.
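One simple way to picture combining per-link quality with per-node forwarding quality is a product over the hops of a path. This is an illustrative model of the idea only, not the exact QoF formula from the paper:

```python
def path_quality(link_qualities, node_forwarding_ratios):
    """End-to-end quality as the product of each hop's link delivery
    ratio and the forwarding ratio inside the relaying node."""
    q = 1.0
    for link, node in zip(link_qualities, node_forwarding_ratios):
        q *= link * node
    return q
```

A path whose links look good (e.g. 0.9 and 0.8) but whose relay drops half the packets internally (0.5) scores only 0.36, exposing the in-node "gray zone" that link-only metrics miss.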
Optimizing Data Confidentiality using Integrated Multi Query Services – IJTET Journal
Query services have grown enormously over the past few years, and data owners increasingly outsource data management to cloud service providers that offer query services to clients. Because the cloud service provider may behave dishonestly, the data owner needs both data confidentiality and query privacy to be guaranteed; at the same time, enhancing confidentiality must not compromise query-processing performance, since it is pointless to provide slow query services as the price of security and privacy assurance. We propose the random space perturbation (RASP) data perturbation method to provide secure kNN (k-nearest-neighbor) and range query services over data protected in the cloud, together with a Frequency Structured R-Tree (FSR-Tree) for efficient range queries. Our schemes enhance data confidentiality without compromising FSR-Tree query-processing performance, which also improves the user experience.
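The core idea of random space perturbation can be sketched as extending each data point with fresh noise and a constant, then multiplying by a secret invertible matrix, so that raw coordinates are hidden while query conditions can still be transformed. The matrix size and noise model below are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

def rasp_perturb(points, seed=None):
    """Perturb an (N, d) array: each row x becomes A @ (x, v, 1) with
    per-row random noise v and a secret invertible matrix A."""
    rng = np.random.default_rng(seed)
    n, d = points.shape
    A = rng.normal(size=(d + 2, d + 2))
    while abs(np.linalg.det(A)) < 1e-6:  # retry until clearly invertible
        A = rng.normal(size=(d + 2, d + 2))
    ext = np.hstack([points, rng.normal(size=(n, 1)), np.ones((n, 1))])
    return ext @ A.T, A  # perturbed data and the secret key A
```

The owner, holding A, can undo the transform; an observer sees only the scrambled coordinates.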
Foliage Measurement Using Image Processing Techniques – IJTET Journal
Automatic detection of fruit and leaf diseases is essential so that the symptoms of disease can be detected as early as they appear during the growing stage. This system helps to detect diseases on fruit during farming, right from planting, and to easily monitor diseases of grape leaves and apple fruit. Using this system we can avoid the economic losses caused by various diseases in agricultural production. K-means clustering is used for segmentation. Features are extracted from the segmented image, and an artificial neural network is trained on the image database to classify samples into the respective disease categories. The experimental results show which type of disease has affected the fruit or leaf.
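The segmentation step can be illustrated with a minimal Lloyd's k-means over pixel colour vectors; a real pipeline would run it on the enhanced leaf or fruit image and pick out the diseased cluster by colour. Parameter names and defaults are ours:

```python
import numpy as np

def kmeans_segment(pixels, k=3, iters=20, seed=0):
    """Cluster (N, 3) colour vectors with plain Lloyd's k-means and
    return per-pixel labels plus the cluster centres."""
    pixels = np.asarray(pixels, dtype=float)
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(iters):
        # assign each pixel to its nearest centre
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each centre to the mean of its assigned pixels
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers
```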
Harmonic Mitigation Method for the DC-AC Converter in a Single Phase System – IJTET Journal
This project proposes a sine-wave modulation technique to achieve a low total harmonic distortion for a buck-boost converter connected to a polarity-changing inverter in a single-phase system. The proposed technique improves the harmonic content of the output. In addition, a proportional-resonant integral controller is used along with harmonic compensation techniques to eliminate the DC component in the system, and the performance of the proposed controller is analyzed when it is connected to the converter. The buck-boost converter is fed with a modulated sine-wave pulse-width modulation scheme to mitigate the low-order harmonics and control the output current, so that the output complies with the standard limits without the use of a low-pass filter.
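The total harmonic distortion figure used to judge such a scheme can be computed from an FFT taken over an integer number of fundamental periods. This generic helper (our naming) takes the FFT bin index of the fundamental:

```python
import numpy as np

def thd(signal, fundamental_bin):
    """THD = RMS of the harmonic bins divided by the fundamental
    magnitude, using a single-sided FFT of the waveform."""
    spec = np.abs(np.fft.rfft(signal))
    fundamental = spec[fundamental_bin]
    harmonics = spec[2 * fundamental_bin::fundamental_bin]
    return np.sqrt(np.sum(harmonics ** 2)) / fundamental
```

A pure sine gives THD near zero; adding a third harmonic at 10 % amplitude yields THD close to 0.1.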
Comparative Study on NDCT with Different Shell Supporting Structures – IJTET Journal
Natural draft cooling towers (NDCTs) are essential in modern thermal and nuclear power stations. They are hyperbolic shells of revolution in form and are supported on inclined columns. Several types of shell supporting structures, such as A-, V-, X-, and Y-shaped columns, are used for the construction of NDCTs. Wind loading on an NDCT governs the critical load cases and requires attention. In this paper a comparative study of reinforcement details has been carried out for NDCTs with X and Y shell supporting structures. For this purpose a 166 m cooling tower with X and Y supporting structures is analyzed and designed for wind (BS and IS code methods) and seismic loads using SAP2000.
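For context, the IS-code wind loading mentioned above starts from the design wind pressure p_z = 0.6 * V_z**2 (N/m²), with the design wind speed V_z = V_b * k1 * k2 * k3 in the IS 875 (Part 3) form; a small helper under that assumption:

```python
def design_wind_pressure(v_b, k1, k2, k3):
    """IS 875 (Part 3) style: Vz = Vb*k1*k2*k3 (m/s), pz = 0.6*Vz**2 (N/m^2).
    k1: probability factor, k2: terrain/height factor, k3: topography factor."""
    v_z = v_b * k1 * k2 * k3
    return 0.6 * v_z ** 2
```

For a basic wind speed of 50 m/s with all factors equal to 1, this gives 1500 N/m².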
Experimental Investigation of Lateral Pressure on Vertical Formwork Systems u... – IJTET Journal
The pressure distribution of fresh concrete poured into vertical formwork is dynamic and complex to model. Many researchers have worked on modeling the pressure distribution of concrete and have formulated empirical relationships involving factors such as formwork height, rate of pour, and the consistency class of the concrete. However, most current high-rise construction uses self-compacting concrete (SCC), a special concrete that employs not only mineral and chemical admixtures but also varied aggregate proportions; hence, modeling the pressure distribution of SCC in vertical formwork systems, as distinct from other concretes, is necessary. This research seeks to bridge the gap between the theoretical formulation of the pressure distribution and actual scaled models of vertical formwork systems. The pressure distribution of SCC will be determined in the laboratory using pressure sensors, then modeled and analyzed.
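A common conservative bound against which measured SCC pressures are compared is the full hydrostatic head p = rho * g * h; the trivial helper below (our naming) encodes it:

```python
def hydrostatic_formwork_pressure(density_kg_m3, depth_m, g=9.81):
    """Full hydrostatic lateral pressure p = rho*g*h in Pa, the
    conservative upper bound often assumed for SCC in formwork design."""
    return density_kg_m3 * g * depth_m
```

For example, concrete at 2400 kg/m³ poured 3 m deep gives about 70.6 kPa.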
A Five-Level Integrated AC-DC Converter – IJTET Journal
This paper presents the implementation of a new five-level integrated AC-DC converter with a high input power factor and reduced input current harmonics, complying with the IEC 1000-3-2 harmonic standards for electrical equipment. The proposed topology is a combination of a boost input power factor pre-regulator and a five-level DC-DC converter. The single-stage PFC (SSPFC) approach used in this topology is an alternative solution for low-power, cost-effective applications.
A Comprehensive Approach for Multi Biometric Recognition Using Sclera Vein an... – IJTET Journal
Fusion of sclera vein and finger vein patterns is a new biometric approach for uniquely identifying humans. First, the sclera vein pattern is identified and refined using image enhancement techniques, and a Y-shape feature extraction algorithm is used to obtain the Y-shaped patterns, which are later fused with the finger vein pattern. Second, the finger vein pattern is captured with a CCD camera by passing infrared light through the finger; the obtained image is enhanced, and a line-shape feature extraction algorithm is used to extract line patterns from it. Finally, the sclera vein pattern and the finger vein pattern are combined to form the final fused image, which can be used to uniquely identify a person. The proposed multimodal system produces accurate results because it combines two major traits of an individual, so it can be used in human identification and authentication systems.
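The final fusion step can be illustrated by combining two binary vein masks, for example by union (a pixel is vein if either trait shows one) or by averaging into a soft map. These fusion rules are illustrative assumptions, not the exact scheme of the paper:

```python
import numpy as np

def fuse_vein_patterns(sclera_mask, finger_mask, mode="union"):
    """Fuse two same-sized vein masks into one template."""
    a = np.asarray(sclera_mask, dtype=float)
    b = np.asarray(finger_mask, dtype=float)
    if mode == "union":
        return ((a > 0) | (b > 0)).astype(np.uint8)
    return (a + b) / 2.0  # soft average map
```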
Study of Eccentrically Braced Outrigger Frame under Seismic Excitation – IJTET Journal
Outrigger-braced structures are an efficient structural form consisting of a central core of braced frames, with horizontal cantilever "outrigger" trusses or girders connecting the core to the outer columns. When the structure is loaded horizontally, rotation of the core in the vertical plane is restrained by the outriggers through tension in the windward columns and compression in the leeward columns. The effective structural depth of the building is thereby greatly increased, augmenting its lateral stiffness and reducing the lateral deflections and the moments in the core; in effect, the outriggers join the columns to the core so that the structure behaves as a partly composite cantilever. In this study, an eccentrically braced system is provided in the outrigger frame and the size of the links is varied. Pushover analysis is carried out for the different link sizes using the computer program SAP2000 to understand their seismic performance. The ductile behavior of an eccentrically braced frame is highly desirable for structures subjected to strong ground motion, since such frames provide high stiffness, strength, ductility, and energy-dissipation capacity. Studies were conducted on the use of outrigger frames for tall steel buildings subjected to earthquake loads. The braces are designed not to buckle regardless of the severity of the lateral loading on the frame; thus the eccentrically braced frame ensures safety against collapse.