This document describes a risk classification method using an adaptive naive Bayes kernel machine model. The method partitions genetic data into blocks, such as gene sets or linkage disequilibrium blocks, and applies kernel machine regression within each block to allow for complex, nonlinear effects. Regularized selection of informative blocks is used to build an accurate yet parsimonious prediction model. Simulation studies show the method achieves high prediction accuracy and correctly selects predictive blocks. The method is applied to genetic risk prediction of type 1 diabetes using single nucleotide polymorphism data from known risk loci.
Functional Genomics Journal Club presentation on the following publication:
Kuzawa, C. W., Chugani, H. T., Grossman, L. I., Lipovich, L., Muzik, O., Hof, P. R., … Lange, N. (2014). Metabolic costs and evolutionary implications of human brain development. Proceedings of the National Academy of Sciences, 111(36), 13010–13015. https://doi.org/10.1073/pnas.1323099111
Discuss AI, machine learning, and the hype cycle
Discuss the knowledge-based classification of proteins
Discuss applications of AI/ML to drug discovery
GRAPHICAL MODEL AND CLUSTERING REGRESSION BASED METHODS FOR CAUSAL INTERACTION... (ijaia)
The early detection of breast cancer, a deadly disease that mostly affects women, is extremely complex because it requires various features of the cell type. Therefore, an efficient approach to diagnosing breast cancer at an early stage is to apply artificial intelligence, where machines are simulated with intelligence and programmed to think and act like a human. This allows machines to passively learn and find patterns, which can later be used to detect any new changes that may occur. In general, machine learning is particularly useful in the medical field, which depends on complex genomic measurements such as the microarray technique, and can increase the accuracy and precision of results. With this technology, doctors can diagnose patients with cancer quickly and apply the proper treatment in a timely manner. Therefore, the goal of this paper is to propose a robust breast cancer diagnostic system using complex genomic analysis via microarray technology. The system combines two machine learning methods, K-means clustering and linear regression.
MISSING DATA CLASSIFICATION OF CHRONIC KIDNEY DISEASE (IJDKP)
In this paper we propose an approach to chronic kidney disease classification in the presence of missing data. We implemented a classification system to address the challenge of detecting chronic kidney disease from medical test data. The approach compares three techniques that deal with missing data: deletion, mean imputation, and selection of the best features. Each technique is tested with the K-NN classifier, the Naïve Bayes classifier, decision trees, and support vector machines (SVM). The final accuracy of each system is determined using 10-fold cross-validation.
Large data sets are not available for some diseases such as brain tumors. This presentation and part 2 show how to find an actionable solution from a difficult cancer dataset.
Here are the tutorial slides (Methods and Applications of NLP in Medicine) from AIME 2020 (International Conference on Artificial Intelligence in Medicine), provided by Dr. Hua Xu, Dr. Yifan Peng, Dr. Yanshan Wang, and Dr. Rui Zhang. In this half-day tutorial, we introduced our methodological efforts in applying NLP to the clinical domain and showcased our real-world NLP applications in clinical practice and research across four institutions. We reviewed NLP techniques for solving clinical problems and facilitating clinical research, surveyed state-of-the-art clinical NLP tools, shared collaboration experience with clinicians as well as publicly available EHR data and medical resources, and concluded with the vast opportunities and challenges of clinical NLP. The tutorial provides an overview of clinical backgrounds and does not presume knowledge of medicine or health care.
Some (but not all) abstracts of good research published in good journals, covering a variety of ideas from innovative studies in which computer science is put to use — these are real-life applications.
Analysis of Imbalanced Classification Algorithms: A Perspective View (ijtsrd)
Classification of data — the process of classifying documents into predefined categories — has become an important research area. Unbalanced data sets, a problem often found in real-world applications, can have a seriously negative effect on the classification performance of machine learning algorithms. There have been many attempts at dealing with classification of unbalanced data sets. In this paper we present a brief review of existing solutions to the class-imbalance problem proposed at both the data and algorithmic levels. Although a common practice for handling imbalanced data is to rebalance them artificially by oversampling and/or under-sampling, some researchers have shown that modified support vector machines, rough-set-based minority-class-oriented rule learning methods, and cost-sensitive classifiers perform well on imbalanced data sets. We observed that current research on the imbalanced data problem is moving toward hybrid algorithms. Priyanka Singh | Prof. Avinash Sharma, "Analysis of Imbalanced Classification Algorithms: A Perspective View", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN 2456-6470, Volume 3, Issue 2, February 2019. URL: https://www.ijtsrd.com/papers/ijtsrd21574.pdf
Paper URL: https://www.ijtsrd.com/engineering/computer-engineering/21574/analysis-of-imbalanced-classification-algorithms-a-perspective-view/priyanka-singh
Simplified Knowledge Prediction: Application of Machine Learning in Real Life (Peea Bal Chakraborty)
Machine learning is the scientific study of algorithms and statistical models that machines use to perform a specific task based on patterns and inference rather than explicit instructions. This research and analysis aims to observe how precisely a machine can predict whether a patient suspected of breast cancer has a malignant or benign tumor. In this paper the classification of cancer type and prediction of risk levels is performed with various machine learning models and pictorially depicted with various visual analytics tools.
SCDT: FC-NNC-structured Complex Decision Technique for Gene Analysis Using Fu... (IJECEIAES)
In many disease-classification tasks an accurate gene analysis is needed, for which selection of the most informative genes is very important and requires a decision technique suited to a complex, ambiguous context. Traditional methods for selecting the most significant genes include statistical analyses such as the 2-sample t-test (2STT), entropy, and the signal-to-noise ratio (SNR). This paper evaluates gene selection and classification based on accurate gene selection using a structured complex decision technique (SCDT) and classifies it using a fuzzy-cluster-based nearest neighbor classifier (FC-NNC). The effectiveness of the proposed SCDT and FC-NNC is evaluated with leave-one-out cross-validation (LOOCV), along with sensitivity, specificity, precision, and F1-score, against four different classifiers — 1) radial basis function (RBF), 2) multi-layer perceptron (MLP), 3) feed-forward (FF) network, and 4) support vector machine (SVM) — on three different datasets: DLBCL, leukemia, and prostate tumor. The proposed SCDT and FC-NNC exhibit superior results and can be considered a more accurate decision mechanism.
Evaluation of Logistic Regression and Neural Network Model With Sensitivity A... (CSCJournals)
Logistic regression (LR) is a well-known classification method in the field of statistical learning. It allows probabilistic classification and shows promising results on several benchmark problems. Logistic regression enables us to investigate the relationship between a categorical outcome and a set of explanatory variables. Artificial neural networks (ANNs) are popularly used as universal non-linear inference models and have gained extensive popularity in recent years. Research activity is considerable and the literature is growing. The goal of this work is to compare the performance of logistic regression and neural network models on publicly available medical datasets. The logistic regression and neural network methods, with sensitivity analysis, are evaluated for classification effectiveness, and classification accuracy is used to measure the performance of both models. The experimental results confirm that the neural network model with sensitivity analysis gives more accurate results.
Machine Learning Based Approaches for Prediction of Parkinson's Disease (mlaij)
The prediction of Parkinson's disease is a most important and challenging problem for biomedical engineering researchers and doctors. Symptoms of the disease appear in middle and late middle age. In this paper, the minimum redundancy maximum relevance (mRMR) feature selection algorithm is used to select the most important features for predicting Parkinson's disease. It is observed that random forest with 20 features selected by mRMR provides an overall accuracy of 90.3%, precision of 90.2%, a Matthews correlation coefficient of 0.73, and an ROC value of 0.96, which is better than all other machine learning approaches compared, such as bagging, boosting, random forest, rotation forest, random subspace, support vector machine, multilayer perceptron, and decision tree based methods.
Signal Hill, a boutique investment advisory firm serving the M&A and private capital raising needs of growth companies, has developed a benchmark of 2016 M&A activity in the middle market, with a particular focus on the technology sector.
A short tutorial on R, aimed at beginners who want to do data mining, especially text data mining.
Related code and data can be found at the following link: http://textanalytics.in/wm/R%20tutorial%20(DATA2014).zip
Sentiment analysis using naive bayes classifier Dev Sahu
This presentation contains a short description of the naive Bayes classifier algorithm, a machine learning approach for sentiment detection and text classification.
Randomization approach in case-based reasoning: case study of mammography ... (Miled Basma Bentaiba)
A new way to amplify the case-based reasoning (CBR) knowledge base using randomization. This method allows knowledge amplification without deteriorating the CBR's resolution time, and it was applied to determine the severity of mammography masses for patients.
Vahid Taslimitehrani's Dissertation Defense: Friday, February 19, 2015.
Ph.D. Committee: Drs. Guozhu Dong (Advisor), T.K. Prasad, Amit Sheth, Keke Chen, and Jyotishman Pathak (Division of Health Informatics, Weill Cornell Medical College, Cornell University).
ABSTRACT:
Regression and classification techniques play an essential role in many data mining tasks and have broad applications. However, most of the state-of-the-art regression and classification techniques are often unable to adequately model the interactions among predictor variables in highly heterogeneous datasets. New techniques that can effectively model such complex and heterogeneous structures are needed to significantly improve prediction accuracy.
In this dissertation, we propose a novel type of accurate and interpretable regression and classification models, named Pattern Aided Regression (PXR) and Pattern Aided Classification (PXC), respectively. Both PXR and PXC rely on identifying regions in the data space where a given baseline model has large modeling errors, characterizing such regions using patterns, and learning specialized models for those regions. Each PXR/PXC model contains several pairs of contrast patterns and local models, where a local classifier is applied only to data instances matching its associated pattern. We also propose a class of classification and regression techniques called Contrast Pattern Aided Regression (CPXR) and Contrast Pattern Aided Classification (CPXC) to build accurate and interpretable PXR and PXC models.
We have conducted a set of comprehensive performance studies to evaluate the performance of CPXR and CPXC. The results show that CPXR and CPXC outperform state-of-the-art regression and classification algorithms, often by significant margins. The results also show that CPXR and CPXC are especially effective for heterogeneous and high dimensional datasets. Besides being new types of modeling, PXR and PXC models can also provide insights into data heterogeneity and diverse predictor-response relationships.
We have also adapted CPXC to handle classifying imbalanced datasets and introduced a new algorithm called Contrast Pattern Aided Classification for Imbalanced Datasets (CPXCim). In CPXCim, we applied a weighting method to boost minority instances as well as a new filtering method to prune patterns with imbalanced matching datasets.
Finally, we applied our techniques in three real applications, two in the healthcare domain and one in the soil mechanics domain. PXR and PXC models are significantly more accurate than other learning algorithms in those three applications.
Adjusting OpenMP PageRank: SHORT REPORT / NOTES (Subhajit Sahu)
For massive graphs that fit in RAM but not in GPU memory, it is possible to take advantage of a shared-memory system with multiple CPUs, each with multiple cores, to accelerate PageRank computation. If the NUMA architecture of the system is properly taken into account with good vertex partitioning, the speedup can be significant. To take steps in this direction, experiments were conducted to implement PageRank in OpenMP using two different approaches, uniform and hybrid. The uniform approach runs all primitives required for PageRank in OpenMP mode (with multiple threads). The hybrid approach, on the other hand, runs certain primitives (i.e., sumAt, multiply) in sequential mode.
Levelwise PageRank with Loop-Based Dead End Handling Strategy: SHORT REPORT ... (Subhajit Sahu)
Abstract — Levelwise PageRank is an alternative method of PageRank computation which decomposes the input graph into a directed acyclic block-graph of strongly connected components and processes them in topological order, one level at a time. This enables calculation of ranks in a distributed fashion without per-iteration communication, unlike the standard method where all vertices are processed in each iteration. It does, however, come with the precondition that the input graph contain no dead ends. Here, the native non-distributed performance of Levelwise PageRank was compared against Monolithic PageRank on a CPU as well as a GPU. To ensure a fair comparison, Monolithic PageRank was also performed on a graph where vertices were split by components. Results indicate that Levelwise PageRank is about as fast as Monolithic PageRank on the CPU, but quite a bit slower on the GPU. The slowdown on the GPU is likely caused by a large submission of small workloads and is expected to be a non-issue when the computation is performed on massive graphs.
06-04-2024 - NYC Tech Week - Discussion on Vector Databases, Unstructured Data and AI
https://www.meetup.com/unstructured-data-meetup-new-york/
This meetup is for people working in unstructured data. Speakers will present on related topics such as vector databases, LLMs, and managing data at scale. The intended audience includes roles like machine learning engineers, data scientists, data engineers, software engineers, and PMs. This meetup was formerly the Milvus Meetup, and is sponsored by Zilliz, maintainers of Milvus.
The Building Blocks of QuestDB, a Time Series Database (Javier Ramirez)
Talk Delivered at Valencia Codes Meetup 2024-06.
Traditionally, databases have treated timestamps just as another data type. However, when performing real-time analytics, timestamps should be first class citizens and we need rich time semantics to get the most out of our data. We also need to deal with ever growing datasets while keeping performant, which is as fun as it sounds.
It is no wonder time-series databases are now more popular than ever before. Join me in this session to learn about the internal architecture and building blocks of QuestDB, an open source time-series database designed for speed. We will also review some of the changes we have gone through over the past two years to deal with late and unordered data, non-blocking writes, read replicas, and faster batch ingestion.
Risk Classification with an Adaptive Naive Bayes Kernel Machine Model
1. Risk Classification with an Adaptive Naive Bayes Kernel Machine Model
Jessica Minnier1, Ming Yuan3, Jun Liu4, and Tianxi Cai2
1Department of Public Health & Preventive Medicine, Oregon Health & Science University
2Department of Biostatistics, Harvard School of Public Health
3Department of Statistics, University of Wisconsin-Madison
4Department of Statistics, Harvard University
June 30, 2015
ASA Oregon Chapter Meeting
2. Outline
1 Background and Motivation
2 Model and Methods
Kernels
Blockwise Kernel PCA Estimation
Regularized Selection of Informative Regions
Theoretical Results
3 Simulation Studies
4 Genetic Risk of Type I Diabetes
5 Conclusions
3-4. Background and Motivation
Adaptive Naive Bayes (Blockwise) Kernel Machine Classification
• Goal: genetic data → quantify disease risk, predict therapeutic efficacy, determine disease subtypes
• Goal: build an accurate, parsimonious prediction model
– reduce the cost of unnecessary marker measurements
– improve the prediction precision for future patients
– improve over the modest prediction precision obtained with clinical predictors and/or known risk alleles
• Complex diseases
– many alleles contribute to risk
– many distinct combinations of risk factors lead to disease
5-6. Background and Motivation
• Genome-wide association studies (GWAS)
– identifying SNPs associated with disease risk
– primary goal of testing
– accurate risk prediction remains difficult
• Common approach:
– select top-ranked SNPs based on large-scale testing
– construct a composite genetic score with the selected SNPs
– may not work well due to
false +/− errors in identifying predictive SNPs
over-fitting
using only a subset of the available SNPs
additive effects only
7. Background and Motivation
Recent progress in prediction with high dimensional data
• Regularized estimation: LASSO (Tibshirani, 1996); SCAD (Fan and Li, 2001); Adaptive LASSO (Zou, 2006)
• Machine learning: Support vector machines (Cristianini and Shawe-Taylor, 2000); least squares kernel machine regression (Liu, Lin, Ghosh, 2007); kernel logistic regression (Zhu and Hastie, 2005; Liu, Ghosh and Lin, 2008)
• Screening + regularized estimation: Sure independence screening (Fan and Lv, 2008; Fan and Song, 2009)
Global methods: may be unstable for large p, high correlation
8-9. Approach
Challenge:
• Prediction models based on univariate testing, additive models, global methods → low prediction accuracy, low AUC, missing heritability
• Non-linear effects? Testing for interactions → low power
Our approach [Minnier et al., 2015]:
• Blockwise method:
– leverage biological knowledge to build models at the gene-set level
– genes, gene-pathways, linkage disequilibrium blocks
• Kernel machine regression:
– allow for complex and nonlinear effects
– implicitly specify the underlying complex functional form of covariate effects via similarity measures (kernels) that define the distance between two sets of covariates
10. Kernel Methods: similar inputs to similar outputs
• transform data to feature space H with a non-linear map φ
• the "kernel trick" lets us use a similarity function K(·, ·) instead of φ
• K induces the feature space
(Figure credit: N. Takahashi's webpage)
11-12. Previous Methods
Blockwise methods
• Inference: gene-set testing
– gene burden tests
– Gene Set Enrichment Analysis (GSEA)
– SNP-set Sequence Kernel Association Test (SKAT, SKAT-O; Wu et al. 2010; Wu, Lee, et al. 2011)
Kernel machine methods
• Support vector machine (SVM) classification methods
• Inference
– KM SNP-set testing (Liu et al. 2007, 2008; SKAT methods)
– gene expression testing with a kernel Cox model (Li and Luan 2003)
13-16. Notations and Model Assumptions
• Data
– Response: $\mathbb{Y} = (Y_1, \ldots, Y_n)^T$
– Predictors: $M$ blocks of genomic regions; for $b = 1, \ldots, M$, $\mathbb{X}^{(b)} = (X^{(b)}_1, \ldots, X^{(b)}_n)^T_{\,n \times p_b}$
• Blockwise: partition the genome into gene-sets
– recombination hotspots, gene-pathways
• Model under the blockwise Naive Bayes (NB) assumption: $X^{(1)}, \ldots, X^{(M)} \mid Y$ independent $\Rightarrow$
$$\mathrm{logit}\{\mathrm{pr}(Y = 1 \mid X^{(1)}, \ldots, X^{(M)})\} = c + \sum_{b=1}^{M} \mathrm{logit}\{\mathrm{pr}(Y = 1 \mid X^{(b)})\}$$
– The NB assumption allows separate estimation by block and reduces overfitting
– Performs well for the zero-one loss $L(X) = I(\hat{Y}(X) \neq Y)$ [Domingos and Pazzani, 1997]
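To make the NB combination above concrete, here is a minimal Python sketch (ours, not from the slides) of combining block-level risk estimates into the overall logit score; the function names are illustrative, and the constant c is taken to be minus (M − 1) times the marginal log-odds, which is what the NB factorization implies.

```python
import numpy as np

def logit(p):
    """Log-odds transform."""
    return np.log(p / (1 - p))

def naive_bayes_score(block_probs, prevalence=0.5):
    """Combine block-level risk estimates under the blockwise NB assumption.

    block_probs : (n_subjects, M) array with pr(Y=1 | X^(b)) for each block b.
    prevalence  : marginal pr(Y=1); c = -(M-1)*logit(prevalence) removes the
                  extra copies of the marginal log-odds counted in each block.
    Returns c + sum_b logit{pr(Y=1 | X^(b))}.
    """
    M = block_probs.shape[1]
    c = -(M - 1) * logit(prevalence)
    return c + logit(block_probs).sum(axis=1)

# toy usage: 3 subjects, 2 blocks of hypothetical block-level risk estimates
probs = np.array([[0.9, 0.7],
                  [0.4, 0.5],
                  [0.2, 0.1]])
print(naive_bayes_score(probs, prevalence=0.3))
```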
17-20. Notations and Model Assumptions
• Within each region, the effect may be complex and interactive due to
– multiple causal variants
– un-typed causal variants in the presence of high LD
• Blockwise kernel machine regression:
$$\mathrm{logit}\{\mathrm{pr}(Y = 1 \mid X^{(b)})\} = a^{(b)} + h^{(b)}(X^{(b)}), \qquad h^{(b)}(X^{(b)}) = \sum_{l} \beta^{(b)}_l \psi^{(b)}_l(X^{(b)}) \in \mathcal{H}_{K^{(b)}}$$
– $\{\psi^{(b)}_l\} = \{\sqrt{\lambda^{(b)}_l}\, \phi^{(b)}_l\}$ is implicitly specified via a symmetric positive definite kernel $K^{(b)}(\cdot, \cdot)$.
– $K^{(b)}(X^{(b)}_i, X^{(b)}_j)$ defines the similarity between $X^{(b)}_i$ and $X^{(b)}_j$.
– $\mathcal{H}_{K^{(b)}}$, the functional space spanned by $K^{(b)}(\cdot, \cdot)$, is a reproducing kernel Hilbert space (RKHS).
21-22. Choices of Kernel Functions
• Linear kernel: $K(X_i, X_j) = \rho + X_i^T X_j$, so that
$$h(X) = \sum_{k=1}^{p} \beta_k X_k$$
Fitting logistic regression with the linear kernel $\Leftrightarrow$ logistic ridge regression.
• IBS kernel: $K(X_i, X_j) = \sum_{k=1}^{p} (2 - |X_{ik} - X_{jk}|)$,
powerful in detecting non-linear effects with SNP data [Wu et al., 2010]
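As a small illustration of the two kernels above (ours, not from the paper), the sketch below computes linear and IBS kernel matrices for one block of SNP genotypes coded as minor-allele counts 0/1/2; the function names are ours.

```python
import numpy as np

def linear_kernel(X, rho=1.0):
    """Linear kernel K(x_i, x_j) = rho + x_i^T x_j for an n x p genotype block."""
    return rho + X @ X.T

def ibs_kernel(X):
    """Identity-by-state kernel for SNPs coded 0/1/2:
    K(x_i, x_j) = sum_k (2 - |x_ik - x_jk|)."""
    n, p = X.shape
    diff = np.zeros((n, n))
    for k in range(p):
        # pairwise |x_ik - x_jk| for SNP k, accumulated over SNPs
        diff += np.abs(X[:, [k]] - X[:, k])
    return 2 * p - diff

# toy usage: 4 subjects, 3 SNPs in one block
X = np.array([[0, 1, 2],
              [0, 0, 1],
              [2, 1, 0],
              [1, 1, 1]], dtype=float)
print(linear_kernel(X))
print(ibs_kernel(X))
```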
23. Estimation of h: Kernel PCA
• Primal form: $h = \sum_l \beta_l \psi_l = \sum_l \beta_l \sqrt{\lambda_l}\, \phi_l$
• Kernel PCA approximation (truncated at a reduced rank, written here as $\ell_0$):
$$\mathbb{K} = [K(X_i, X_j)]_{1 \le i, j \le n} = \sum_{l=1}^{n} \lambda_l \phi_l \phi_l^T \approx \sum_{l=1}^{\ell_0} \lambda_l \phi_l \phi_l^T = \Psi \Psi^T, \qquad \Psi = [\lambda_1^{1/2} \phi_1, \ldots, \lambda_{\ell_0}^{1/2} \phi_{\ell_0}]_{n \times \ell_0}$$
Scholkopf et al. [1999]; Williams and Seeger [2000]; Braun et al. [2008]; Zhang et al. [2010]
• $\hat{h}^{(b)}(\mathbb{X}^{(b)}) = \Psi \hat{\beta}$
• Obtain $(\hat{a}, \hat{\beta})$ as the minimizer of the ridge logistic objective function $L(\mathbb{Y}, a, \Psi\beta) + \tau \|\beta\|^2$
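Below is a minimal sketch of the per-block kernel PCA ridge-logistic fit described on this slide, assuming the block's kernel matrix has already been computed; out-of-sample projection of new subjects and tuning of the truncation level and penalty are omitted, the helper names are ours, and scikit-learn's LogisticRegression is used as a stand-in for the ridge logistic solver.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def kernel_pca_features(K, n_components):
    """Top eigenpairs of the kernel matrix K, returned as Psi = [sqrt(lam_l) * phi_l]."""
    eigvals, eigvecs = np.linalg.eigh(K)              # ascending eigenvalues
    idx = np.argsort(eigvals)[::-1][:n_components]    # keep the largest ones
    lam, phi = eigvals[idx], eigvecs[:, idx]
    lam = np.clip(lam, 0.0, None)                     # guard against tiny negative eigenvalues
    return phi * np.sqrt(lam)                         # n x n_components feature matrix Psi

def fit_block(K, y, n_components=10, tau=1.0):
    """Ridge-penalized logistic regression on kernel PCA features for one block.
    Returns the fitted in-sample block score h_hat(X^(b)) = Psi @ beta_hat."""
    Psi = kernel_pca_features(K, n_components)
    clf = LogisticRegression(penalty="l2", C=1.0 / tau, max_iter=1000)
    clf.fit(Psi, y)                                   # intercept plays the role of a^(b)
    return Psi @ clf.coef_.ravel()
```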
24-25. Regularized Selection of Informative Regions
• For $b = 1, \ldots, M$, perform kernel PCA regression and obtain $\hat{h}^{(b)}$:
$$\mathrm{logit}\{\mathrm{pr}(Y = 1 \mid X^{(b)})\} = a^{(b)} + h^{(b)}(X^{(b)})$$
• Classify a future subject with $X = \{X^{(b)}, b = 1, \ldots, M\}$ based on $\sum_{b=1}^{M} \hat{h}^{(b)}(X^{(b)}) \ge c$
• Final prediction rule with weighted block effects
– Some regions may not be predictive of the outcome due to false discovery
– Inclusion of all regions for prediction may lead to reduced accuracy
– Regularized estimation of the block effects using LASSO, with pseudo-data $\widehat{\mathbb{H}}$ estimated by cross-validation: maximize
$$\sum_{k=1}^{K} \left[ \mathbb{Y}^T \log g(b + \widehat{\mathbb{H}}\gamma) + (1 - \mathbb{Y})^T \log\{1 - g(b + \widehat{\mathbb{H}}\gamma)\} \right] - \tau_2 \|\gamma\|_1,$$
and classify based on $\sum_{b=1}^{M} \hat{\gamma}_b \hat{h}^{(b)}(X^{(b)}) \ge c$
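And a corresponding sketch of the block-selection step: an L1-penalized logistic regression over the matrix of (ideally cross-validated) block scores, with blocks whose weights shrink to zero dropped from the final rule. The function names and the scikit-learn solver choice are our assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def select_blocks(H_hat, y, tau2=1.0):
    """LASSO-weighted combination of block scores.

    H_hat : n x M matrix of (cross-validated) block-level scores h_hat^(b)(X_i^(b)).
    y     : binary outcome vector.
    Returns gamma_hat; blocks with gamma_hat[b] == 0 are dropped from the final rule."""
    lasso_logit = LogisticRegression(penalty="l1", C=1.0 / tau2,
                                     solver="liblinear", max_iter=1000)
    lasso_logit.fit(H_hat, y)
    return lasso_logit.coef_.ravel()

def risk_score(H_new, gamma_hat):
    """Final weighted score sum_b gamma_b * h_hat^(b)(X^(b)) for new subjects;
    classify as high risk when the score exceeds a chosen threshold c."""
    return H_new @ gamma_hat
```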
26. Theoretical Results
• Consistency of $\hat{h}^{(b)}(x)$:
– $\hat{h}^{(b)}(x) \to h^{(b)}(x)$ at the $\sqrt{n}$ rate for finite-dimensional $\mathcal{H}_K$
– Relies on convergence of the sample eigenvalues and eigenvectors from kernel PCA to the true eigensystem of $\mathcal{H}_K$: $\widehat{\Psi} \to \Psi = \{\psi^{(b)}_1, \ldots, \psi^{(b)}_{\ell_0}\}$
• Oracle property of $\hat{\gamma}$:
– Gene-set selection consistency: $P(\widehat{\mathcal{A}} = \mathcal{A}) \to 1$, where $\widehat{\mathcal{A}} = \{b : \hat{h}^{(b)}(x) \neq 0\}$ is the selected set and $\mathcal{A} = \{b : h^{(b)}(x) \neq 0\}$ is the set of truly informative blocks
27. Simulation Studies for NBKM
• SNP data sampled from gene-sets in a GWAS dataset (from type I
diabetes study, Affy 500k)
• 350 regions, 9256 SNPs
• Only the first 4 regions are associated with the outcome
• the joint effects of the SNPs in each of these regions set as
– linear for the first two regions and non-linear for the other 2 regions
– linear for all 4 regions
– nonlinear for all 4 regions
28. Prediction Accuracy
Simulations: nt = 1000, nv = 500, # of genes = 350, total # of SNPs = 9256
29. Gene-set selection
Simulations: nt = 1000, nv = 500, # of genes = 350, total # of SNPs = 9256
30-31. Genetic Risk of Type I Diabetes
• Autoimmune disease, usually diagnosed in childhood
• T1D
– 75 SNPs have been identified as T1D risk alleles (National Human Genome Research Institute, Hindorff et al. [2009])
– 91 genes that either contain these SNPs or flank the SNPs on either side on the chromosome
• T1D + other autoimmune diseases (rheumatoid arthritis, celiac disease, Crohn's disease, lupus, inflammatory bowel disease)
– 365 SNPs have been identified as T1D + other autoimmune disease risk alleles (NHGRI)
– 375 genes that either contain these SNPs or flank the SNPs on either side on the chromosome
32. Genetic Risk of Type I Diabetes
GWAS data collected by the Wellcome Trust Case Control Consortium (WTCCC)
• 2000 cases, 3000 controls of European descent from Great Britain
• Segment the genome into gene-sets: a gene plus a flanking region of 20KB on either side of the gene
• The WTCCC data includes
– 350 of the gene-sets listed in the NHGRI catalog
– covering 9,256 SNPs in the WTCCC data
34. Conclusions
• Kernel machine regression provides a useful tool for incorporating non-linear, complex effects
• Blockwise KM regression achieves a nice balance between capturing complex effects and avoiding overfitting
• The IBS kernel performs well under both linear and non-linear settings
Remarks
• May use SKAT to screen blocks in an initial stage
• Can be extended to data with other covariates, such as clinical variables
• Possible extensions might incorporate more complex block structure, different types of outcomes, interactions, and beyond!
36. References I
M. Braun, J. Buhmann, and K. Müller. On relevant dimensions in kernel feature spaces. The Journal of Machine Learning Research, 9:1875–1908, 2008.
P. Domingos and M. Pazzani. On the optimality of the simple Bayesian classifier under zero-one loss. Machine Learning, 29(2):103–130, 1997.
L. Hindorff, P. Sethupathy, H. Junkins, E. Ramos, J. Mehta, F. Collins, and T. Manolio. Potential etiologic and functional implications of genome-wide association loci for human diseases and traits. Proceedings of the National Academy of Sciences, 106(23):9362, 2009.
J. Minnier, M. Yuan, J. S. Liu, and T. Cai. Risk classification with an adaptive naive Bayes kernel machine model. Journal of the American Statistical Association, 110(509):393–404, 2015.
B. Schölkopf, S. Mika, C. Burges, P. Knirsch, K. Müller, G. Rätsch, and A. Smola. Input space versus feature space in kernel-based methods. IEEE Transactions on Neural Networks, 10(5):1000–1017, 1999.
C. Williams and M. Seeger. The effect of the input density distribution on kernel-based classifiers. In Proceedings of the 17th International Conference on Machine Learning, 2000.
R. Zhang, W. Wang, and Y. Ma. Approximations of the standard principal components analysis and kernel PCA. Expert Systems with Applications, 37(9):6531–6537, 2010.