Presentation given by Rich Jacques, talking through the use of the package "mice", which eases the pain of performing multiple imputation for analyses of incomplete datasets.
2. Multiple Imputation (MI)
MI is a statistical technique for handling missing data.
The key concept of MI is to use the distribution of the observed data
to estimate a set of plausible values for the missing data.
Random components are incorporated into these estimated values to
reflect their uncertainty.
Multiple datasets are created and then analyzed individually but
identically to obtain a set of parameter estimates.
These estimates are then combined to obtain a single, overall set of
parameter estimates.
IR White et al. Multiple imputation using chained equations: Issues and
guidance for practice. Statist. Med. 2011; 30(4): 377-399.
3. Example Data
NHANES (National Health and Nutrition Examination Survey)
Four variables: age (age group), bmi (body mass index), hyp
(hypertension status), chl (cholesterol level)
> library(mice)
> nhanes[1:5,]
age bmi hyp chl
1 1 NA NA NA
2 2 22.7 1 187
3 1 NA 1 187
4 3 NA NA NA
5 1 20.4 1 113
4. Inspecting Missing Data
> md.pattern(nhanes)
age hyp bmi chl
13 1 1 1 1 0
1 1 1 0 1 1
3 1 1 1 0 1
1 1 0 0 1 2
7 1 0 0 0 3
0 8 9 10 27
A matrix in which each row corresponds to a missing data pattern
(1 = observed, 0 = missing). The left-hand column counts the rows showing
each pattern, the right-hand column counts the variables missing in each
pattern, and the bottom row gives the number of missing values per
variable (27 missing cells in total).
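The same per-column counts can be cross-checked directly in base R (a
minimal sketch; nhanes ships with the mice package):

library(mice)           # provides the nhanes example data
colSums(is.na(nhanes))  # missing values per column: age 0, bmi 9, hyp 8, chl 10
colMeans(is.na(nhanes)) # fraction of missing values per column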
5. Multiple Imputation: Main Steps
Imputation steps (R function applied -> class of resulting R object):

Incomplete Data (data frame)
  -> mice() -> Imputed Data (mids)
  -> with() -> Analysis Results (mira)
  -> pool() -> Pooled Results (mipo)
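Read as code, the whole pipeline is only a few lines (a minimal sketch
using the nhanes data and the lm() analysis shown on the later slides):

library(mice)
imp <- mice(nhanes, m = 5, seed = 1, printFlag = FALSE) # impute  -> mids
fit <- with(imp, lm(chl ~ age + bmi))                   # analyse -> mira
est <- pool(fit)                                        # combine -> mipo
summary(est)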
6. Generating multiple imputations: mice()
> mice(data,m,method,predictorMatrix)
data: A data frame or matrix containing the incomplete data. Missing
values coded as NA.
m: Number of imputations (default = 5)
method: A single string, or a vector of strings, specifying the
imputation method used for each column in the data.
predictorMatrix: A square matrix specifying the set of predictors to be
used for each column.
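A common pattern is a "dry run" with maxit = 0 to obtain the default
predictorMatrix, which can then be edited before imputing. A sketch
(dropping age as a predictor of chl is purely illustrative):

ini <- mice(nhanes, maxit = 0) # no imputations yet, just the default set-up
pred <- ini$predictorMatrix
pred["chl", "age"] <- 0        # do not use age when imputing chl
imp <- mice(nhanes, predictorMatrix = pred, seed = 1)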
7. Built-in imputation methods
Method Description Scale Type
pmm Predictive mean matching numeric
norm Bayesian linear regression numeric
norm.nob Linear regression, non-Bayesian numeric
mean Unconditional mean imputation numeric
2l.norm Two-level linear model numeric
logreg Logistic regression factor, 2 levels
polyreg Polytomous (unordered) regression factor, >2 levels
lda Linear discriminant analysis factor
sample Random sample from observed data any
8. Example
> nhanes_mice<-mice(nhanes,m=5,method=c("","norm","pmm","mean"))
> nhanes_mice
Multiply imputed data set
Call:
mice(data = nhanes, m = 5, method = c("", "norm", "pmm", "mean"))
Number of multiple imputations: 5
Missing cells per column:
age bmi hyp chl
0 9 8 10
Imputation methods:
age bmi hyp chl
"" "norm" "pmm" "mean"
VisitSequence:
bmi hyp chl
2 3 4
PredictorMatrix:
age bmi hyp chl
age 0 0 0 0
bmi 1 0 1 1
hyp 1 1 0 1
chl 1 1 1 0
Random generator seed value: NA
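Before fitting any models, it is worth inspecting what was actually
imputed; standard mice accessors make this easy (a minimal sketch):

nhanes_mice$imp$bmi                # the m = 5 imputed values for each missing bmi entry
complete(nhanes_mice, 1)           # the data with the first set of imputations filled in
stripplot(nhanes_mice, chl ~ .imp) # observed vs imputed chl by imputation number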
10. Data Analysis
with.mids() is used to perform the desired analysis for each imputed copy
of the data.
> fit<-with(nhanes_mice,lm(chl~age+bmi))
> summary(fit)
## summary of imputation 1 :
Call:
lm(formula = chl ~ age + bmi)
Residuals:
Min 1Q Median 3Q Max
-43.225 -10.881 -2.835 9.934 65.137
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -22.546 48.078 -0.469 0.643721
age 31.660 7.436 4.258 0.000322 ***
bmi 6.004 1.496 4.012 0.000585 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 25.43 on 22 degrees of freedom
Multiple R-squared: 0.5028, Adjusted R-squared: 0.4576
F-statistic: 11.12 on 2 and 22 DF, p-value: 0.0004593
## summary of imputation 2 :
11. Pooling Results
pool() takes the results from with.mids() and combines the separate
estimates and standard errors from each of the m imputed data sets to
give an overall estimate and standard error.
> est<-pool(fit)
> summary(est)
est se t df Pr(>|t|) lo 95
(Intercept) -2.063050 56.538439 -0.03648934 12.54558 0.971466388 -124.658189
age 28.054106 8.827146 3.17816263 11.35829 0.008466749 8.700200
bmi 5.404212 1.736748 3.11168532 13.92380 0.007695105 1.677345
hi 95 nmis
(Intercept) 120.532089 NA
age 47.408013 0
bmi 9.131079 9
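The combination applied here is Rubin's rules: given the m estimates
$\hat{Q}_i$ with within-imputation variances $U_i$,

$$\bar{Q} = \frac{1}{m}\sum_{i=1}^{m}\hat{Q}_i, \qquad
T = \bar{U} + \Bigl(1 + \frac{1}{m}\Bigr)B,$$

where $\bar{U}$ is the average within-imputation variance and
$B = \frac{1}{m-1}\sum_{i=1}^{m}(\hat{Q}_i - \bar{Q})^2$ is the
between-imputation variance; the pooled standard error is $\sqrt{T}$.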
12. Models
pool() can be used with any object having both coef() and vcov()
methods. The function will abort if an appropriate method is not
found.
pool() can also be used with results obtained with lme() and lmer(),
but only with the fixed part of the model.
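Any glm() fit satisfies the coef()/vcov() requirement, so a logistic
regression pools in exactly the same way. A sketch using nhanes2 (the
factor-coded version of the example data shipped with mice) and the
default imputation methods:

imp2 <- mice(nhanes2, m = 5, seed = 123, printFlag = FALSE) # hyp is a factor here
fit2 <- with(imp2, glm(hyp ~ age + bmi, family = binomial))
summary(pool(fit2))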
13. References
S van Buuren, K Groothuis-Oudshoorn. mice: Multivariate Imputation
by Chained Equations in R. Journal of Statistical Software 2011;
45(3): 1-67.
IR White, P Royston, AM Wood. Multiple imputation using chained
equations: Issues and guidance for practice. Statistics in Medicine
2011; 30(4): 377-399.