Mixed Effects Models - Empirical Logit
Week 10.1: Empirical Logit
• Finish Generalized LMERs
  • Clarification on Logit Models
  • Poisson Regression
• Empirical Logit
  • Low & High Probabilities
  • Empirical Logit
  • Implementation
• Lab
Clarification on Logit Models
• Dependent/outcome variable is simply a variable made
up of 0s and 1s
• We don’t need to apply any kind of transformation ourselves
• glmer(Recalled ~ 1 + StudyTime * Strategy +
(1|Subject) + (1|WordPair),
data=cuedrecall, family=binomial)
• family=binomial tells R to analyze this using the logit
link function—everything handled automatically
Clarification on Logit Models
• When we get our results, they will be in terms
of log odds/logits (because we used
family=binomial)
• We may want to use exp() to transform our
estimates into effects on the odds to make them
more interpretable
• Elaborative rehearsal = +2.29 log odds of recall
• exp(2.29) = Odds of recall 9.87 times greater
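• A quick sketch of this back-transformation in R (2.29 is the estimate from the example above; model.recall is a hypothetical fitted glmer object):

  exp(2.29)                   # ≈ 9.87: odds of recall 9.87 times greater
  # For a fitted model, back-transform all fixed effects at once:
  # exp(fixef(model.recall))  # odds ratio for each fixed effect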
Week 10.1: Empirical Logit
• Finish Generalized LMERs
  • Clarification on Logit Models
  • Poisson Regression
• Empirical Logit
  • Low & High Probabilities
  • Empirical Logit
  • Implementation
• Lab
Poisson Regression
• glmer() supports other non-normal
distributions, such as family=poisson
• Good when the DV is a frequency count
• Number of gestures made in a communicative task
• Number of traffic accidents
• Number of ideas brainstormed
• Number of doctor’s visits
• Different from both normal
& binomial distributions
• Lower bound at 0
• But, no upper bound
• This is a Poisson distribution!
[Figure: a Poisson distribution, with probability (0.0 to 0.4) plotted against count (−2 to 10)]
Poisson Regression
• Link function for a Poisson distribution is the log()
• Thus, we would also want to use exp()
• Effect of ASD on number of disfluent pauses: 0.27
• exp(0.27) = 1.31
• ASD increases frequency of pauses by 1.31 times

log(yi) = β0 + β1X1i + ei
(β0 = intercept/baseline; β1X1i = ASD effect; both sides can be any number)
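• A minimal sketch of fitting such a model in R (the data frame disfluencies and the DV Pauses are hypothetical names chosen to match the example):

  library(lme4)
  # Poisson GLMM: pause counts predicted by ASD, random intercept per family
  model.pauses <- glmer(Pauses ~ 1 + ASD + (1|Family),
                        data = disfluencies, family = poisson)
  exp(fixef(model.pauses))   # multiplicative effects on the expected count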
Poisson Models: Other Variants
• Offset term: Controls for differences in the
opportunities to observe the DV (e.g., time)
• e.g., one child observed for 15 minutes, another
for 14 minutes
• glmer(Disfluencies ~ 1 + ASD + (1|Family),
  offset=log(Time), family=poisson)
• With the log link, the offset is entered on the log scale, hence log(Time)
Poisson Models: Other Variants
• Zero-inflated Poisson: Sometimes, more 0s
than expected under a true Poisson distribution
• Often, when a separate process creates 0s
• # of traffic violations → 0 if you don’t have a car
• # of alcoholic drinks per week → 0 if you don’t drink
• Zero-inflated model:
Simultaneously models
the 0-creating process
as a logit and the rest
as a Poisson
• Use the zeroinfl() function from the pscl package
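• A minimal (non-mixed) sketch of a zero-inflated Poisson fit; the data frame drinks and its columns are hypothetical:

  library(pscl)
  # Left of | : Poisson count process; right of | : logit zero-inflation process
  model.zip <- zeroinfl(DrinksPerWeek ~ Age | Age,
                        data = drinks, dist = "poisson")
  summary(model.zip)

• For a mixed-effects version, the glmmTMB package (its ziformula argument) is one option.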
Week 10.1: Empirical Logit
• Finish Generalized LMERs
  • Clarification on Logit Models
  • Poisson Regression
• Empirical Logit
  • Low & High Probabilities
  • Empirical Logit
  • Implementation
• Lab
Source Confusion
• Remember our cued recall data from last week?
• A second categorical DV here is source confusions:
  • Study: VIKING—COLLEGE, SCOTCH—VODKA
  • Test: VIKING—vodka (“Who said it?”)
• Lots of theoretical interest in source memory—
memory for the context or source where you learned
something
• Odds are not the same thing as probabilities!
Source Confusion Model
• Let’s model what causes people to make source
confusions:
• model.Source <- lmer(SourceConfusion ~
Strategy*StudyTime.cen + (1|Subject) +
(1|WordPair), data=sourceconfusion)
• This code contains two mistakes—can we fix them?
Source Confusion Model
• Let’s model what causes people to make source
confusions:
• model.Source <- glmer(SourceConfusion ~
Strategy*StudyTime.cen + (1|Subject) +
(1|WordPair), data=sourceconfusion,
family=binomial)
• Failed to converge, with very questionable results
• exp(20.08) = source errors 525,941,212 times more likely
in the Maintenance condition … but not significant
Low & High Probabilities
• Problem: These are
low frequency events
• In fact, lots of theoretically interesting things have
low frequency
• Clinical diagnoses that are not common
• Various kinds of cognitive errors
• Language production, memory, language
comprehension…
• Learners’ errors in math or other educational
domains
Low & High Probabilities
• A problem for our model:
• Model was trying to find the odds of making a
source confusion within each study condition
• But: Source confusions were never observed with
elaborative rehearsal, ever!
• How small are the odds?
They are infinitely small in
this dataset!
• Note that not all failures to converge reflect low
frequency events. But when very low frequencies exist,
they are likely to cause convergence problems.
Low & High Probabilities
• Logit is undefined if probability = 1
• Logit is also undefined if probability = 0
  • log(0) is undefined: there is nothing to which you can
    raise e to get 0 (no solution to e^? = 0)

If p(confusion) = 1:
  logit = log[ p(confusion) / (1 − p(confusion)) ] = log[ 1 / 0 ] → division by zero!
If p(confusion) = 0:
  logit = log[ p(confusion) / (1 − p(confusion)) ] = log[ 0 / 1 ] = log(0) → undefined
Low & High Probabilities
• When close to 0 or 1, logit is defined but unstable
[Figure: LOG ODDS of recall (−4 to 4) plotted against PROBABILITY of recall (0.0 to 1.0)]
• Relatively gradual change at moderate probabilities:
  p = 0.60 → logit 0.41; p = 0.80 → logit 1.39
• Fast change at extreme probabilities:
  p = 0.95 → logit 2.94; p = 0.98 → logit 3.89
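• These values are easy to check in R with the built-in logit quantile function:

  p <- c(0.60, 0.80, 0.95, 0.98)
  round(qlogis(p), 2)   # log odds: 0.41 1.39 2.94 3.89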
Low & High Probabilities
• A problem for our model:
• Question was how much less common source
confusions become with elaborative rehearsal
• But: Source confusions were never
observed with elaborative rehearsal
• Why we think this happened:
• In theory, elaborative subjects would
probably make at least one of these
errors eventually (given infinite trials)
• Not impossible
• But, empirically, probability was low
enough that we didn’t see the error
in our sample (limited sample size)
[Figure: with a true probability of 1%, the error would eventually be observed as N → ∞, but may never appear at N = 8]
Week 10.1: Empirical Logit
• Finish Generalized LMERs
  • Clarification on Logit Models
  • Poisson Regression
• Empirical Logit
  • Low & High Probabilities
  • Empirical Logit
  • Implementation
• Lab
Empirical Logit
• Empirical logit: An adjustment to the regular
logit to deal with probabilities near (or at) 0 or 1
logit = log[ p(confusion) / (1 − p(confusion)) ]
Empirical Logit
• Empirical logit: An adjustment to the regular
logit to deal with probabilities near (or at) 0 or 1
• Makes extreme values
(close to 0 or 1) less extreme
logit = log[ Num of “A”s / Num of “B”s ]
emp. logit = logₑ[ (Num of “A”s + 0.5) / (Num of “B”s + 0.5) ]
A = source confusion occurred; B = source confusion did not occur
Empirical Logit
• Empirical logit: An adjustment to the regular
logit to deal with probabilities near (or at) 0 or 1
• Makes extreme values
(close to 0 or 1) less extreme
logit = log[ Num of “A”s / Num of “B”s ]
emp. logit = logₑ[ (Num of “A”s + 0.5) / (Num of “B”s + 0.5) ]

Num of As: 10, Num of Bs: 0 → logit undefined, emp. logit = 3.04
Num of As: 9, Num of Bs: 1 → logit = 2.20, emp. logit = 1.85
Num of As: 6, Num of Bs: 4 → logit = 0.41, emp. logit = 0.37

• Empirical logit doesn’t go as high or as low as the “true” logit
• At moderate values, they’re essentially the same
• With larger samples, the difference gets much smaller (as long as probability isn’t 0 or 1)
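• A small sketch of the calculation as an R function (reproducing the values above):

  emp_logit <- function(num_A, num_B) {
    log((num_A + 0.5) / (num_B + 0.5))
  }
  emp_logit(10, 0)   # 3.04 -- defined even though no "B" events occurred
  emp_logit(9, 1)    # 1.85
  emp_logit(6, 4)    # 0.37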
Week 10.1: Empirical Logit
• Finish Generalized LMERs
  • Clarification on Logit Models
  • Poisson Regression
• Empirical Logit
  • Low & High Probabilities
  • Empirical Logit
  • Implementation
• Lab
Empirical Logit: Implementation
• Empirical logit requires summing up events and
then adding 0.5 to numerator & denominator:
• Thus, we have to (1) sum across individual trials,
and then (2) calculate empirical logit
empirical logit = log[ (Num A + 0.5) / (Num B + 0.5) ]
• A single empirical-logit value for each subject in each condition, not a sequence of one YES or NO for every item
• e.g., Num A = number of As for subject S10 in the Low associative strength, Maintenance rehearsal condition
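• One way to do steps (1) and (2) by hand: a sketch using dplyr, assuming trial-level data sourceconfusion with a 0/1 SourceConfusion column (Scott’s psycholing package provides its own helpers):

  library(dplyr)
  bysubjects <- sourceconfusion %>%
    group_by(Subject, Strategy) %>%
    summarize(NumA = sum(SourceConfusion == 1),   # source confusions
              NumB = sum(SourceConfusion == 0),   # trials without a confusion
              elogit = log((NumA + 0.5) / (NumB + 0.5)),
              .groups = "drop")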
Empirical Logit: Implementation
• Empirical logit requires summing up events and
then adding 0.5 to numerator & denominator:
• Thus, we have to (1) sum across individual trials,
and then (2) calculate empirical logit
• Can’t have multiple random effects with empirical
logit
• Would have to do separate by-subjects and by-
items analyses
• Collecting more data can be another solution
empirical logit = log[ (Num A + 0.5) / (Num B + 0.5) ]
• e.g., Num A = number of As for subject S10 in the Maintenance rehearsal condition
Empirical Logit: Implementation
• Scott’s psycholing package can help calculate the
empirical logit & run the model
• Example script will be posted on Canvas
• Two notes:
• No longer using glmer() with family=binomial.
We’re now running the model on the empirical logit
value, which isn’t just a 0 or 1.
• Here, the value of the DV is −1.61
Empirical Logit: Implementation
• Scott’s psycholing package can help calculate the
empirical logit & run the model
• Example script will be posted on Canvas
• Two notes:
• No longer using glmer() with family=binomial.
We’re now running the model on the empirical logit
value, which isn’t just a 0 or 1.
• Because we calculate the empirical logit beforehand,
model doesn’t know how many observations went into
that value
• Want to appropriately weight the model: −2.46 could be
the average across 10 trials or across 100 trials
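• A sketch of one way to weight such a model with lme4 (not the psycholing package’s own workflow): weight each aggregated row by the inverse of its empirical-logit variance, so values based on more trials count more; bysubjects is the aggregated data frame from the earlier sketch:

  library(lme4)
  bysubjects$wt <- 1 / (1/(bysubjects$NumA + 0.5) + 1/(bysubjects$NumB + 0.5))
  model.elogit <- lmer(elogit ~ Strategy + (1|Subject),
                       data = bysubjects, weights = wt)
  summary(model.elogit)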
Week 10.1: Empirical Logit
• Finish Generalized LMERs
  • Clarification on Logit Models
  • Poisson Regression
• Empirical Logit
  • Low & High Probabilities
  • Empirical Logit
  • Implementation
• Lab