Alex Dimakis, Associate Professor, Dept. of Electrical and Computer Engineering, University of Texas at Austin at MLconf SF 2016

A Friendly Introduction To Causality: Causality has been studied under several frameworks in statistics and artificial intelligence. We will briefly survey Pearl’s Structural Equation model and explain how interventions can be used to discover causality. We will also present a novel information theoretic framework for discovering causal directions from observational data when interventions are not possible. The starting point is conditional independence in joint probability distributions and no prior knowledge on causal inference is required.

1. Causal inference: a friendly introduction
Alex Dimakis, UT Austin
Based on joint work with Murat Kocaoglu, Karthik Shanmugam, Sriram Vishwanath, Babak Hassibi

2. Overview
• What is causal inference
• Interventions and how to design them
• What to do if you cannot intervene

3. Disclaimer
• There are many frameworks of causality
• For time series: Granger causality
• Potential Outcomes / Counterfactuals framework (Imbens & Rubin)
• Pearl’s structural equation models, aka causal graph models
• Additive models, Dawid’s decision-oriented approach, information geometry, many others…

4. Overview
• What is causal inference
• Directed graphical models and conditional independence
• That’s not it.
• Interventions and how to design them
• What to do if you cannot intervene
5. Independence of random variables
Observational data, S: heavy smoker, C: lung cancer before 60:

  S   C
  0   0
  1   1
  0   1
  1   …
  …   …

How to check if S is independent from C?

6. Joint Pdf and Independence
From the observational data, estimate the joint pdf:

         S=0      S=1
  C=0    30/100   10/100
  C=1    20/100   40/100

How to check if S is independent from C?

7. Joint Pdf and Independence
The marginals are P(S=0) = P(S=1) = 0.5 and P(C=0) = 0.4, P(C=1) = 0.6.
To check independence, compare P(S,C) with P(S)P(C) (a small code sketch follows).
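As a concrete illustration, here is a minimal Python sketch using the counts from the table above: estimate the joint and the marginals, then compare P(S,C) with P(S)P(C).

```python
import numpy as np

# Counts from the slides' table: rows index C (0/1), columns index S (0/1), out of 100 people.
counts = np.array([[30, 10],
                   [20, 40]])

joint = counts / counts.sum()           # estimate of P(C, S)
p_c = joint.sum(axis=1, keepdims=True)  # P(C) = [0.4, 0.6]
p_s = joint.sum(axis=0, keepdims=True)  # P(S) = [0.5, 0.5]

# S and C are independent iff P(C, S) = P(C) P(S) (up to sampling noise;
# a chi-squared test would quantify the discrepancy).
print(joint)
print(p_c @ p_s)
print("independent?", np.allclose(joint, p_c @ p_s))
```

For these counts the product of marginals differs from the joint, so S and C are dependent.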
8. Directed graphical models
Consider the chain A → B → C and data on A, B, C (rows of samples such as 0 1 0, 1 1 1, …).
• Given the data we can estimate the joint pdf p(A,B,C) and see if it factorizes as P(A,B,C) = P(A) P(B|A) P(C|B), i.e. has some conditional independencies.
• A directed graphical model describes all distributions that have a given set of conditional independencies. This one encodes A ⫫ C | B, i.e. P(C|A,B) = P(C|B) and P(A,C|B) = P(A|B) P(C|B).

9. Directed graphical models (continued)
• Learning a directed graphical model = learning all conditional independencies in the data.
• Learning a causal graph is not learning a directed graphical model. (A numerical check of the conditional independence follows.)
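For instance, a small sketch with hypothetical binary data generated from a chain A → B → C (the probabilities below are illustrative) can check A ⫫ C | B by verifying that P(C=1 | A, B) does not depend on A once B is fixed.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical data from a chain A -> B -> C (binary variables, illustrative parameters).
A = rng.binomial(1, 0.5, n)
B = np.where(A == 1, rng.binomial(1, 0.8, n), rng.binomial(1, 0.2, n))
C = np.where(B == 1, rng.binomial(1, 0.7, n), rng.binomial(1, 0.1, n))

# A ⫫ C | B: for each fixed b, P(C=1 | A=a, B=b) should not depend on a.
for b in (0, 1):
    for a in (0, 1):
        mask = (A == a) & (B == b)
        print(f"P(C=1 | A={a}, B={b}) ≈ {C[mask].mean():.3f}")
```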
10. Smoking causes cancer
Observational data (S: heavy smoker, C: lung cancer before 60) and its joint pdf:

         S=0      S=1
  C=0    30/100   10/100
  C=1    20/100   40/100

11. Causality = mechanism
[Figure: variables S and C with joint pdf Pr(S,C)]

12. Causality = mechanism
Factor the joint as Pr(S,C) = Pr(S) Pr(C|S):

  Pr(S):  S=0: 0.5,  S=1: 0.5

  Pr(C|S):     S=0     S=1
     C=0      30/50   10/50
     C=1      20/50   40/50

13. Universe 1: S → C, with C = F(S,E), E ⫫ S
The mechanism is described by Pr(S) and Pr(C|S) above, which together give the joint Pr(S,C).
14. Universe 2
[Figure: C → S]

15. Universe 2: C → S, with S = F(C,E), E ⫫ C
Factor the same joint as Pr(S,C) = Pr(C) Pr(S|C):

  Pr(C):  C=0: 0.4,  C=1: 0.6

  Pr(S|C):     C=0                      C=1
     S=0      30/(100·0.4) = 0.75      20/(100·0.6) = 0.33
     S=1      10/(100·0.4) = 0.25      40/(100·0.6) = 0.66

16. How to find the causal direction?
Candidate S → C: Pr(S,C) = Pr(S) Pr(C|S), with C = F(S,E), E ⫫ S.

17. How to find the causal direction?
Candidate S → C: Pr(S,C) = Pr(S) Pr(C|S), with C = F(S,E), E ⫫ S.
Candidate C → S: Pr(S,C) = Pr(C) Pr(S|C), with S = F’(C,E’), E’ ⫫ C.
18. How to find the causal direction?
Both factorizations fit the same joint: Pr(S) Pr(C|S) with C = F(S,E), E ⫫ S, and Pr(C) Pr(S|C) with S = F’(C,E’), E’ ⫫ C (simulation sketch below).
• It is impossible to find the true causal direction from observational data for two random variables (unless we make more assumptions).
• You need interventions, i.e. messing with the mechanism.
• For more than two r.v.s there is a rich theory and some directions can be learned without interventions (Spirtes et al.).
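To see the observational equivalence concretely, here is a small simulation sketch using the tables from the slides above: Universe 1 (S → C) and Universe 2 (C → S) produce (approximately) the same joint pdf.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Universe 1 (S -> C): Pr(S) and Pr(C|S) from the slides' tables.
S1 = rng.binomial(1, 0.5, n)
C1 = rng.binomial(1, np.where(S1 == 1, 40/50, 20/50))

# Universe 2 (C -> S): Pr(C) and Pr(S|C), the reversed factorization of the same joint.
C2 = rng.binomial(1, 0.6, n)
S2 = rng.binomial(1, np.where(C2 == 1, 40/60, 10/40))

def joint(S, C):
    # Empirical joint pdf, rows indexed by C and columns by S.
    return np.array([[np.mean((S == s) & (C == c)) for s in (0, 1)] for c in (0, 1)])

# Both universes yield (approximately) the same observational joint pdf.
print(joint(S1, C1))
print(joint(S2, C2))
```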
19. Overview
• What is causal inference
• Directed graphical models and conditional independence
• That’s not it.
• Interventions and how to design them
• What to do if you cannot intervene

20. Intervention: force people to smoke
• Flip a coin and force each person to smoke or not, with prob ½.
• In Universe 1 (i.e. under S → C, with Pr(S) and Pr(C|S) as in slide 13), the new joint pdf stays the same as before the intervention.

21. Intervention: force people to smoke
• Flip a coin and force each person to smoke or not, with prob ½.
• In Universe 2 (under C → S, with Pr(C) and Pr(S|C) as in slide 15), S and C will become independent after the intervention.
22. Intervention: force people to smoke
• Flip a coin and force each person to smoke or not, with prob ½.
• In Universe 2 (under C → S), S and C will become independent after the intervention.
• So check the correlation in data collected after the intervention and find the true direction! (Simulation sketch below.)
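A small simulation sketch of this randomized intervention (again using the slides' conditional tables; the setup is illustrative): after forcing S by a coin flip, S and C stay dependent if the world is Universe 1 but become independent if it is Universe 2.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# Randomized intervention: S is forced by a fair coin, independent of everything else.
S_do = rng.binomial(1, 0.5, n)

# Universe 1 (S -> C): C still responds to the forced S through Pr(C|S).
C_u1 = rng.binomial(1, np.where(S_do == 1, 40/50, 20/50))

# Universe 2 (C -> S): C is generated on its own and ignores the forced S,
# so S and C become independent under the intervention.
C_u2 = rng.binomial(1, 0.6, n)

for name, C in [("Universe 1 (S->C)", C_u1), ("Universe 2 (C->S)", C_u2)]:
    corr = np.corrcoef(S_do, C)[0, 1]
    print(f"{name}: corr(S, C) after intervention ≈ {corr:.3f}")
```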
23. Who does interventions like that?
“You’re giving dying people sugar pills?”

24. More variables
[Figure: true causal DAG on variables S1–S7]

25. More variables
[Figure: true causal DAG on S1–S7]
From observational data we can learn conditional independencies and obtain the skeleton (losing directions).

26. More variables
[Figure: true causal DAG on S1–S7 next to its skeleton]
From observational data we can learn conditional independencies and obtain the skeleton (losing directions).
27–28. PC Algorithm (Spirtes et al., Meek)
[Figure: skeleton on S1–S7]
There are a few directions we can learn from observational data (immoralities, Meek rules).
Spirtes, Glymour, Scheines 2001 (PC algorithm); C. Meek, 1995; Andersson, Madigan, Perlman, 1997.
29. How interventions reveal directions
[Figure: skeleton on S1–S7 with intervened set S = {S1, S2, S4}]
We choose a subset S of the variables and intervene (i.e. force random values).

30. How interventions reveal directions
• We choose a subset S of the variables and intervene (force random values).
• The directions of edges between S and Sᶜ are revealed.

31. How interventions reveal directions
• We choose a subset S of the variables and intervene (force random values).
• The directions of edges between S and Sᶜ are revealed.
• Re-apply the PC algorithm + Meek rules to possibly learn a few more edge directions.
32. Learning Causal DAGs
[Figure: skeleton on S1–S7]
Given a skeleton graph, how many interventions are needed to learn all directions?
• A priori fixed set of interventions (non-adaptive)

33. Learning Causal DAGs
Given a skeleton graph, how many interventions are needed to learn all directions?
• A priori fixed set of interventions (non-adaptive)
• Adaptive
• Randomized adaptive

34. Learning Causal DAGs
Given a skeleton graph, how many interventions are needed to learn all directions?
• Non-adaptive: Theorem (Hauser & Bühlmann 2014): log(χ) interventions suffice (χ = chromatic number of the skeleton).
• Adaptive? (NIPS 2015): adaptivity does not help (in the worst case).
• Randomized adaptive (Li, Vetta, NIPS 2014): loglog(n) interventions suffice with high probability for a complete skeleton.
35. A good algorithm for general graphs

36. Overview
• What is causal inference
• Interventions and how to design them
• What to do if you cannot intervene
• Make more assumptions
• Compare on a standard benchmark
37. Data-driven causality
• How to find the causal direction without interventions.
• Impossible for two variables; possible under assumptions.
• Popular assumption: Y = F(X) + E, (E ⫫ X) (additive models) (Shimizu et al., Hoyer et al., Peters et al., Chen et al., Mooij et al.).
• Entropic causality: use information theory for general data-driven causality, Y = F(X,E), (E ⫫ X).
• (Related work: Janzing, Mooij, Zhang, Lemeire: no additivity assumption but no noise, Y = F(X).)
38. Conclusions
• Learning causal graphs with interventions is an ongoing field of research.
• Tetrad project (CMU): http://www.phil.cmu.edu/projects/tetrad/
• When time is present, more things can be done (difference-in-differences, Granger, potential outcomes, etc.).
• Additive models and entropic causality can be used for data-driven causal inference.
39. Pointers
• Tuebingen benchmark: https://webdav.tuebingen.mpg.de/cause-effect/
• http://www.phil.cmu.edu/projects/tetrad/
• https://github.com/mkocaoglu/Entropic-Causality
• P. Spirtes, C. Glymour and R. Scheines. Causation, Prediction, and Search. Bradford Books, 2001.
• J. Pearl. Causality. Cambridge University Press, 2009.
• G. Imbens and D. Rubin. Causal Inference for Statistics, Social, and Biomedical Sciences: An Introduction.
• CMU Center for Causal Discovery (CCD) summer short course 2016: https://www.youtube.com/watch?v=9yEYZURoE3Y&feature=youtu.be
• J. Peters, P. Bühlmann and N. Meinshausen (2016). Causal inference using invariant prediction: identification and confidence intervals. Journal of the Royal Statistical Society, Series B.
• K. Shanmugam, M. Kocaoglu, A. G. Dimakis, S. Vishwanath. Learning Causal Graphs with Small Interventions. NIPS 2015.
• F. Eberhardt, C. Glymour, and R. Scheines. On the number of experiments sufficient and in the worst case necessary to identify all causal relations among n variables.
• A. Hauser and P. Bühlmann. Two optimal strategies for active learning of causal models from interventional data. International Journal of Approximate Reasoning, 55(4):926–939, 2014.
• P. O. Hoyer et al. Nonlinear causal discovery with additive noise models. Advances in Neural Information Processing Systems, 2009.
• D. Janzing et al. Information-geometric approach to inferring causal directions. Artificial Intelligence 182 (2012).
• J. Peters, D. Janzing, and B. Schölkopf. Causal inference on discrete data using additive noise models. IEEE Transactions on Pattern Analysis and Machine Intelligence 33.12 (2011).
40. fin
41. Learning Causal DAGs
Theorem: log(χ) interventions suffice.
Proof:
1. Color the vertices of the skeleton (legal coloring).
[Figure: colored skeleton on S1–S7]

42. Learning Causal DAGs
Proof (continued):
2. Form a table with binary representations of the colors (Red: 00, Green: 01, Blue: 10):

  S1  0 0
  S2  0 1
  S3  1 0
  S4  0 1
  S5  1 0
  S6  0 1
  S7  1 0

43. Learning Causal DAGs
Proof (continued):
3. Each intervention is indexed by a column of this table (e.g. Intervention 1 is the first bit column: intervene on the vertices whose bit in that column is 1).

44. Learning Causal DAGs
For any edge, its two vertices have different colors, so their binary representations differ in at least one bit. Hence for some intervention, one endpoint is in the intervened set and the other is not, and that edge’s direction is learned. QED. (A code sketch of this construction follows.)
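A minimal sketch of this construction, assuming a hypothetical skeleton graph; networkx's greedy coloring returns some legal coloring (not necessarily an optimal χ-coloring), so the sketch uses ceil(log2(#colors)) interventions.

```python
import math
import networkx as nx

# Hypothetical skeleton on S1..S7 (the slides' actual skeleton is not specified).
skeleton = nx.Graph([("S1", "S2"), ("S2", "S3"), ("S3", "S4"),
                     ("S4", "S5"), ("S5", "S6"), ("S6", "S7"), ("S1", "S4")])

coloring = nx.greedy_color(skeleton)                 # vertex -> color index (legal coloring)
num_colors = max(coloring.values()) + 1
num_bits = max(1, math.ceil(math.log2(num_colors)))

# Intervention i forces random values on every vertex whose color has bit i equal to 1.
interventions = [{v for v, c in coloring.items() if (c >> i) & 1}
                 for i in range(num_bits)]
for i, S in enumerate(interventions):
    print(f"Intervention {i + 1}: {sorted(S)}")

# Any edge joins two differently colored vertices, so their color bits differ somewhere
# and some intervention separates its endpoints, revealing that edge's direction.
for u, v in skeleton.edges():
    assert any((u in S) != (v in S) for S in interventions)
```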
45. Learning Causal DAGs
[Figure: skeleton on S1–S7]
On-going research on several problems:
• What if the size of the intervention sets is limited? (NIPS 2015)
• What if some variables cannot be intervened on?
46. Major problem: Size of interventions
We choose a subset S of the variables and intervene (i.e. force random values), e.g. S = {S1, S2, S4}.
Question: if each intervention has size up to k, how many interventions do we need?
• Eberhardt: a separating system on χ elements with weight k is sufficient to produce a non-adaptive causal inference algorithm.
• A separating system on n elements with weight k is a {0,1} matrix with n distinct columns and each row having weight at most k (a small example follows).
• Rényi, Katona, Wegener: (n,k) separating systems have size …
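As a small illustration of the definition (the matrix below is a hypothetical example, not taken from the slides), one can check the separating-system property and read each row off as one bounded-size intervention.

```python
import numpy as np
from itertools import combinations

# Hypothetical separating system for n = 7 variables with row weight k = 3.
M = np.array([
    [1, 1, 1, 0, 0, 0, 0],
    [1, 0, 0, 1, 1, 0, 0],
    [0, 1, 0, 1, 0, 1, 0],
])
names = ["S1", "S2", "S3", "S4", "S5", "S6", "S7"]
k = 3

# Separating-system property: every row has weight at most k, all columns distinct.
assert all(M[i].sum() <= k for i in range(M.shape[0])), "row weight exceeds k"
assert all(not np.array_equal(M[:, a], M[:, b])
           for a, b in combinations(range(M.shape[1]), 2)), "columns must be distinct"

# Row i of M is intervention i: intervene on the variables with a 1 in that row.
for i, row in enumerate(M):
    print(f"Intervention {i + 1}:", [names[j] for j in np.flatnonzero(row)])
```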
47. Major problem: Size of interventions
Open problem: is a separating system necessary, or can adaptive algorithms do better?
• (NIPS 2015): for complete graph skeletons, separating systems are necessary, even for adaptive algorithms. We can use lower bounds on the size of separating systems to get lower bounds on the number of interventions.
• Randomized adaptive: loglog(n) interventions.
• Our result: (n/k) loglog(k) interventions suffice, each of size up to k.

48. Entropic Causality
• Extra slides
49. Entropic Causality
• Given data Xi, Yi.
• Search over explanations assuming X → Y: Y = F(X,E), (E ⫫ X).
• Simplest explanation: the one that minimizes H(E).
• Search in the other direction, assuming Y → X: X = F’(Y,E’), (E’ ⫫ Y).
• If H(E’) << H(E), decide Y → X.
• If H(E) << H(E’), decide X → Y.
• If H(E) and H(E’) are close, say ‘don’t know’. (A sketch of the decision rule follows.)
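The decision rule itself is simple once the two entropies are estimated; a small sketch follows, where the margin is a hypothetical tuning parameter and the entropy estimates would come from the coupling algorithm discussed later.

```python
def entropic_decision(h_e_xy: float, h_e_yx: float, margin: float = 0.1) -> str:
    """Decide a direction by comparing minimum exogenous entropies (in bits).

    h_e_xy: H(E)  for the candidate model Y = F(X, E),  E independent of X
    h_e_yx: H(E') for the candidate model X = F'(Y, E'), E' independent of Y
    margin: hypothetical abstention threshold
    """
    if h_e_xy + margin < h_e_yx:
        return "X -> Y"
    if h_e_yx + margin < h_e_xy:
        return "Y -> X"
    return "don't know"

# Example: the X -> Y explanation needs much less exogenous entropy.
print(entropic_decision(h_e_xy=0.4, h_e_yx=2.3))   # X -> Y
```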
50. Entropic Causality in pictures
[Figure: S → C vs. C → S]
• C = F(S,E), (E ⫫ S): H(E) small.
• S = F’(C,E’), (E’ ⫫ C): H(E’) big.

51. Entropic Causality in pictures
• You may be thinking that minimizing H(E) is like minimizing H(C|S).
• But it is fundamentally different (we’ll prove it is NP-hard to compute).

52. Question 1: Identifiability?
• Suppose the data is generated from X → Y, i.e. Y = f(X,E), (E ⫫ X), and H(E) is small.
• Is it true that all possible reverse explanations X = f’(Y,E’), (E’ ⫫ Y) must have H(E’) big, for all f’, E’?
• Theorem 1: If X, E, f are generic, then identifiability holds for H0 (the support of the distribution of E’ must be large).
• Conjecture 1: The same result holds for H1 (Shannon entropy).
53. Question 2: How to find the simplest explanation?
• Minimum entropy coupling problem: given marginal distributions U1, U2, …, Un, find the joint distribution that has these as marginals and has minimal entropy.
• (NP-hard, Kovacevic et al. 2012.)
• Theorem 2: Finding the simplest data explanation f, E is equivalent to solving the minimum entropy coupling problem.
• How to use: we propose a greedy algorithm that empirically performs reasonably well. (A sketch of one such greedy heuristic follows.)
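Below is a sketch of one natural greedy heuristic for the coupling problem (it may differ in details from the paper's algorithm): repeatedly match the largest remaining probability mass of every marginal, assign their minimum to one joint outcome, and subtract it.

```python
import numpy as np

def greedy_min_entropy_coupling(marginals, tol=1e-12):
    """Greedy heuristic for minimum entropy coupling (a sketch, not necessarily
    the paper's exact algorithm). Returns the entropy (bits) of the greedy joint."""
    residual = [np.array(m, dtype=float) for m in marginals]
    joint_masses = []
    while max(r.sum() for r in residual) > tol:
        picks = [int(r.argmax()) for r in residual]           # largest remaining cell per marginal
        m = min(residual[i][j] for i, j in enumerate(picks))  # mass assigned to one joint outcome
        joint_masses.append(m)
        for i, j in enumerate(picks):
            residual[i][j] -= m
    p = np.array(joint_masses)
    return float(-(p * np.log2(p)).sum())

# Example: two identical binary marginals; the greedy coupling aligns them,
# giving joint entropy ~1 bit (vs. 2 bits for the independent coupling).
print(greedy_min_entropy_coupling([[0.5, 0.5], [0.5, 0.5]]))
```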
54. Proof idea
• Consider Y = f(X,E), with X, Y over an alphabet of size n.
• p_{i,j} = P(Y=i | X=j) = P(f(X,E)=i | X=j) = P(f_j(E)=i), since E ⫫ X.
[Figure: the distribution of E (e_1, …, e_m) is mapped by f_1 onto the distribution of Y conditioned on X=1 (p_{1,1}, p_{2,1}, …, p_{n,1})]
• Each conditional probability is a subset sum of the distribution of E.
• S_{i,j}: the index set for p_{i,j}.
55. Performance on the Tübingen dataset
[Figure: Accuracy (%) vs. decision rate (%) for entropy-based causal inference, with 68% and 95% confidence intervals]
• Decision rate: fraction of pairs for which the algorithm makes a decision.
• A decision is made when |H(X,E) − H(Y,E’)| > t (t determines the decision rate).
• Confidence intervals are based on the number of datapoints.
• Slightly better than ANMs.
56. Conclusions 2
• Introduced a new framework for data-driven causality for two variables.
• Established identifiability for generic distributions for H0 entropy; conjectured it holds for Shannon entropy.
• Inspired by Occam’s razor; natural and different from prior works.
• Natural for categorical variables (additive models do not work there).
• Proposed a practical greedy algorithm using Shannon entropy.
• Empirically performs very well on artificial and real causal datasets.
57. Existing Theory: Additive Noise Models
• Assume Y = f(X) + E, X ⫫ E.
• Identifiability:
  • If f is nonlinear, then ∄ g, N ⫫ Y such that X = g(Y) + N (almost surely).
  • If E is non-Gaussian, ∄ g, N ⫫ Y such that X = g(Y) + N.
• Performs 63% on real data*.
• Drawback: additivity is a restrictive functional assumption. (A rough sketch of such a test follows.)
* Cause Effect Pairs Dataset: https://webdav.tuebingen.mpg.de/cause-effect/
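For comparison, here is a rough sketch of an additive-noise-model direction test: a generic regression-plus-residual-independence recipe, not the exact procedure of any particular paper. The kNN regressor, the HSIC dependence statistic, and the data below are all illustrative choices.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def hsic(a, b, sigma=1.0):
    """Biased HSIC statistic with Gaussian kernels (smaller = closer to independent)."""
    a, b = a.reshape(-1, 1), b.reshape(-1, 1)
    n = len(a)
    K = np.exp(-(a - a.T) ** 2 / (2 * sigma ** 2))
    L = np.exp(-(b - b.T) ** 2 / (2 * sigma ** 2))
    H = np.eye(n) - np.ones((n, n)) / n
    return float(np.trace(K @ H @ L @ H)) / n ** 2

def anm_residual_dependence(x, y):
    """Fit y ≈ f(x) nonparametrically and measure how dependent the residuals are on x."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    f = KNeighborsRegressor(n_neighbors=20).fit(x.reshape(-1, 1), y)
    resid = y - f.predict(x.reshape(-1, 1))
    return hsic(x, resid)

rng = np.random.default_rng(3)
x = rng.normal(size=500)
y = x ** 3 + rng.normal(scale=0.5, size=500)   # hypothetical data generated as X -> Y

# Under an ANM, residuals are (nearly) independent of the input only in the causal direction.
print("X -> Y" if anm_residual_dependence(x, y) < anm_residual_dependence(y, x) else "Y -> X")
```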
58. Existing Theory: Independence of Cause and Mechanism
• The function f is chosen “independently” from the distribution of X by nature.
• Notion of independence: assign a random variable to f, check a log-slope integral.
• Boils down to: X causes Y if h(Y) < h(X) [h: differential entropy].
• Drawbacks:
  • No exogenous variable assumption (deterministic X–Y relation).
  • Continuous variables only.

59. Our Approach
• Consider discrete variables X, Y, E.
• Use total input (Rényi) entropy as a measure of complexity.
• Choose the simpler model.
• Assumption: the (Rényi) entropy of the exogenous variable E is small.
• Theoretical guarantees for H0 Rényi entropy (cardinality): the causal direction is (almost surely) identifiable if E has small cardinality.
60. Performance of Greedy Joint Entropy Minimization
[Figure: gap H*(X1,…,Xn) − max_i H(Xi) achieved by the greedy algorithm vs. log2(n); average, minimum and maximum gap]
• For each n, n marginal distributions, each with n states, are randomly generated.
• The minimum joint entropy obtained by the greedy algorithm is at most 1 bit away from the largest marginal, max_i H(Xi).

61. Results
Shannon entropy-based identifiability.
[Figure: probability of success vs. H(E)/log(n) for n = 4, 8, 16, 32, 64, 128]
• Generate distributions of X, Y by randomly selecting f, X, E.
• Probability of success is the fraction of points where H(X,E) < H(Y,N).
• Larger n drives the probability of success to 1 when H(E) < log(n), supporting the conjecture.
62. Characterization of Conditionals
• Define the conditional distribution vectors p_j = [p_{1,j}, …, p_{n,j}]^T, where p_{i,j} = P(Y=i | X=j).
• Let p = [p_1^T, p_2^T, …, p_n^T]^T. Then p = M e, where e is the distribution of E and M is a block partition matrix: within each block of n rows, the index sets partition the columns (the outcomes of E).
63. General Position Argument
• Suppose the conditionals Y | X = j are uniform over the simplex (not realistic; a toy example).
• Note: let x_i ∼ exp(1); then (x_1, …, x_n) / Σ_i x_i is a uniform random vector over the simplex.
• Drop n rows of p to make it (almost) i.i.d.
• Claim: there does not exist an e with H0 < n(n−1).
• Proof: assume otherwise. Then the corresponding rows of M are linearly dependent, so ∃ a such that a^T M = 0, hence a^T p = 0. This would mean a random hyperplane is orthogonal to a fixed vector, which has probability 0.
64. Our contribution
• Nature chooses X, E, f; a joint distribution over X, Y is implied.
• Choose X, E randomly over the simplex.
• Derive X | Y from the induced joint.
• Any N ⫫ Y for which X = g(Y, N) implies a constraint that corresponds to a non-zero polynomial being zero, which has probability 0.
65. Formal Result
• X, Y are discrete r.v.’s with cardinality n.
• Y = f(X,E), where E ⫫ X is also discrete.
• f is generic (a technical condition to avoid edge cases, true in real data).
• The distribution vectors of X and E are uniformly randomly sampled from the simplex.
• Then, with probability 1, there does not exist N ⫫ Y such that there exists g satisfying X = g(Y, N).

66. Working with Shannon Entropy
• Given Y | X, finding E with minimum Shannon entropy such that there is an f satisfying Y = f(X,E) is equivalent to:
• Given the marginal distributions of n variables Xi, find the joint distribution with minimum entropy.
• This is an NP-hard problem.
• We propose a greedy algorithm (that produces a local optimum).
