Hidehiko Ichimura
Semiparametric Least Squares (SLS) and weighted SLS estimation of single-index models¹

Péter Tóth
toth.peter@utexas.edu
UT Austin

¹ Journal of Econometrics 58 (1993), pp. 71-120.
Introduction
Index models
- Semiparametric model: the index parametrizing the DGP (a distribution) consists of two parts, β and φ, where β ∈ Θ lies in a finite-dimensional space, while φ ∈ Φ lies in an infinite-dimensional space
- Single-index model: the DGP is assumed to be
    y_i = φ(h(x_i, β₀)) + ε_i   for i = 1, 2, 3, ..., n
  - where {(x_i, y_i)} is the observed iid sample,
  - β₀ ∈ R^M is the true (finite-dimensional) parameter vector,
  - E[ε_i | x_i] = 0,
  - h(·) is known up to the parameter β, but φ(·) is not.
- Examples.
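As a concrete illustration of the model above, here is a minimal simulation sketch of a single-index DGP (my own toy example; the tanh link, the coefficient values, and the noise scale are illustrative assumptions, not from the paper):

```python
import numpy as np

# Toy single-index DGP: y_i = phi(h(x_i, beta_0)) + eps_i, with a linear
# index h(x, beta) = x'beta and a link phi the analyst does not know.
rng = np.random.default_rng(0)

n = 1000
beta0 = np.array([1.0, -0.5])    # true finite-dimensional parameter (illustrative)
phi = np.tanh                    # unknown smooth link (arbitrary choice here)

x = rng.normal(size=(n, 2))      # iid regressors
eps = 0.1 * rng.normal(size=n)   # satisfies E[eps | x] = 0
index = x @ beta0                # scalar index h(x_i, beta_0)
y = phi(index) + eps             # observed outcome
```

All systematic variation in y is channelled through the one-dimensional index x'β₀, which is exactly what the estimator exploits.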
(W)SLS estimator
Three observations
- 1. The variation in y results from the variation in ε and the variation in x (or h(x, β₀)).
- 2. On a 'contour line', where h(x, β₀) = const, all the variation in y comes from ε.
- 3. (2) does not (necessarily) hold for β ≠ β₀.
- So what will be the identification strategy for β₀?
- What would be the estimation strategy?
- Caveat: we only have conditional moments.
SLS estimator 1
- An extremum estimator.
- If we knew the conditional mean, the objective function would be the sample analogue of some measure of variation, in particular here of the variance:
    J_n^h(β) = (1/n) Σ_i [y_i − E(y_i | h(x_i, β))]²
- (you can also just sum over all the h-values)
- Since we do not know the conditional mean, we estimate it nonparametrically with a smooth kernel estimator (why?), so
    J_n(β) = (1/n) Σ_i I(x_i ∈ X) [y_i − Ê(x_i, β)]² + o_p(n⁻¹)
SLS estimator 2
- Here x ∈ R^L, and
    Ê(x_i, β) = [Σ_{j≠i} y_j I(x_j ∈ X_n) K((h(x_i, β) − h(x_j, β))/a_n)] / [Σ_{j≠i} I(x_j ∈ X_n) K((h(x_i, β) − h(x_j, β))/a_n)]
- if Σ_{j≠i} I(x_j ∈ X_n) K((h(x_i, β) − h(x_j, β))/a_n) ≠ 0;
- otherwise: if y_i ≤ (y_min + y_max)/2, then Ê(x_i, β) = y_min,
- otherwise Ê(x_i, β) = y_max,
- where X_n = {x : ||x − x₀|| ≤ 2a_n for some x₀ ∈ X}.
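The leave-one-out kernel regression above can be sketched as follows (a toy version of my own, not the paper's code: a Gaussian kernel, a simplified version of the slide's y_min/y_max fallback, and no X_n trimming since every observation is kept here):

```python
import numpy as np

# Sketch of the leave-one-out Nadaraya-Watson estimate E_hat(x_i; beta) of
# E[y | h(x, beta)], evaluated at h(x_i, beta) with observation i excluded.
def loo_kernel_mean(y, index, a_n):
    """index[i] = h(x_i, beta); a_n is the bandwidth."""
    u = (index[:, None] - index[None, :]) / a_n   # (h_i - h_j) / a_n
    K = np.exp(-0.5 * u**2)                       # smooth kernel weights
    np.fill_diagonal(K, 0.0)                      # leave-one-out: sum over j != i
    den = K.sum(axis=1)
    ratio = (K @ y) / np.maximum(den, 1e-12)
    # Simplified fallback mimicking the slide's y_min / y_max rule:
    fallback = np.where(y <= 0.5 * (y.min() + y.max()), y.min(), y.max())
    return np.where(den > 1e-12, ratio, fallback)

# On smooth noiseless data the estimate tracks the conditional mean closely:
t = np.linspace(-1.0, 1.0, 201)
est = loo_kernel_mean(t**2, t, a_n=0.05)
```

The leave-one-out construction matters: including observation i in its own fit would bias the objective toward overfitting.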
SLS estimator 3
- cont'd:
  - a_n → 0, but √n·a_n → ∞ (a positive sequence)
  - K : R → R is a smooth kernel function
  - if all denominators of the kernel regression function are zero, β̂ is simply set to 0
- NLS vs. SLS
- WSLS: a weighting function W(x) is introduced in the kernel denominator and numerator, and in the objective function itself (in front of the quadratic term)
  - 0 ≤ W(x) ≤ W̄
- From now on we restrict our attention to linear h(·).
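Putting the pieces together, the following rough sketch (again my own toy code, not Ichimura's implementation) minimizes the feasible objective J_n over a grid, imposing the scale normalization β₁ = 1 so that only the second coefficient is estimated; the tanh link, sample size, bandwidth, and grid are all illustrative choices:

```python
import numpy as np

# End-to-end toy SLS: normalize beta_1 = 1 (beta_0 is identified only up to
# scale) and minimize J_n(b) = (1/n) sum_i [y_i - E_hat(x_i; b)]^2 over a
# grid for b_2.
rng = np.random.default_rng(1)
n = 400
x = rng.uniform(-2, 2, size=(n, 2))
beta0 = np.array([1.0, -0.5])
y = np.tanh(x @ beta0) + 0.05 * rng.normal(size=n)   # link unknown to the analyst

def J_n(b2, a_n=0.15):
    idx = x[:, 0] + b2 * x[:, 1]                     # h(x, b) with b = (1, b2)
    u = (idx[:, None] - idx[None, :]) / a_n
    K = np.exp(-0.5 * u**2)
    np.fill_diagonal(K, 0.0)                         # leave-one-out
    e_hat = (K @ y) / np.maximum(K.sum(axis=1), 1e-12)
    return float(np.mean((y - e_hat) ** 2))

grid = np.linspace(-1.5, 0.5, 81)
b2_hat = grid[np.argmin([J_n(b) for b in grid])]     # should sit near -0.5
```

In a real application one would add the X_n trimming indicator, a data-driven bandwidth, and a proper optimizer instead of a grid.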
Identification
Assumption 4.1.-2.
- Assumption 4.1: The function φ(·) is differentiable and not constant on the whole support of x'β₀.
- Nominal regressors: these are the x vectors with l-th member (variable) x_l, which can actually be thought of as functions x_l(z) of underlying regressors (z_1, z_2, ..., z_{L⁰}).
- We assume each underlying regressor is either continuous or discrete.
- The first L₁ nominal and the first L⁰₁ underlying regressors have continuous marginals (the rest are discrete).
- Assumption 4.2
  - (1) every x_l(·) has partial derivatives wrt the continuous underlying regressors
  - (2) for the discrete nominal regressors, ∂x_l/∂z_{l'} = 0 for l = L₁+1, L₁+2, ..., L and l' = 1, 2, ..., L⁰₁, almost everywhere in z
- cont'd:
  - (3) ∩_{l'=1}^{L⁰₁} { (s₁^{l'}, ..., s_{L₁}^{l'}) : s_l^{l'} = ∂x_l/∂z_{l'} for some z ∈ Z }⊥ = {0}
  - (4) (i) for each β ∈ Θ there is an open interval T and at least L − L₁ + 1 constant vectors c^l = (c^l_{L₁+1}, ..., c^l_L) for l = 0, 1, ..., L − L₁ such that
    - the c^l − c^0 are linearly independent (for l = 1, 2, ..., L − L₁)
    - T lies in ∩_{l=0}^{L−L₁} { t : t = β₁x₁(z) + ... + β_{L₁}x_{L₁}(z) + β_{L₁+1}c^l_{L₁+1} + ... + β_L c^l_L, z ∈ Z(c^l) },
      where Z(c^l) = { z ∈ Z : x_{L₁+1}(z) = c^l_{L₁+1}, ..., x_L(z) = c^l_L }
  - (ii) and φ is not periodic on T
Identification: Theorem 4.1.
Take the linear single-index model defined above. If there is a continuous regressor with a non-zero coefficient, then Assumptions 4.1 and 4.2 (1-3) imply that β₀ is identified up to a scalar constant for all continuous regressors. If, in addition, 4.2 (4) is satisfied, then the coefficients of the discrete regressors are also identified (up to the same scalar constant).
- Intuition for the proof: assume there is another β ≠ β₀ that minimizes the objective function and derive a contradiction.
What does Assumption 4.2 rule out?
- Ex. 4.2: when x₁ = z and x₂ = z²
- Ex. 4.3: x₁ = z₁ ∈ [0, 1] and x₂ = z₂ ∈ {0, 1}
- So 4.2 (3) is pretty much a non-constancy/invertibility condition,
- ... while 4.2 (4) is a support-like condition:
- we either need small enough coefficients β or large enough (→ full!) support for the continuous variable
- What is the problem with a periodic φ(·)?
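The failure in Ex. 4.2 can be seen numerically. In the sketch below (my own illustration; the exp link and uniform design are arbitrary), x₁ = z and x₂ = z², so for every b₂ > −0.5 the index z + b₂z² is a strictly increasing function of z on [0, 1]: conditioning on the index is then the same as conditioning on z itself, whatever b₂ is, and the SLS objective is flat in b₂:

```python
import numpy as np

# x1 = z, x2 = z^2: for any b2 > -0.5 the index z + b2*z^2 is monotone in z
# on [0, 1], so E[y | index] = E[y | z] regardless of b2. The objective
# cannot distinguish candidate coefficients -- identification fails.
rng = np.random.default_rng(2)
n = 500
z = rng.uniform(0, 1, size=n)
y = np.exp(z) + 0.05 * rng.normal(size=n)

def J_n(b2, a_n=0.05):
    idx = z + b2 * z**2
    u = (idx[:, None] - idx[None, :]) / a_n
    K = np.exp(-0.5 * u**2)
    np.fill_diagonal(K, 0.0)                         # leave-one-out
    e_hat = (K @ y) / np.maximum(K.sum(axis=1), 1e-12)
    return float(np.mean((y - e_hat) ** 2))

vals = [J_n(b) for b in (0.0, 0.4, 0.8)]   # near-identical values: flat objective
```

Every value is essentially the residual noise variance, so the data carry no information about the coefficient on z².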
Asymptotic properties
Assumptions 5.1-2
- Some objects:
  - X is a subset of the support of x
  - T(X, β) = { t ∈ R : t = h(x, β) for some x ∈ X }
  - f(t, β) is the Lebesgue density of t = h(x, β) (aww Lord)
- Assumption 5.1: iid sample
- Assumption 5.2: β₀ ∈ Int(Θ), where Θ ⊂ R^M is a compact set
Assumptions 5.3-6
- Assumption 5.3:
  1. X is compact
  2. inf_{x∈X} f(h(x, β), β) > 0
  3. f(t, β) and E[y | h(x, β) = t] are three times differentiable wrt t, and the third derivative is Lipschitz jointly in both arguments
- Assumption 5.4: y is in L_m (for some m ≥ 2), and the conditional variance of y (given x) is uniformly bounded and bounded away from 0 on X
- Assumption 5.5: h(x, β) is Lipschitz jointly on X × Θ
- Assumption 5.6: on the kernel K, besides the usual conditions, we require that its second derivative is Lipschitz
Consistency: Theorem 5.1
If Assumptions 5.1-6 hold, the (W)SLS estimator defined above is consistent.
- The proof uses that P[J_n(β̂) ≤ J_n(β₀)] = 1, and then observes that
    P[J_n(β̂) ≤ J_n(β₀)] = P[J_n(β̂) ≤ J_n(β₀), β̂ ∈ B_δ(β₀)] + P[J_n(β̂) ≤ J_n(β₀), β̂ ∉ B_δ(β₀)]
      ≤ P[β̂ ∈ B_δ(β₀)] + P[inf_{β ∈ Θ∖B_δ(β₀)} J_n(β) ≤ J_n(β₀)],
- where B_δ(β₀) is an open ball of radius δ around β₀, and
    P[inf_{β ∈ Θ∖B_δ(β₀)} J_n(β) ≤ J_n(β₀)] → 0.
- Alternatively, I think after establishing uniform convergence one could use the continuous mapping theorem and the consistency theorem for extremum estimators.
Asymptotic normality: Theorem 5.2
Under Assumptions 5.1-6, if y has at least 3 absolute moments, β₀ is identified, and the regularity conditions on a_n are satisfied, then
    √n (β̂ − β₀) →d N(0, V⁻¹ Σ V⁻¹),
where the variance is just the usual sandwich.
- Note that the usual sandwich formula is not feasible here, since it contains the derivative of the φ(·) function, which is unknown...
Some remarks
- Optimal reweighting
  - 'inner weighting'
  - the weights reduce variance AND bias
  - the optimal weighting is the usual inverse conditional variance, σ²(x)⁻¹
  - one can show it achieves the semiparametric efficiency bound of Newey (1990)
- Estimation of the covariance matrix
  - he introduces a kernel estimate for ∂Ê_W(x_i, β̂)/∂β
- Small-sample properties (example): comparable with MRC (maximum rank correlation), better than MS (maximum score)
  - the further we move from normality, the better this performs relatively
Proofs
Identification 1
- Suppose some β* ≠ β₀ minimizes the objective function, so that
    E{ W(x) [φ(x'β₀) − E(φ(·) | x'β*)]² } = 0.
- Moreover, since W(x) > 0 for all x ∈ X,
    E(φ(·) | x'β* = t) = φ(x(z)'β₀);
- but then, after taking derivatives wrt z (normalize β₀₁ = β*₁ = 1),
    φ'(x'β₀) [γ₂ ∂x₂/∂z_{l'} + ... + γ_{L₁} ∂x_{L₁}/∂z_{l'}] = 0
- for all l' ∈ {1, ..., L⁰₁}, a.s. for z ∈ Z,
- where γ_l = β₀_l − β₀₁ β*_l.
- Now we need Assumption 4.2 (3) to hold for the z-s for which φ'(x(z)'β₀) ≠ 0 - then the first statement is proved.
- Common trick: t = x'(β* − β₀); this is what you really condition on...
Identification 2
- So now we have identified the coefficients β₀₁, ..., β₀L₁ up to a constant r.
- This leaves us with
    φ(x'β₀) = E[φ(·) | x'β*] =
      = φ(t/r + (β₀,L₁+1/r − β*_{L₁+1}) x_{L₁+1} + ... + (β₀L/r − β*_L) x_L)
- After staring at this for a minute or two, you realize that Assumption 4.2 (4) is ready-made for this...
Consistency
- We only have to show
    P[inf_{β ∈ Θ∖B_δ(β₀)} J_n(β) ≤ J_n(β₀)] → 0.
- Tedious algebra; the only idea: build a bridge of J-s by using E_W(·) instead of Ê_W(·); the triangle inequality plus identification then gives the desired result.
- intuition
Asymptotic Normality
- basically the standard proof from Newey-McFadden
- assuming the kernel estimator is suitably consistent under some restrictions (Lipschitz)
Thank you for your attention!