• Share
  • Email
  • Embed
  • Like
  • Save
  • Private Content
Data enriched linear regression
 

Data enriched linear regression

on

  • 211 views

We present a linear regression method for predictions on a small data ...

We present a linear regression method for predictions on a small data
set making use of a second possibly biased data set that may be much
larger. Our method ts linear regressions to the two data sets while
penalizing the di erence between predictions made by those two models.

Statistics

Views

Total Views
211
Views on SlideShare
211
Embed Views
0

Actions

Likes
0
Downloads
0
Comments
0

0 Embeds 0

No embeds

Accessibility

Categories

Upload Details

Uploaded via as Adobe PDF

Usage Rights

© All Rights Reserved

Report content

Flagged as inappropriate Flag as inappropriate
Flag as inappropriate

Select your reason for flagging this presentation as inappropriate.

Cancel
  • Full Name Full Name Comment goes here.
    Are you sure you want to
    Your message goes here
    Processing…
Post Comment
Edit your comment

    Data enriched linear regression Data enriched linear regression Document Transcript

    • Data enriched linear regression Aiyou Chen Art B. Owen∗ Minghui Shi Google Inc. Stanford University Google Inc. December 2012 Abstract We present a linear regression method for predictions on a small data set making use of a second possibly biased data set that may be much larger. Our method fits linear regressions to the two data sets while penalizing the difference between predictions made by those two models. The resulting algorithm is a shrinkage method similar to those used in small area estimation. Our main result is a Stein-type finding for Gaussian responses: when the model has 5 or more coefficients and 10 or more error degrees of freedom, it becomes inadmissible to use only the small data set, no matter how large the bias is. We also present both plug-in and AICc- based methods to tune the penalty parameter. Most of our results use an L2 penalty, but we also obtain formulas for L1 penalized estimates when the model is specialized to the location setting.1 IntroductionThe problem we consider here is how to combine linear regressions based on datafrom two sources. There is a small data set of expensive high quality observa-tions and a possibly much larger data set with less costly observations. The bigdata set is thought to have similar but not identical statistical characteristicsto the small one. The conditional expectation might be different there or thepredictor variables might have been measured in somewhat different ways. Themotivating application comes from within Google. The small data set is a panelof consumers, selected by a probability sample, who are paid to share theirinternet viewing data along with other data on television viewing. There is asecond and potentially much larger panel, not selected by a probability samplewho have opted in to the data collection process. The goal is to make predictions for the population from which the smallersample was drawn. If the data are identically distributed in both samples, weshould simply pool them. If the big data set is completely different from thesmall one, then it makes sense to ignore it and fit only to the smaller data set. ∗ Art Owen was a paid consultant for this project; it was not part of his Stanford respon-sibilities. 1
    • Many settings are intermediate between these extremes: the big data set issimilar but not necessarily identical to the small one. We stand to benefit fromusing the big data set at the risk of introducing some bias. Our goal is to gleansome information from the larger data set to increase accuracy for the smallerone. The difficulty is that our best information about how the two populationsare similar is our samples from them. The motivating problem at Google has some differences from the problemwe consider here. There were response variables observed in the small samplethat were not observed in the large one and the goal was to study the jointdistribution of those responses. That problem also had binary responses insteadof the continuous ones considered here. This paper studies linear regressionbecause it is more amenable to theoretical analysis and thus allows us to explainthe results we saw. The linear regression method we use is a hybrid between simply poolingthe two data sets and fitting separate models to them. As explained in moredetail below, we apply shrinkage methods penalizing the difference between theregression coefficients for the two data sets. Both the specific penalties we use,and our tuning strategies, reflect our greater interest in the small data set. Ourgoal is to enrich the analysis of the smaller data set using possibly biased datafrom the larger one. Section 2 presents our notation and introduces L1 and L2 penalties on theparameter difference. Most of our results are for the L2 penalty. For the L2penalty, the resulting estimate is a linear combination of the two within sampleestimates. Theorem 1 gives a formula for the degrees of freedom of that estimate.Theorem 2 presents the mean squared error of the estimator and forms the basisfor plug-in estimation of an oracle’s value when an L2 penalty is used. Section 3 considers in detail the case where the regression simplifies to esti-mation of a population mean. In that setting, we can determine how plug-in,bootstrap and cross-validation estimates of tuning parameters behave. We getan expression for how much information the large sample can add. Theorem 3gives a soft-thresholding expression for the estimate produced by L1 penaliza-tion and Theorem 4 can be used to find the penalty parameter that an L1 oraclewould choose when the data are Gaussian. Section 4 presents some simulated examples. We simulate the location prob-lem and find that numerous L2 penalty methods are admissible, varying in howaggressively they use the larger sample. The L1 oracle is outperformed by theL2 oracle in this setting. When the bias is small, the data enrichment methodsimprove upon the small sample, but when the bias is large then it is best to usethe small sample only. Things change when we simulate the regression model.For dimension d 5, data enrichment outperforms the small sample methodin our simulations at all bias levels. We did not see such an inadmissibilityoutcome when we simulated cases with d 4. Section 5 presents our main theoretical result, Theorem 5. When there are 5or more predictors and 10 or more degrees of freedom for error, then some of ourdata enrichment estimators make simply using the small sample inadmissible.The reduction in mean squared error is greatest when the bias is smallest, but 2
    • no matter how large the bias is, we gain an improvement. This result is similarto Stein’s classic result on estimation of a Gaussian mean (Stein, 1956), butthe critical threshold here is dimension 5, not dimension 3. The estimator westudy employs a data-driven weighting of the two within-sample least squaresestimators. We believe that our plug-in estimator is even better than this one. We have tested our method on some Google data. Privacy considerationsdo not allow us to describe it in detail. We have seen data enrichment performbetter than pooling the two samples and better than ignoring the larger one. Wehave also seen data enrichment do worse than pooling but better than ignoringthe larger sample. Our theory allows for pooling the data to be better than dataenrichment. That may just be a sign that the bias between the two populationswas very small. There are many ideas in different literatures on combining non-identicallydistributed data sets in order to share or borrow statistical strength. Of these,the closest to our work is small area estimation (Rao, 2003) used in surveysampling. In chemometrics there is a similar problem called transfer calibration(Feudale et al., 2002). Medicine and epidemiology among other fields use meta-analysis (Borenstein et al., 2009). Data fusion (D’Orazio et al., 2006) is widelyused in marketing. The problem has been studied for machine learning whereit is called transfer learning. An older machine learning term for the underly-ing issue is concept drift. Bayesian statisticians use hierarchical models. Ourmethods are more similar to empirical Bayes methods, drawing heavily on ideasof Charles Stein. A Stein-like result also holds for multiple regression in thecontext of just one sample. The result is intermediate between our two sampleregression setting and the one sample mean problem. In regression, shrinkagemakes the usual MLE inadmissible when in dimension p 4 (with the interceptcounted as one dimension) and a sufficiently large n. See Copas (1983) for adiscussion of shrinkage in regression and Stein (1960) who also obtained thisresult for regression, but under stronger assumptions. A more detailed discussion of these different but overlapping literatures isin Section 6. Some of our proofs are given in an Appendix. There are also settings where one might want to use a small data set to enricha large one. For example the small data set may have a better design matrix orsmaller error variance. Such possibilities are artificial in the motivating contextso we don’t investigate them further here.2 Data enrichment regressionConsider linear regression with a response Y ∈ R and predictors X ∈ Rd . Themodel for the small data set is Yi = Xi β + εi , i∈S dfor a parameter β ∈ R and independent errors εi with mean 0 and variance 2σS . Now suppose that the data in the big data set follow Yi = Xi (β + γ) + εi , i∈B 3
    • where γ ∈ Rd is a bias parameter and εi are independent with mean 0 and 2variance σB . The sample sizes are n in the small sample and N in the bigsample. There are several kinds of departures of interest. It could be, for instance,that the overall level of Y is different in S than in B but that the trends aresimilar. That is, perhaps only the intercept component of γ is nonzero. Moregenerally, the effects of some but not all of the components in X may differ inthe two samples. One could apply hypothesis testing to each component of γbut that is unattractive as the number of scenarios to test for grows as 2d . Let XS ∈ Rn×d and XB ∈ RN ×d have rows made of vectors Xi for i ∈ Sand i ∈ B respectively. Similarly, let YS ∈ Rn and YB ∈ RN be corresponding T Tvectors of response values. We use VS = XS XS and VB = XB XB .2.1 Partial pooling via shrinkage and weightingOur primary approach is to pool the data but put a shrinkage penalty on γ. Weestimate β and γ by minimizing (Yi − Xi β)2 + (Yi − Xi (β + γ))2 + λP (γ) (1) i∈S i∈Bwhere λ ∈ [0, ∞] and P (γ) 0 is a penalty function. There are several reason-able choices for the penalty function, including 2 2 γ 2, XS γ 2, γ 1, and XS γ 1. ˆ ˆ ˆFor each of these penalties, setting λ = 0 leads to separate fits β and β + γ inthe two data sets. Similarly, taking λ = ∞ constrains γ = 0 and amounts to ˆpooling the samples. In many applications one will want to regularize β as well,but in this paper we only penalize γ. The L1 penalties have an advantage in interpretation because they identifywhich parameters or which specific observations might be differentially affected.The quadratic penalties are simpler, so we focus most of this paper on them. Both quadratic penalties can be expressed as XT γ 2 for a matrix XT . 2The rows of XT represent a hypothetical target population of NT items for Tprediction. Or more generally, the matrix Σ = ΣT = XT XT is proportional tothe matrix of mean squares and mean cross-products for predictors in the targetpopulation. If we want to remove the pooling effect from one of the coefficients, suchas the intercept term, then the corresponding column of XT should contain allzeros. We can also constrain γj = 0 (by dropping its corresponding predictor)in order to enforce exact pooling on the j’th coefficient. ˆ A second, closely related approach is to fit βS by minimizing i∈S (Yi − 2 ˆB by minimizing 2Xi β) , fit β i∈B (Yi − Xi β) , and then estimate β by ˆ ˆ ˆ β(ω) = ω βS + (1 − ω)βB 4
    • for some 0 ω 1. In some special cases the estimates indexed by the weightingparameter ω ∈ [n/(n + N ), 1] are a relabeling of the penalty-based estimatesindexed by the parameter λ ∈ [0, ∞]. In other cases, the two families of estimatesdiffer. The weighting approach allows simpler tuning methods. Although wethink that the penalization method may be superior, we can prove strongerresults about the weighting approach. Given two values of λ we consider the larger one to be more ’aggressive’ inthat it makes more use of the big sample bringing with it the risk of more biasin return for a variance reduction. Similarly, aggressive estimators correspondto small weights ω on the small target sample.2.2 Special casesAn important special case for our applications is the cell partition model. Inthe cell partition model, Xi is a vector containing C − 1 zeros and one 1. Themodel has C different cells in it. Cell c has Nc observations from the large dataset and nc observations from the small data set. In an advertising context a cellmay correspond to one specific demographic subset of consumers. The cells maybe chosen exogenously to the given data sets. When the cells are constructedusing the regression data then cross-validation or other methods should be used. T A second special case, useful in theoretical investigations, has XS XS ∝ TXB XB . This is the proportional design matrix case. The simplest case of all is the location model. It is the cell mean modelwith C = 1 cell, and it has proportional design matrices. We can get formulasfor the optimal tuning parameter in the location model and it is also a goodworkbench for comparing estimates of tuning parameters. Furthermore, we areable to get some results for the L1 case in the location model setting.2.3 Quadratic penalties and degrees of freedomThe quadratic penalty takes the form P (γ) = XT γ 2 = γ T VT γ for a matrix 2XT ∈ Rr×d and VT = XT XT ∈ Rd×d . The value r is d or n in the examples Tabove and could take other values in different contexts. Our criterion becomes 2 2 2 YS − XS β + YB − XB (β + γ) + λ XT γ . (2)Here and below x means the Euclidean norm x 2 . Given the penalty matrix XT and a value for λ, the penalized sum of ˆsquares (2) is minimized by βλ and γλ satisfying ˆ ˆ βλ X TX = X TY γλ ˆwhere     XS 0 YS X = XB XB  ∈ R(n+N +r)×2d , and Y = YB  . (3) 1/2 0 0 λ XT 5
    • To avoid uninteresting complications we suppose that the matrix X T X isinvertible. The representation (3) also underlies a convenient computational ˆapproach to fitting βλ and γλ using r rows of pseudo-data just as one does in ˆridge regression. ˆ ˆ −1 T The estimate βλ can be written in terms of βS = VS XS YS and βB = ˆ −1 TVB XB YB as the next lemma shows.Lemma 1. Let XS , XB , and XT in (2) all have rank d. Then for any λ 0, ˆthe minimizers β and γ of (2) satisfy ˆ ˆ ˆ ˆ β = Wλ βS + (I − Wλ )βB ˆ ˆand γ = (VB + λVT )−1 VB (βB − β) for a matrix ˆ −1 −1 Wλ = (VS + λVT VB VS + λVT )−1 (VS + λVT VB VS ). (4)If VT = VS , then Wλ = (VB + λVS + λVB )−1 (VB + λVS ).Proof. The normal equations of (2) are ˆ ˆ ˆ (VB + VS )β = VS βS + VB βB − VB γ ˆ and ˆ ˆ (VB + λVT )ˆ = VB βB − VB β. γSolving the second equation for γ , plugging the result into the first and solving ˆ ˆfor β, yields the result with Wλ = (VS + VB − VB (VB + λVT )−1 VB )−1 VS . Thisexpression for Wλ simplifies as given and simplifies further when VT = VS . The remaining challenge in model fitting is to choose a value of λ. Becausewe are only interested in making predictions for the S data, not the B data,the ideal value of λ is one that optimizes the prediction error on sample S. Onereasonable approach is to use cross-validation by holding out a portion of sampleS and predicting the held-out values from a model fit to the held-in ones as wellas the entire B sample. One may apply either leave-one-out cross-validation ormore general K-fold cross-validation. In the latter case, sample S is split into Knearly equally sized parts and predictions based on sample B and K − 1 partsof sample S are used for the K’th held-out fold of sample S. In some of our applications we prefer to use criteria such as AIC, AICc,or BIC in order to avoid the cost and complexity of cross-validation. Thesealternatives are of most value when data enrichment is itself the inner loop of amore complicated algorithm. To compute AIC and alternatives, we need to measure the degrees of freedomused in fitting the model. We follow Ye (1998) and Efron (2004) in defining thedegrees of freedom to be 1 ˆ df(λ) = 2 cov(Yi , Yi ), (5) σS i∈S 6
    • ˆ ˆwhere YS = XS βλ . Because of our focus on the S data, only the S data appearin the degrees of freedom formula. We will see later that the resulting AICtype estimates based on the degrees of freedom perform similarly to our focusedcross-validation described above.Theorem 1. For data enriched regression the degrees of freedom given at (5)satisfies df(λ) = tr(Wλ ) where Wλ is given in Lemma 1. If VT = VS , then d 1 + λνj df(λ) = (6) j=1 1 + λ + λνj 1/2−1 1/2 1/2where ν1 , . . . , νd are the eigen-values of VS VB VS in which VS is a sym-metric matrix square root of VS .Proof. Please see Section 8.1 in the Appendix. With a notion of degrees of freedom customized to the data enrichmentcontext we can now define the corresponding criteria such as 2df(λ) σ2 AIC(λ) = n log(ˆS (λ)) + n 1 + and n df(λ) df(λ) + 2 σ2 AICc(λ) = n log(ˆS (λ)) + n 1 + 1− , (7) n n n ˆwhere σS (λ) = (n−d)−1 i∈S (Yi −Xi β(λ))2 . The AIC is more appropriate than ˆ2BIC here since our goal is prediction accuracy, not model selection. We preferthe AICc criterion of Hurvich and Tsai (1989) because it is more conservativeas the degrees of freedom become large compared to the sample size. Next we illustrate some special cases of the degrees of freedom formula inTheorem 1. First, suppose that λ = 0, so that there is no penalization on γ.Then df(0) = tr(I) = d as is appropriate for regression on sample S only. We can easily see that the degrees of freedom are monotone decreasing in λ. dAs λ → ∞ the degrees of freedom drop to df(∞) = j=1 νj /(1 + νj ). This canbe much smaller than d. For instance in the proportional design case, VS = nΣand VB = N Σ for a matrix Σ. Then all νj = n/N and so df(∞) = d/(1 + N/n),which is quite small when n N. For the cell partition model, d becomes C, ΣS = diag(nc ) and ΣB = Cdiag(Nc ). In this case df(∞) = c=1 nc /(nc + Nc ) which will usually be muchsmaller than df(0) = C. Monotonicity of the degrees of freedom makes it easy to search for the valueλ which delivers a desired degrees of freedom. We have found it useful to inves-tigate λ over a numerical grid corresponding to degrees of freedom decreasingfrom d by an amount ∆ (such as 0.25) to the smallest such value above df(∞).It is easy to adjoin λ = ∞ (sample pooling) to this list as well. 7
    • 2.4 Predictive mean square errorsHere we develop an oracle’s choice for λ and a corresponding plug-in estimate.We work in the case where VS = VT and we assume that VS has full rank. Given ˆλ, the predictive mean square error is E( XS (β − β) 2 ). 1/2 We will use a symmetric square root VS of VS as well as the matrix M = 1/2 −1 1/2VS VB VS with eigendecomposition M = U DU T where the j’th column ofU is uj and D = diag(νj ).Theorem 2. The predictive mean square error of the data enrichment estimatoris d d (1 + λνj )2 λ2 κ2 j ˆ E XS (β − β) 2 2 = σS + (8) j=1 (1 + λ + λνj )2 j=1 (1 + λ + λνj )2 1/2 1/2 −1where κ2 = uT VS ΘVS uj for Θ = γγ T + σB VB . j j 2Proof. Please see Section 8.2. 2 The first term in (8) is a variance term. It equals dσS when λ = 0 but forλ > 0 it is reduced due to the use of the big sample. The second term representsthe error, both bias squared and variance, introduced by the big sample.2.5 A plug-in methodA natural choice of λ is to minimize the predictive mean square error, whichmust be estimated. We propose a plug-in method that replaces the unknownparameters σS and κ2 from Theorem 2 by sample estimates. For estimates σS 2 j ˆ2 2and κj we choose ˆ d σS (1 + λνj )2 + λ2 κ2 ˆ2 ˆj ˆ λ = arg min . (9) λ 0 (1 + λ + λνj ) 2 j=1 ˆ2 ˆ From the sample data we take σS = YS −XS βS 2 /(n−d). A straightforwardplug-in estimate of Θ is ˆ 2 −1 Θplug = γ γ T + σB VB , ˆˆ ˆ ˆ 1/2 1/2where γ = βB − βS . Now we take κ2 = uT VS ΘVS uj recalling that uj and ˆ ˆj j −1 1/2 1/2νj derive from the eigendecomposition of M = VS VB VS . The resulting ˆoptimization yields an estimate λplug . −1 The estimate Θplug is biased upwards because E(ˆ γ T ) = γγ T + σB VB + γˆ 2 2 −1σS VS . We have used a bias-adjusted plug-in estimate ˆ 2 −1 ˆ 2 −1 ˆ 2 −1 Θbapi = σB VB + (ˆ γ T − σB VB − σS VS )+ γˆ (10) 8
    • where the positive part operation on a symmetric matrix preserves its eigenvec-tors but replaces any negative eigenvalues by 0. Similar results can be obtained ˆ 2 −1with Θbapi = γ γ T − σS VS + . This latter estimator is somewhat simpler but ˆˆ ˆ 2 −1the former has the advantage of being at least as large as σB VB while thelatter can degenerate to 0.3 The location modelThe simplest instance of our problem is the location model where XS is a columnof n ones and XB is a column of N ones. Then the vector β is simply a scalarintercept that we call µ and the vector γ is a scalar mean difference that we callδ. The response values in the small data set are Yi = µ + εi while those in thebig data set are Yi = (µ + δ) + εi . Every quadratic penalty defines the samefamily of estimators as we get using penalty λδ 2 . The quadratic criterion is i∈S (Yi − µ)2 + i∈B (Yi − µ − δ)2 + λδ 2 . TakingVS = n, VB = N and VT = 1 in Lemma 1 yields ¯ ¯ nN + nλ 1 + λ/N µ = ω YS + (1 − ω)YB ˆ with ω = = . nN + nλ + N λ 1 + λ/N + λ/nChoosing a value for ω corresponds to choosing nN (1 − ω) λ= . N ω − n(1 − ω)The degrees of freedom in this case reduce to df(λ) = ω, which ranges fromdf(0) = 1 down to df(∞) = n/(n + N ).3.1 Oracle estimator of ωThe mean square error of µ(ω) is ˆ 2 σS σ2 MSE(ω) = ω 2 + (1 − ω)2 B + δ 2 . n NThe mean square optimal value of ω (available to an oracle) is δ 2 + σB /N 2 ωorcl = 2 2 . δ 2 + σB /N + σS /nPooling the data corresponds to ωpool = n/(N +n) and makes µ equal the pooled ˆ ¯ ¯ ¯mean YP ≡ (nYS + N YB )/(n + N ). Ignoring the large data set corresponds toωS = 1. Here ωpool ωorcl ωS . The oracle’s choice of ω can be used to inferthe oracle’s choice of λ. It is 2 nN (1 − ωorcl ) N σS λorcl = = 2 + σ2 − σ2 . (11) N ωorcl − n(1 − ωorcl ) Nδ B S 9
    • The mean squared error reduction for the oracle is MSE(ωorcl ) = ωorcl , (12) MSE(ωS )after some algebra. If δ = 0, then as min(n, N ) → ∞ we find ωorcl → 1 and theoptimal λ corresponds to simply using the small sample and ignoring the largeone. If we suppose that δ = 0 and N → ∞ then the effective sample size fordata enrichment may be defined using (12) as n δ 2 + σB /N + σS /n 2 2 σ2 n= =n 2 → n+ S. (13) ωorcl δ 2 + σB /N δ2The mean squared error from data enrichment with n observations in the smallsample, using the oracle’s choice of λ, matches that of n IID observations fromthe small sample. We effectively gain up to σS /δ 2 observations worth of infor- 2mation. This is an upper bound on the gain because we will have to estimate λ. Equation (13) shows that the benefit from data enrichment is a small samplephenomenon. The effect is additive not multiplicative on the small sample sizen. As a result, more valuable gains are expected in small samples. In someof the motivating examples we have found the most meaningful improvementsfrom data enrichment on disaggregated data sets, such as specific groups ofconsumers. Some large data sets resemble the union of a great many smallones.3.2 Plug-in and other estimators of ωA natural approach to choosing ω is to plug in sample estimates ˆ ¯ ¯ 1 ¯ 1 ¯ δ0 = YB − YS , ˆ2 σS = (Yi − YS )2 , ˆ2 and σB = (Yi − YB )2 . n N i∈S i∈B ˆ2 ˆ 2 ˆ2 ˆ 2 ˆ2We then use ωplug = (δ0 + σB /N )/(δ0 + σB /N + σS /n) or alternatively λplug = 2 2 ˆ2 ). Our bias-adjusted plug-in method reduces toσS /(ˆB + N δ0ˆ σ ˆ θbapi σ2 ˆ ˆ2 σ2 ωbapi = , where ˆ ˆ2 σ ˆ θbapi = B + δ0 − S − B ˆ ˆ2 θbapi + σS /n N n N + ˆ2 ˆ 2 ˆ2The simpler alternative ωbapi = ((δ0 − σS /n)/δ0 )+ gave virtually identical valuesin our numerical results reported below. If we bootstrap the S and B samples independently M times and choose ωto minimize M 1 ¯ ¯ m∗ ¯ m∗ 2 , YS − ω YS − (1 − ω)YB M m=1then the minimizing value tends to ωplug as M → ∞. Thus bootstrap methodsgive an approach analogous to plug-in methods, when no simple plug-in formula 10
    • exists. This is perhaps not surprising since the bootstrap is often described asan example of a plug-in principle. We can also determine the effects of cross-validation in the location setting,and arrive at an estimate of ω that we can use without actually cross-validating.Consider splitting the small sample into K parts that are held out one by onein turn. The K − 1 retained parts are used to estimate µ and then the squarederror is judged on the held-out part. That is K 1 ¯ ¯ ¯ 2 ωcv = arg min YS,k − ω YS,−k − (1 − ω)YB , ω K k=1 ¯ ¯where YS,k is the average of Yi over the k’th part of S and YS,−k is the averageof Yi over all K − 1 parts excluding the k’th. We suppose for simplicity that ¯ ¯ ¯n = rK for an integer r. In that case YS,−k = (nYS − rYS,k )/(n − r). Now ¯ ¯ ¯ ¯ − YB )(YS,k − YB ) k (YS,−k ωcv = ¯S,−k − YB )2 ¯ (14) k (YAfter some algebra, the numerator of (14) is K ¯ ¯ r ¯ ¯ K(YS − YB )2 − (YS,k − YS )2 n−r k=1and the denominator is 2 K ¯ ¯ r ¯ ¯ K(YS − YB )2 + (YS,k − YS )2 . n−r k=1 ˆ ¯ ¯ K ¯ ¯ ˆ2Letting δ0 = YB − YS and σS,K = (1/K) k=1 (YS,k − YS )2 , we have ˆ2 ˆ 2 δ0 − σS,K /(K − 1) ωcv = . ˆ δ 2 + σ 2 /(K − 1)2 ˆ 0 S,K The only quantity in ωcv which depends on the specific K-way partition ˆ2used is σS,K . If the groupings are chosen by sampling without replacement,then under this sampling, ¯ ¯ s2 E(ˆS,K ) = E((YS,1 − YS )2 ) = σ2 S (1 − 1/K) rusing the finite population correction for simple random sampling, where s2 = Sˆ2σS n/(n − 1). This simplifies to n 1K −1 K −1 σ2 ˆ2 E(ˆS,K ) = σS ˆ2 = σS . n−1r K n−1 11
    • Thus K-fold cross-validation chooses a weighting centered around ˆ2 ˆ 2 δ0 − σS /(n − 1) ωcv,K = . (15) ˆ2 ˆ 2 δ0 + σS /[(n − 1)(K − 1)]Cross-validation has the strange property that ω < 0 is possible. This can arisewhen the bias is small and then sampling alone makes the held-out part of thesmall sample appear negatively correlated with the held-in part. The effect canappear with any K. We replace any ωcv,K < n/(n + N ) by n/(n + N ). Leave-one-out cross-validation has K = n (and r = 1) so that ˆ2 ˆ 2 δ0 − σS /n ωcv,n ≈ . ˆ δ 2 + σ 2 /n2 ˆ 0 SSmaller K, such as choosing K = 10 versus n, tend to make ωcv,K smaller ¯ ˆresulting in less weight on YS . In the extreme with δ0 = 0 we find ωcv,K ≈−(K − 1) so 10 fold CV is then very different from leave-one-out CV.Remark 1. The cross-validation estimates do not make use of σB becauseˆ2the large sample is held fixed. They are in this sense conditional on the largesample. Our oracle takes account of the randomness in set B, so it is notconditional. One can define a conditional oracle without difficulty, but we omitthe details. Neither the bootstrap nor the plug-in methods are conditional, asthey approximate our oracle. Comparing cross-validation to the oracle we expect 2this to be reasonable if σB /N min(δ 2 , σs /n). Taking ωbapi as a representor 2of unconditional methods and ωcv,n as a representor of conditional ones, wesee that the latter has a larger denominator while they both have the same ˆ2 ˆ2numerator, at least when δ0 > σS /n. This suggests that conditional methodsare more aggressive and we will see this in the simulation results.3.3 L1 penaltyFor the location model, it is convenient to write the L1 penalized criterion as (Yi − µ)2 + (Yi − µ − δ)2 + 2λ|δ|. (16) i∈S i∈B ˆThe minimizers µ and δ satisfy ˆ ¯ ¯ ˆ nYS + N (YB − δ) µ= ˆ , and n+N (17) ˆ ¯ δ = Θ(YB − µ; λ/N ) ˆfor the well-known soft thresholding operator Θ(z; τ ) = sign(z)(|z| − τ )+ . ¯ ¯ The estimate µ ranges from YS at λ = 0 to the pooled mean YP at λ = ∞. ˆ ¯ ¯ ¯In fact µ reaches YP at a finite value λ = λ∗ ≡ nN |YB − YS |/(N + n) and both ˆ ˆµ and δ are linear in λ on the interval [0, λ∗ ]:ˆ 12
    • Theorem 3. If 0 λ ¯ ¯ nN |YB − YS |/(n + N ) then the minimizers of (16) are ¯ λ ¯ ¯ µ = YS + sign(YB − YS ), and ˆ n (18) ˆ ¯ ¯ N +n ¯ ¯ δ = YB − YS − λ sign(YB − YS ). NnIf λ > nN |Y ˆ ¯B − YS |/(n + N ) then they are δ = 0 and µ = YP . ¯ ˆ ¯ ¯ ¯Proof. If λ > nN |YB − YS |/(n + N ) then we may find directly that with anyvalue of δ > 0 and corresponding µ given by (17), the derivative of (16) with ˆ ˆrespect to δ is positive. Therefore δ 0 and a similar argument gives δ 0, so ˆ ¯ ¯that δ = 0 and then µ = (nYS + N YB )/(n + N ). ˆ Now suppose that λ λ∗ . We verify that the quantities in (18) jointly ˆsatisfy equations (17). Substituting δ from (18) into the first line of (17) yields ¯ ¯ nYS + N (YS + λ(N + n)η/(N n)) λ ¯ ¯ ¯ = YS + sign(YB − YS ), n+N nmatching the value in (18). Conversely, substituting µ from (18) into the second ˆline of (17) yields ¯ λ ¯ ¯ λ ¯ ¯ λ Θ YB − µ; ˆ = Θ YB − YS − sign(YB − YS ); . (19) N n N ¯ ¯ ¯Because of the upper bound on λ, the result is YB − YS −λ(1/n+1/N )sign(YB −¯S ) which matches the value in (18).Y With an L1 penalty on δ we find from Theorem 3 that ¯ ¯ ¯ µ = YS + min(λ, λ∗ )sign(YB − YS )/n. ˆ ¯ ¯That is, the estimator moves YS towards YB by an amount λ/n except that ¯P . The optimal choice of λ is notit will not move past the pooled average Yavailable in closed form.3.4 An L1 oracleUnder a Gaussian data assumption, it is possible to derive a formula for themean squared error of the L1 penalized data enrichment estimator at any valueof λ. While it is unwieldy, the L1 mean square error formula is computable andwe can optimize it numerically to compute an oracle formula. As with the L2setting we must plug in estimates of some unknowns first before optimizing. Thisallows us to compare L1 to L2 penalization in the location setting simulationsof Section 4. To obtain a solution we make a few changes of notation just for this subsec- ˆ ¯tion. We replace λ/n by λ and define a = N/(N + n) and use δ0 = YB − YS . ¯Then ¯ ˆ ˆ ¯ ¯ ˆ µ(λ) = (YS + λ · sign(δ0 ))I(|δ0 |a λ) + (aYB + (1 − a)YS )I(|δ0 |a < λ) ˆ ¯ ¯ ˆ ˆ ˆ = (aYB + (1 − a)YS ) − (aδ0 − λ · sign(δ0 ))I(|δ0 |a λ). (20) 13
    • Without loss of generality we may center and scale the Gaussian distributions ¯ ¯so that YS ∼ N (0, 1) and YB ∼ N (δ, σ 2 ). The next Theorem defines thedistributions of Yi for i ∈ S and i√ B to obtain that scaling. We also introduce ∈ √ ˜constants b = σ 2 /(1+σ 2 ), δ = δ/ 1 + σ 2 , x = (λ/a)/ 1 + σ 2 , and the function ˜g(x) = Φ(x) − xϕ(x) where ϕ and Φ are the N (0, 1) probability density functionand cumulative distribution function, respectively. iid iidTheorem 4. Suppose that Yi ∼ N (0, n) for i ∈ S independently of Yi ∼N (δ, σ 2 N ) for i ∈ B. Let µ be the L1 estimate from (20), using parameter ˆλ 0. Then the predictive mean squared error is E(ˆ(λ)2 ) = a2 δ 2 + (a + b − 1)2 (1 + σ 2 ) + b µ x ˜ x ˜ − a(a + 2b − 2)(1 + σ 2 )[1 − g(˜ − δ) + g(−˜ − δ)] x ˜ − [2aλ + 2(a + b − 1)(aδ − λ)] 1 + σ 2 ϕ(˜ − δ) (21) x ˜ − [2aλ − 2(a + b − 1)(aδ + λ)] 1 + σ 2 ϕ(−˜ − δ) x ˜ x ˜ − (aδ − λ)(aδ + λ)[1 − Φ(˜ − δ) + Φ(−˜ − δ)].Proof. Please see Section 8.3 in the Appendix.3.5 Cell meansThe cell mean setting is simply C copies of the location problem. One couldestimate separate values of λ in each of them. Here we remark briefly on theconsequences of using a common λ or ω over all cells. We do not simulate the various choices. We look instead at what assumptionswould make them match the oracle formula. In applications we can choose themethod whose matching assumptions are more plausible. In the L2 setting, one could choose a common λ using either the penalty C 2 C 2λ c=1 nc δc or λ c=1 δc . Call these cases L2,n and L2,1 respectively. Droppingthe subscript c we find 1 + λn/N 1 + λ/N ωL2,n = , and ωL2,1 = 1 + λn/N + λ 1 + λ/N + λ/ncompared to ωorcl = (nδ 2 + σB n/N )/(nδ 2 + σB n/N + σS ). 2 2 2 We can find conditions under which a single value of λ recovers the oracle’s 2 2 2 2weighting. For ωL2,1 these are σB,c = σS,c in all cells as well as λ = σS,c /δc 2 2 2 2constant in c. For ωL2,n these are σB,c = σS,c and λ = σS,c /(nc δc ) constantin c. The L2,1 criterion looks more reasonable here because we have no reason √to expect the relative bias δc /σS,c to be inversely proportional to nc . 2 2 For a common ω to match the oracle, we need σB,c /Nc = σS,c /nc to hold in 2 2all cells as well as a σS,c /(nc δc ) to be constant in c. The first clause seems quiteunreasonable and so we prefer common-λ approaches to common weights. For a common L1 penalty, we cannot get good expressions for the weightvariable ω. But we can see how the L1 approach shifts the mean. An L1,1 14
    • ¯ ¯approach moves µc from YS,c towards YB,c by the amount λ/nc in cell c, but ˆ ¯ ¯ ¯not going past the pooled mean YP,c = (nYS,c + N YB,c )/(N + n) for that cell. ¯The other approaches use different shifts. An L1,n approach moves µc from YS,c ˆ ¯ ¯towards YB,c by the amount λ in cell c (but not past YP,c ). It does not seemreasonable to move µc by the same distance in all cells, or to move them by an ˆ ¯amount proportional to 1/nc and stopping at YP,c doesn’t fix this. We could √use a common moving distance proportional to 1/ nc (which is the order of ¯ C √statistical uncertainty in YS,c ) by using the penalty c=1 nc |γc |.4 Numerical examplesWe have simulated some special cases of the data enrichment problem. Firstwe simulate the pure location problem which has d = 1. Then we consider theregression problem with varying d.4.1 LocationWe simulated Gaussian data for the location problem. The large sample hadN = 1000 observations and the small sample had n = 100 observations: Xi ∼ 2 2N (µ, σS ) for i ∈ S and Xi ∼ N (µ + δ, σB ) for i ∈ B. Our data had µ = 0 and 2 2σS = σB = 1. We define the relative bias as |δ| √ δ∗ = √ = n|δ|. σS / nWe investigated a range of relative bias values. It is only a small simplification 2 2 2to take σS = σB . Doubling σB has a very similar effect to halving N . Equalvariances might have given a slight relative advantage to the hypothesis testingmethod as described below. The accuracy of our estimates is judged by the relative mean squared error µ 2 ˆ ¯E((ˆ − µ)2 )/(σS /n). Simply taking µ = YS attains a relative mean squarederror of 1. Figure 1 plots relative mean squared error versus relative bias for a collectionof estimators, with the results averaged over 10,000 simulated data sets. We usedthe small sample only method as a control variate. The solid curve in Figure 1 shows the oracle’s value. It lies strictly belowthe horizontal S-only line. None of the competing curves lie strictly below that ¯line. None can because YS is an admissible estimator for d = 1 (Stein, 1956).The second lowest curve in Figure 1 is for the oracle using the L1 version ofthe penalty. The L1 penalized oracle is not as effective as the L2 oracle andit is also more difficult to approximate. The highest observed predictive MSEscome from a method of simply pooling the two samples. That method is verysuccessful when the relative bias is near zero but has an MSE that becomesunbounded as the relative bias increases. Now we discuss methods that use the data to decide whether to use thesmall sample only, pool the samples or choose an amount of shrinkage. We may 15
    • 2.5 L2 oracle plug−in leave−1−out S only hypo. testing 2.0 10−fold 5−fold Relative predictive MSE AICc pooling 1.5 L1 oracle 1.0 0.5 0.0 0 2 4 6 8 Relative biasFigure 1: Numerical results for the location problem. The horizontal line at 1represents using the small sample only and ignoring the large one. The lowestline shown is for an oracle choosing λ in the L2 penalization. The green curveshows an oracle using the L1 penalization. The other curves are as described inthe text.list them in order of their worst case performance. From top (worst) to bottom(best) in Figure 1 they are: hypothesis testing, 5-fold cross-validation, 10-foldcross-validation, AICc, leave-one-out cross-validation, and then the simple plug-in method which is minimax among this set of choices. AICc and leave-one-outare very close. Our cross-validation estimators used ω = max(ωcv,K , n/(n + N ))where ωcv,K is given by (15). The hypothesis testing method is based on a two-sample t-test of whetherδ = 0. If the test is rejected at α = 0.05, then only the small sample data isused. If the test is not rejected, then the two samples are pooled. That test was 2 2based on σB = σS which may give hypothesis testing a slight advantage in thissetting (but it still performed poorly). The AICc method performs virtually identically to leave-one-out cross-validation 16
    • over the whole range of relative biases. None of these methods makes any other one inadmissible: each pair of curvescrosses. The methods that do best at large relative biases tend to do worstat relative bias near 0 and vice versa. The exception is hypothesis testing.Compared to the others it does not benefit fully from low relative bias but itrecovers the quickest as the bias increases. Of these methods hypothesis testingis best at the highest relative bias, K-fold cross-validation with small K is bestat the lowest relative bias, and the plug-in method is best in between. Aggressive methods will do better at low bias but worse at high bias. Whatwe see in this simulation is that K-fold cross-validation is the most aggressivefollowed by leave-one-out and AICc and that plug-in is least aggressive. Thesefindings confirm what we saw in the formulas from Section 3. Hypothesis testingdoes not quite fit into this spectrum: its worst case performance is much worsethan the most aggressive methods yet it fails to fully benefit from pooling whenthe bias is smallest. Unlike aggressive methods it does very well at high bias.4.2 RegressionWe simulated our data enrichment method for the following scenario. The smallsample had n = 1000 observations and the large sample had N = 10,000. Thetrue β was taken to be 0. This has no loss of generality because we are notshrinking β towards 0. The value of γ was taken uniformly on the unit spherein d dimensions and then multiplied by a scale factor that we varied. We considered d = 2, 4, 5 and 10. All of our examples included an interceptcolumn of 1s in both XS and XB . The other d−1 predictors were sampled from aGaussian distribution with covariance CS or CB , respectively. In one simulationwe took CS and CB to be independent Wishart(I, d − 1, d − 1) random matrices.In the other they were sampled as CS = Id−1 + ρuuT and CB = Id−1 + ρvv Twhere u and v are independently and uniformly sampled from the unit spherein Rd−1 and ρ 0 is a parameter that measures the lack of proportionalitybetween covariances. We chose ρ = d so that the sample specific portion of thevariance has comparable magnitude to the common part. We scaled the results so that regression using sample S only yields a meansquared error of 1 at all values of the relative bias. We computed the risk of anL2 oracle, as well as sampling errors when λ is estimated by the plug-in formula,by our bias-adjusted plug-in formula and via AICc. In addition we considered ˆ ˆthe simple weighted combination ω βS + (1 − ω)βB with ω chosen by the plug-informula. Figure 2 shows the results. For d = 2 and also d = 4 none of our methodsuniversally outperforms simply using the S sample. For d = 5 and d = 10, allof our estimators have lower mean squared error than using the S sample alone,though the difference becomes small at large relative bias. We find in this setting that our bias-adjusted plug-in estimator closely matchesthe AICc estimate. The relative performance of the other methods varies withthe problem. Plain plug-in always seemed worse than AICc and adjusted plug-in at low relative bias and better than these at high biases. Plug-in’s gains 17
    • L2 oracle AICc weighting plug−in adj plug−in 0123456 0123456 Wishart Wishart Wishart Wishart d=2 d=4 d=5 d=10 1.2 1.0 0.8 Relative predictive MSE 0.6 0.4 0.2 orthogonal orthogonal orthogonal orthogonal d=2 d=4 d=5 d=10 1.2 1.0 0.8 0.6 0.4 0.2 0123456 0123456 Relative biasFigure 2: This figure shows relative predicted MSE versus relative bias for twosimulated regression problems described in the text.at high biases appear to be less substantial than its losses at low biases. Ofthe other methods, simple scalar weighting is worst for the high dimensionalWishart case without being better in the other cases. The best overall choicesare bias-adjusted plug-in and AICc.5 Proportional design and inadmissibilityThe proportional design case has VB ∝ VS and VT ∝ VS . Suppose that VB =N Σ, VS = nΣ and VT = Σ for a positive definite matrix Σ. Our data enrichmentestimator simplifies greatly in this case. The weighting matrix Wλ in Lemma 1simplifies to Wλ = ωI where ω = (N + nλ)/(N + nλ + N λ). As a result ˆ ˆ ˆβ = ω βS + (1 − ω)βB and we can find and estimate an oracle’s value for ω. Ifdifferent constants of proportionality, say M and m are used, then the effect islargely to reparameterize λ giving the same family of estimates under differentlabels. There is one difference though. The interval of possible values for ω is[n/N, 1] in our case versus [m/M, 1] for the different constants. To attain the 18
    • same sets of ω values could require use of negative λ. ˆ ˆ The resulting estimator of β with estimated ω dominates βS (making itinadmissible) under mild conditions. These conditions given below even allowviolations of the proportionality condition VB ∝ VS but they still require VT ∝VS . Among these conditions we will need the model degrees of freedom to beat least 5, and it will suffice to have the error degrees of freedom in the smallsample regression be at least 10. The result also requires a Gaussian assumptionin order to use a lemma of Stein’s. iid 2 We write YS = XS β + εS and YB = XB (β + γ) + εB for εS ∼ N (0, σS ) iid 2 ˆand εB ∼ N (0, σB ). The data enrichment estimators are β(λ) and γ (λ). The ˆparameter of most interest is β. If we were to use only the small sample we ˆ ˆwould get βS = (XS XS )−1 XS YS = β(0). T T In the proportional design setting, the mean squared prediction error is ˆ f (ω) = E( XT (β(ω) − β) 2 ) = tr((ω 2 σS Σ−1 + (1 − ω)2 (γγ T + σB Σ−1 ))Σ). 2 S 2 BThis error is minimized by the oracle’s parameter value tr((γγ T + σB Σ−1 )Σ) 2 B ωorcl = . tr((γγ T + σB ΣB )Σ) + σS tr(Σ−1 Σ) 2 −1 2 S With ΣS = nΣ and ΣB = N Σ, we find γ T Σγ + dσB /N 2 ωorcl = 2 /N + dσ 2 /n . γ T Σγ + dσB SThe plug-in estimator is γ T Σˆ + dˆB /N ˆ γ σ2 ωplug = ˆ 2 /N + dˆ 2 /n (22) γ T Σˆ ˆ γ + dˆB σ σS ˆ2 ˆ ˆwhere σS = YS − XS βS 2 /(n − d) and σB = YB − XB βB 2 /(N − d). We ˆ2 σ2will have reason to generalize this plug-in estimator. Let h(ˆB ) be any nonneg- ˆ2 σ2ative measurable function of σB with E(h(ˆB )) < ∞. The generalized plug-inestimator is γ T Σˆ + h(ˆB ) ˆ γ σ2 ωplug,h = ˆ 2 ) + dˆ 2 /n . (23) γ T Σˆ ˆ γ + h(ˆB σ σS ˆ Here are the conditions under which βS is made inadmissible by the dataenrichment estimator.Theorem 5. Let XS ∈ Rn×d and XB ∈ RN ×d be fixed matrices with XS XS = T T 2nΣ and XB XB = N ΣB where Σ and ΣB both have rank d. Let YS ∼ N (XS β, σS In ) 2independently of YB ∼ N (XB (β + γ), σB IN ). If d 5 and m ≡ n − d 10,then ˆω E( XT β(ˆ ) − XT β 2 ˆ ) < E( XT βS − XT β 2 ) (24) 19
    • Tholds for any nonrandom matrix XT with XT XT = Σ and any ω = ωplug,h given ˆ ˆby (23).Proof. Please see Section 8.5 in the Appendix. The condition on m can be relaxed at the expense of a more complicatedstatement. From the details in the proof, it suffices to have d 5 and m(1 −4/d) 2. The result in Theorem 5 is similar to the Stein estimator result. There, thesample mean of a Gaussian population is an inadmissible estimator in d = 3dimensions or higher but is admissible in 1 or 2 dimensions. Here there are twosamples to pool and the change takes place at d = 5. Because E(ˆ T Σˆ ) = γ T Σγ + dσS /n + dσB /N it is biased high and so there- γ γ 2 2fore is ωplug , making it a little conservative. We can make a bias adjustment, ˆreplacing γ T Σˆ by γ T Σˆ − dˆS /n − dˆB /N . The result is ˆ γ ˆ γ σ2 σ2 γ T Σˆ − dˆS /n ˆ γ σ2 n ωbapi = ˆ T Σˆ ∨ , (25) γ γ ˆ n+Nwhere values below n/(n + N ) get rounded up. This bias-adjusted estimate of ω ˆ2 ˆ2 ˆ2is not covered by Theorem 5. Subtracting only σB /N instead of σB /N + σS /nis covered, yielding γ T Σˆ ˆ γ ωbapi = ˆ , (26) γ T Σˆ ˆ γ σ2 + dˆS /n σ2which corresponds to taking h(ˆB ) ≡ 0 in equation (23).6 Related literaturesThere are many disjoint literatures that study problems like the one we havepresented. They do not seem to have been compared before and the literaturesseem to be mostly unaware of each other. We give a summary of them here,kept brief because of space limitations. The key ingredient in this problem is that we care more about the smallsample than the large one. Were that not the case, we could simply pool all thedata and fit a model with indicator variables picking out one or indeed manydifferent small areas. Without some kind of regularization, that approach endsup being similar to taking λ = 0 and hence does not borrow strength. The closest match to our problem setting comes from small area estimation insurvey sampling. The monograph by Rao (2003) is a comprehensive treatmentof that work and Ghosh and Rao (1994) provide a compact summary. In thatcontext the large sample may be census data from the entire country and thesmall sample (called the small area) may be a single county or a demographicallydefined subset. Every county or demographic group may be taken to be thesmall sample in its turn. The composite estimator (Rao, 2003, Chapter 4.3) is aweighted sum of estimators from small and large samples. The estimates being 20
    • combined may be more complicated than regressions, involving for exampleratio estimates. The emphasis is usually on scalar quantities such as smallarea means or totals, instead of the regression coefficients we consider. Oneparticularly useful model (Ghosh and Rao, 1994, Equation (4.2)) allows thesmall areas to share regression coefficients apart from an area specific intercept.Then BLUP estimation methods lead to shrinkage estimators similar to ours. The methods of Copas (1983) can be applied to our problem and will result ˆin another combination that makes βS inadmissible. That combination requiresonly four dimensional regressions instead of the five used in Theorem 5 forpooling weights. That combination yields less aggressive predictions. In chemometrics a calibration transfer problem (Feudale et al., 2002) comesup when one wants to adjust a model to new spectral hardware. There may be aregression model linking near-infrared spectroscopy data to a property of somesample material. The transfer problem comes up for data from a new machine.Sometimes one can simply run a selection of samples through both machinesbut in other cases that is not possible, perhaps because one machine is remote(Woody et al., 2004). Their primary and secondary instruments correspond toour small and big samples respectively. Their emphasis is on transfering eitherprincipal components regression or partial least squares models, not the plainregressions we consider here. A common problem in marketing is data fusion, also known as statisticalmatching. Variables (X, Y ) are measured in one sample while variables (X, Z)are measured in another. There may or may not be a third sample with somemeasured triples (X, Y, Z). The goal in data fusion is to use all of the data toform a large synthetic data set of (X, Y, Z) values, perhaps by imputing missingZ for the (X, Y ) sample and/or missing Y for the (X, Z) sample. When there isno (X, Y, Z) sample some untestable assumptions must be made about the jointdistribution, because it cannot be recovered from its bivariate margins. Thetext by D’Orazio et al. (2006) gives a comprehensive summary of what can andcannot be done. Many of the approaches are based on methods for handlingmissing data (Little and Rubin, 2009). Our problem is an instance of what machine learning researchers call do-main adaptation. They may have fit a model to a large data set (the ’source’)and then wish to adapt that model to a smaller specialized data set (the ’tar-get’). This is especially common in natural language processing. NIPS 2011included a special session on domain adaptation. In their motivating problemsthere are typically a very large number of features (e.g., one per unique wordappearing in a set of documents). They also pay special attention to problemswhere many of the data points do not have a measured response. Quite oftena computer can gather high dimensional X while a human rater is necessaryto produce Y . Daum´ (2009) surveys various wrapper strategies, such as fit- eting a model to weighted combinations of the data sets, deriving features fromthe reference data set to use in the target one and so on. Cortes and Mohri(2011) consider domain adaptation for kernel-based regularization algorithms,including kernel ridge regression, support vector machines (SVMs), or support 21
    • vector regression (SVR). They prove pointwise loss guarantees depending onthe discrepancy distance between the empirical source and target distributions,and demonstrate the power of the approach on a number of experiments usingkernel ridge regression. A related term in machine learning is concept drift (Widmer and Kubat,1996). There a prediction method may become out of date as time goes on.The term drift suggests that slow continual changes are anticipated, but theyalso consider that there may be hidden contexts (latent variables in statisticalteminology) affecting some of the data.7 ConclusionsWe have studied a middle ground between pooling a large data set into a smallertarget one and ignoring it completely. In dimension d 5 only a small number oferror degrees of freedom suffice to make ignoring the large data set inadmissible.When there is no bias, pooling the data sets may be optimal. Theorem 5 doesnot say that pooling is inadmissible. When there is no bias, pooling the datasets may be optimal. We prefer our hybrid because the risk from pooling growswithout bound as the bias increases.AcknowledgmentsWe thank the following people for helpful discussions: Penny Chu, CorinnaCortes, Tony Fagan, Yijia Feng, Jerome Friedman, Jim Koehler, Diane Lambert,Elissa Lee and Nicolas Remy.ReferencesBorenstein, M., Hedges, L. V., Higgins, J. P. T., and Rothstein, H. R. (2009). Introduction to Meta-Analysis. Wiley, Chichester, UK.Copas, J. B. (1983). Regression, prediction and shrinkage. Journal of the Royal Statistical Society, Series B, 45(3):311–354.Cortes, C. and Mohri, M. (2011). Domain adaptation in regression. In Proceed- ings of The 22nd International Conference on Algorithmic Learning Theory (ALT 2011), pages 308–323, Heidelberg, Germany. Springer.Daum´, H. (2009). Frustratingly easy domain adaptation. (arXiv:0907.1815). eD’Orazio, M., Di Zio, M., and Scanu, M. (2006). Statistical Matching: Theory and Practice. Wiley, Chichester, UK.Efron, B. (2004). The estimation of prediction error. Journal of the American Statistical Association, 99(467):619–632. 22
    • Feudale, R. N., Woody, N. A., Tan, H., Myles, A. J., Brown, S. D., and Ferr´, J. e (2002). Transfer of multivariate calibration models: a review. Chemometrics and Intelligent Laboratory Systems, 64:181–192.Ghosh, M. and Rao, J. N. K. (1994). Small area estimation: an appraisal. Statistical Science, 9(1):55–76.Hurvich, C. and Tsai, C. (1989). Regression and time series model selection in small samples. Biometrika, 76(2):297–307.Little, R. J. A. and Rubin, D. B. (2009). Statistical Analysis with Missing Data. John Wiley & Sons Inc., Hoboken, NJ, 2nd edition.Rao, J. N. K. (2003). Small Area Estimation. Wiley, Hoboken, NJ.Stein, C. M. (1956). Inadmissibility of the usual estimator for the mean of a mul- tivariate normal distribution. In Proceedings of the Third Berkeley symposium on mathematical statistics and probability, volume 1, pages 197–206.Stein, C. M. (1960). Multiple regression. In Olkin, I., Ghurye, S. G., Hoeffding, W., Madow, W. G., and Mann, H. B., editors, Contributions to probability and statistics: essays in honor of Harald Hotelling. Stanford University Press, Stanford, CA.Stein, C. M. (1981). Estimation of the mean of a multivariate normal distribu- tion. The Annals of Statistics, 9(6):1135–1151.Widmer, G. and Kubat, M. (1996). Learning in the presence of concept drift and hidden contexts. Machine Learning, 23:69–101.Woody, N. A., Feudale, R. N., Myles, A. J., and Brown, S. D. (2004). Transfer of multivariate calibrations between four near-infrared spectrometers using orthogonal signal correction. Analytical Chemistry, 76(9):2596–2600.Ye, J. (1998). On measuring and correcting the effects of data mining and model selection. Journal of the American Statistical Association, 93:120–131.8 Appendix: proofsThis appendix presents proofs of the results in this article. They are groupedinto sections by topic, with some technical supporting lemmas separated intotheir own sections. 23
    • 8.1 Proof of Theorem 1 −2 ˆ −2First df(λ) = σS tr(cov(XS β, YS )) = σS tr(XS Wλ (XS XS )−1 XS σS ) = tr(Wλ ). T T 2 1/2 −1 1/2Next with XT = XS , and M = VS VB VS , −1 −1 tr(Wλ ) = tr(VS + λVS VB VS + λVS )−1 (VS + λVS VB VS ). 1/2 −1/2We place VS VS between these factors and absorb them left and right. Thenwe reverse the order of the factors and repeat the process, yielding tr(Wλ ) = tr(I + λM + λI)−1 (I + λM ).Writing M = U diag(ν1 , . . . , νd )U T for an orthogonal matrix U and simplifyingyields the result.8.2 Proof of Theorem 2 ˆ ˆ ˆProof. First E( XT β − XT β 2 ) = tr(VS E((β − β)(β − β)T )). Next using W =Wλ , we make a bias-variance decomposition, ˆ ˆ ˆ ˆ E (β − β)(β − β)T = (I − W )γγ T (I − W )T + cov(W βS ) + cov((I − W )βB ) −1 = σS W VS W T + (I − W )Θ(I − W )T , 2 2 −1 ˆfor Θ = γγ T + σB VB . Therefore E XS (β − β) 2 −1 = σS tr(VS W VS W T ) + 2tr(Θ(I − W )T VS (I − W )). 1/2 −1/2 Now we introduce W = VS W VS finding 1/2 −1/2 W = VS (VB + λVS + λVB )−1 (VB + λVS )VS = (I + λM + λI)−1 (I + λM ) = U DU T ,where D = diag((1 + λνj )/(1 + λ + λνj )). This allows us to write the first termof the mean squared error as d −1 (1 + λνj )2 σS tr(VS W VS W T ) = σS tr(W W T ) = σS 2 2 2 . j=1 (1 + λ + λvj )2 1/2 1/2For the second term, let Θ = VS ΘVS . Then tr Θ(I − W )T VS (I − W ) = tr(Θ(I − W )T (I − W )) ˜ = tr(ΘU (I − D)2 U T ) d 1/2 1/2 uT VS ΘVS uk = λ2 k . (1 + λ + λνk )2 k=1 24
    • 8.3 Proof of Theorem 4We will use this small lemma.Lemma 2. If X ∼ N (0, 1), then E(XI(X x)) = −ϕ(x), E(X 2 I(X x)) =g(x) and E(X 2 I(|X + b| x)) = 1 − g(x − b) + g(−x − b)where g(x) = Φ(x) − xϕ(x). x xProof. First E(XI(X x)) = −∞ zϕ(z) dz = − −∞ ϕ (z) dz = −ϕ(x). Next, x x x x z 2 ϕ(z) dz = − zϕ (z) dz = − ϕ(z) dz − zϕ(z) = g(x). −∞ −∞ −∞ −∞Then E(X 2 I(|X + b| x)) = E(X 2 I(X + b x)) + E(X 2 I(X + b −x)) 2 = E(X (1 − I(X + b x)) + g(−x − b)) 2 2 = E(X ) − E(X I(X + b x)) + g(−x − b) = 1 − g(x − b) + g(−x − b). Now we prove Theorem 4. We let ˆ ¯ ¯ = δ0 − δ and η = YB + σ 2 YS − δ. Then ¯ η− cov( , η) = 0, ∼ N (0, 1 + σ 2 ), η ∼ N (0, σ 2 + σ 4 ), and YS = . 1 + σ2Recall that we defined b = σ 2 /(1 + σ 2 ), and so ¯ η− YB = δ + η − σ 2 = δ + b + (1 − b)η. 1 + σ2Also with a = N/(N + n), ¯ ¯ η− aYB + (1 − a)YS = aδ + a(b + (1 − b)η) + (1 − a) 1 + σ2 = aδ + (ab − (1 − a)(1 − b)) + (a(1 − b) + (1 − a)(1 − b))η = aδ + (a + b − 1) + (1 − b)η.Letting S = + δ, we have µ = aδ + (a + b − 1) + (1 − b)η − (aS − λ · sign(S))I(|S| ˆ a−1 λ)from which the MSE can be calculated: E(ˆ2 (λ)) = E (aδ + (a + b − 1) + (1 − b)η)2 µ − 2E (aδ + (a + b − 1) + (1 − b)η)(aS − λ · sign(S))I(|S| a−1 λ) + E (aS − λ · sign(S))2 I(|S| a−1 λ) ≡ [1] − 2 × [2] + [3]. 25
    • First [1] = a2 δ 2 + (a + b − 1)2 (1 + σ 2 ) + (1 − b)2 σ 2 (1 + σ 2 ) = a2 δ 2 + (a + b − 1)2 (1 + σ 2 ) + b. ¯Next using Φ(x) = 1 − Φ(x),[2] = E [aδ + (a + b − 1) ][a(S) − λ · sign(S)]I(|S| a−1 λ) =E aδ(aδ − λ · sign(S)) + [aδa + (a + b − 1)(aδ − λ · sign(S))] + a(a + b − 1) 2 }I(|S| a−1 λ) = E aδ(aδ − λ · sign(S))I(|S| a−1 λ) + E a2 δ + (a + b − 1)(aδ − λ · sign(S)) I(|S| a−1 λ) + E a(a + b − 1) 2 I(|S| a−1 λ) −1 −1 ¯ a λ − δ + aδ(aδ + λ)Φ −a λ − δ = aδ(aδ − λ)Φ √ √ 1 + σ2 1 + σ2 2 −1 + [a δ + (a + b − 1)(aδ − λ)]E I(S a λ) + [a2 δ + (a + b − 1)(aδ + λ)]E I(S < −a−1 λ) + a(a + b − 1)E 2 I(|S| a−1 λ) . √ √ ˜ Recall that we defined x = a−1 λ/ 1 + σ 2 and δ = δ/ 1 + σ 2 . Now using ˜Lemma 2 δ a−1 λ E 2 I(|S| a−1 λ) = (1 + σ 2 )E X 2 I |X + | (1 + σ 2 ) (1 + σ 2 ) x ˜ x ˜ = (1 + σ 2 )[1 − g(˜ − δ) + g(−˜ − δ)].Next E I(|S| a−1 λ) = E I(S a−1 λ) + E I(S −a−1 λ) = −E I(S a−1 λ) + E I(S −a−1 λ) = x ˜ 1 + σ 2 ϕ(˜ − δ) − x ˜ 1 + σ 2 ϕ(−˜ − δ). So, ¯ x ˜ x ˜ [2] = aδ(aδ − λ)Φ(˜ − δ) + aδ(aδ + λ)Φ(−˜ − δ) + [a2 δ + (a + b − 1)(aδ − λ)] x ˜ 1 + σ 2 ϕ(˜ − δ) − [a2 δ + (a + b − 1)(aδ + λ)] x ˜ 1 + σ 2 ϕ(−˜ − δ) 2 x ˜ x ˜ + a(a + b − 1)(1 + σ )[1 − g(˜ − δ) + g(−˜ − δ)]. 26
    • Finally, [3] = E [a(S) − λ · sign(S)]2 I(|S| a−1 λ) = E [a2 2 + 2a(aδ − λ · sign(S)) + (aδ − λ · sign(S))2 ]I(|S| a−1 λ) = E a2 2 I(|S| a−1 λ) + 2E a(aδ − λ · sign(S)) I(|S| a−1 λ) + E (aδ − λ · sign(S))2 I(|S| a−1 λ) x ˜ x ˜ = a2 (1 + σ 2 )[1 − g(˜ − δ) + g(−˜ − δ)] x ˜ x ˜ + 2a(aδ − λ) 1 + σ 2 ϕ(˜ − δ) − 2a(aδ + λ) 1 + σ 2 ϕ(−˜ − δ) ¯ x ˜ x ˜ + (aδ − λ)2 Φ(˜ − δ) + (aδ + λ)2 Φ(−˜ − δ). Hence, the MSE is E µ2 = [1] − 2 × [2] + [3] ˆ = a2 δ 2 + (a + b − 1)2 (1 + σ 2 ) + b x ˜ x ˜ − a(a + 2b − 2)(1 + σ 2 )[1 − g(˜ − δ) + g(−˜ − δ)] x ˜ − [2aλ + 2(a + b − 1)(aδ − λ)] 1 + σ 2 ϕ(˜ − δ) x ˜ − [2aλ − 2(a + b − 1)(aδ + λ)] 1 + σ 2 ϕ(−˜ − δ) x ˜ x ˜ − (aδ − λ)(aδ + λ)[1 − Φ(˜ − δ) + Φ(−˜ − δ)].8.4 Supporting lemmas for inadmissibilityIn this section we first recall Stein’s Lemma. Then we prove two technicallemmas used in the proof of Theorem 5.Lemma 3. Let Z ∼ N (0, 1) and let g : R → R be an indefinite integral of theLebesgue measurable function g , essentially the derivative of g. If E(|g (Z)|) <∞ then E(g (Z)) = E(Zg(Z)).Proof. Stein (1981).Lemma 4. Let η ∼ N (0, Id ), b ∈ Rd , and let A > 0 and B > 0 be constants.Let A(b − η) Z =η+ . b−η 2+BThen 2 A(A + 4 − 2d) AB(A + 4) E( Z )=d+E −E b−η 2+B ( b − η 2 + B)2 A(A + 4 − 2d) <d+E . b−η 2+B 27
    • Proof. First, d 2 A2 b − η 2 ηk (bk − ηk ) E( Z )=d+E + 2A E . ( b − η 2 + B)2 b−η 2+B k=1Now define bk − ηk bk − η k g(ηk ) = = . b−η 2+B (bk − ηk )2 + b−k − η−k 2 +BBy Stein’s lemma (Lemma 3), we have ηk (bk − ηk ) 2(bk − ηk )2 1 E = E(g (ηk )) = E − b−η 2+B ( b−η 2 + B)2 b−η 2+Band thus 2 (4A + A2 ) b − η 2 2Ad E( Z )=d+E − ( b − η 2 + B)2 b−η 2+B (4A + A2 ) b − η 2 2Ad( b − η 2 + B) =d+E − ( b − η 2 + B)2 ( b − η 2 + B)2 2 (4A + A − 2Ad) (4A + A2 )B =d+E − , b−η 2+B ( b − η 2 + B)2after collecting terms.Lemma 5. For integer m 1, let Q ∼ χ2 , C > 1, D > 0 and put (m) Q(C − m−1 Q) Z= . Q+DThen (C − 1)m − 2 E(Z) . m+2+Dand so E(Z) > 0 whenever C > 1 + 2/m.Proof. The χ2 (m) density function is pm (x) = (2 m/2−1 Γ( m ))−1 xm/2−1 e−x/2 . 2Thus ∞ 1 x(C − m−1 x) m/2−1 −x/2 E(Z) = m/2 m x e dx 2 Γ( 2 ) 0 x+D ∞ 1 C − m−1 x (m+2)/2−1 −x/2 = x e dx 2m/2 Γ( m ) 2 0 x+D 2 m/2+1 Γ( 2 ) ∞ m+2 C − m−1 x = pm+2 (x) dx 2m/2 Γ( m ) 2 0 x+D ∞ −1 C −m x =m pm+2 (x) dx x+D 0 C − (m + 2)/m m m+2+Dby Jensen’s inequality. 28
    • 8.5 Proof of Theorem 5. σ2 σ2We prove this first for ωplug,h = ωplug , that is, taking h(ˆB ) = dˆB /n. We also ˆ ˆassume at first that ΣB = Σ. ˆ T T ˆ Note that βS = β + (XS XS )−1 XS εS and βB = β + γ + (XB XB )−1 XB εB . T TIt is convenient to define ηS = Σ1/2 (XS XS )−1 XS εS T T and ηB = Σ1/2 (XB XB )−1 XB εB . T T ˆ ˆThen we can rewrite βS = β + Σ−1/2 ηS and βB = β + γ + Σ−1/2 ηB . Similarly,we let ˆ YS − XS βS 2 ˆ YB − XB βB 2 ˆ2 σS = ˆ2 and σB = . n−d N −d ˆ2 ˆ2Now (ηS , ηB , σS , σB ) are mutually independent, with 2 2 σS σB ηS ∼ N 0, Id , ηB ∼ N 0, Id , n N 2 2 σS 2 σB ˆ2 σS ∼ χ , and ˆ2 σB ∼ χ2 . n − d (n−d) N − d (N −d) ˆ We easily find that E( X βS − Xβ 2 ) = dσS /n. Next we find ω and a bound 2 ˆ ˆ ω ) − Xβ 2 ).on E( X β(ˆ ˆ ˆ Let γ ∗ = Σ1/2 γ so that γ = βB − βS = Σ−1/2 (γ ∗ + ηB − ηS ). Then ˆ γ T Σˆ + dˆB /N ˆ γ σ2 ω = ωplug = ˆ ˆ σ2 σ2 γ T Σˆ + dˆB /N + dˆS /n ˆ γ γ ∗ + ηB − ηS 2 + dˆB /N σ2 = . σ2 ˆ2 γ ∗ + ηB − ηS 2 + d(ˆB /N + σS /n)Now we can express the mean squared error as ˆω E( X β(ˆ ) − Xβ 2 ) = E( XΣ−1/2 (ˆ ηS + (1 − ω )(γ ∗ + ηB )) 2 ) ω ˆ = E( ω ηS + (1 − ω )(γ ∗ + ηB ) 2 ) ˆ ˆ = E( ηS + (1 − ω )(γ ∗ + ηB − ηS ) 2 ) ˆ (γ ∗ + ηB − ηS )dˆS /n σ2 2 =E ηS + 2 /N + σ 2 /n) . γ ∗ + ηB − ηS 2 + d(ˆB σ ˆS To simplify the expression for mean squared error we introduce σ2 2 Q = mˆS /σS ∼ χ2(m) ∗ √ ηS = n ηS /σS ∼ N (0, Id ), √ b = n(γ ∗ + ηB )/σS , σ2 2 A = dˆS /σS = dQ/m, and B= σ2 nd(ˆB /N ˆ2 2 + σS /n)/σS = σ2 2 d((n/N )ˆB /σS + Q/m). 29
    • The quantities A and B are, after conditioning, the constants that appear intechnical Lemma 4. Similarly C and D introduced below match the constantsused in Lemma 5. With these substitutions and some algebra, 2 ∗ 2 ˆω 2 σS ∗ A(b − ηS ) E( X β(ˆ ) − Xβ )= E ηS + ∗ 2+B n b − ηS 2 ∗ 2 σS ∗ A(b − ηS ) = E E ηS + ∗ ˆ2 ˆ2 η B , σS , σB . n b − ηS 2 + BWe now apply two technical lemmas from Section 8.4. ∗ Since ηS is independent of (b, A, B) and Q ∼ χ2 , by Lemma 4, we have (m) ∗ 2 ∗ A(b − ηS ) A(A + 4 − 2d)E ηS + ∗ 2+B ˆ2 ˆ2 η B , σS , σB <d+E ∗ ˆ2 ˆ2 ηB , σS , σB . b − ηS b − ηS 2 + BHence ˆ ∆ ≡ E( X βS − Xβ 2 ˆω ) − E( X β(ˆ ) − Xβ 2 ) 2 σS A(2d − A − 4) = E ∗ n b − ηS 2 + B 2 σS (dQ/m)(2d − dQ/m − 4) = E ∗ n b − ηS 2 + (B − A) + dQ/m) 2 dσS Q(2 − Q/m − 4/d) = E ∗ n b− ηS 2 m/d + (B − A)m/d + Q dσ 2 Q(C − Q/m) = SE n Q+Dwhere C = 2 − 4/d and D = (m/d)( b − ηS 2 + dnN −1 σB /σS ). ∗ ˆ2 2 Now suppose that d 5. Then C 2 − 4/5 > 1 and so conditionally on ˆ2ηS , ηB , and σB , the requirements of Lemma 5 are satisfied by C, D and Q.Therefore 2 dσS m(1 − 4/d) − 2 ∆ E (27) n m+2+D ∗where the randomness in (27) is only through D which depends on ηS , ηB 2(through b) and σB . By Jensen’s inequality ˆ 2 dσS m(1 − 4/d) − 2 ∆> 0 (28) n m + 2 + E(D)whenever m(1 − 4/d) 2. The first inequality in (28) is strict because var(D) >0. Therefore ∆ > 0. The condition on m and d holds for any m 10 whend 5. 30
    • σ2 σ2 For the general plug-in ωplug,h we replace dˆB /N above by h(ˆB ). This ˆ 2 2quantity depends on σB and is independent of σS , ηB and ηS . It appears within ˆ ˆB where we need it to be non-negative in order to apply Lemma 4. It also ∗appears within D which becomes (m/d)( b − ηS 2 + nh(ˆB )/σS ). Even when σ2 2 2we take var(h(ˆB )) = 0 we still get var(D) > 0 and so the first inequality in (28) σis still strict. ˆ2 Now suppose that ΣB is not equal to Σ. The distributions of ηS , σS and σB ˆ2remain unchanged but now 2 σB 1/2 −1 1/2 ηB ∼ N 0, Σ ΣB Σ Nindependently of the others. The changed distribution of ηB does not affectthe application of Lemma 4 because that lemma is invoked conditionally on ηB .Similarly, Lemma 5 is applied conditionally on ηB . The changed distribution ofηB changes the distribution of D but we can still apply (28). 31