
Mathematical Theory and Modeling, ISSN 2224-5804 (Paper) ISSN 2225-0522 (Online), Vol.3, No.11, 2013, www.iiste.org

Large Sample Property of the Bayes Factor in a Spline Semiparametric Regression Model

Ameera Jaber Mohaisen and Ammar Muslim Abdulhussain
Mathematics Department, College of Education for Pure Science, AL-Basrah University, Iraq
E-mail: ammar.muslim1@yahoo.com

Abstract
In this paper, we consider a semiparametric regression model whose mean function has two parts: the parametric (first) part is assumed to be a linear function of p-dimensional covariates, and the nonparametric (second) part is assumed to be a smooth penalized spline. Using a convenient connection between penalized splines and mixed models, we represent the semiparametric regression model as a mixed model. In this model, we investigate the large sample property of the Bayes factor for testing the polynomial component of the spline model against the fully spline semiparametric alternative. Under some conditions on the prior and the design matrix, we identify the analytic form of the Bayes factor and show that the Bayes factor is consistent.
Keywords: Mixed Models, Semiparametric Regression Model, Penalized Spline, Bayesian Model, Marginal Distribution, Prior Distribution, Posterior Distribution, Bayes Factor, Consistency.

1. Introduction
Many applications in different fields require models for correlated data structures, for example multivariate observations, clustered data, repeated measurements, longitudinal data and spatially correlated data. Often random effects are used to describe the correlation structure in clustered data, repeated measurements and longitudinal data. Models with both fixed and random effects are called mixed models. The general form of a linear mixed model for the i-th subject (i = 1, ..., n) is given as follows (see [10,13,15]),

y_i = X_i β + Z_i u_i + ε_i ,   u_i ~ N(0, G_i),   ε_i ~ N(0, R_i),   (1)

where the vector y_i has length n_i, X_i and Z_i are, respectively, an n_i × p design matrix of fixed effects and an n_i × q_i design matrix of random effects, β is a p-vector of fixed effects and the u_i are q_i-vectors of random effects. The variance matrix G_i is q_i × q_i and R_i is n_i × n_i. We assume that the random effects { u_i ; i = 1, ..., n } and the set of error terms { ε_1, ..., ε_n } are independent. In matrix notation (see [13,15]),

y = X β + Z u + ε.   (2)

Here y = (y_1^T, ..., y_n^T)^T has length N = Σ_i n_i, X = (X_1^T, ..., X_n^T)^T is an N × p design matrix of fixed effects, Z = diag(Z_1, ..., Z_n) is an N × q block diagonal design matrix of random effects with q = Σ_i q_i, u = (u_1^T, ..., u_n^T)^T is the q-vector of random effects, R = diag(R_1, ..., R_n) is an N × N matrix and G = diag(G_1, ..., G_n) is a q × q block diagonal matrix.

In this paper, we consider a semiparametric regression model (see [1,6,8,9,10,13,15]) whose mean function has two parts: the parametric (first) part is assumed to be a linear function of p-dimensional covariates, and the nonparametric (second) part is assumed to be a smooth penalized spline. Using the convenient connection between penalized splines and mixed models, we represent the semiparametric regression model as a mixed model. In this model we investigate large sample properties of the Bayes factor for testing the pure polynomial component of the spline null model, whose mean function consists of only the polynomial component, against the fully spline semiparametric alternative model, whose mean function comprises both the pure polynomial component and the spline basis functions.
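To make the matrix form (2) concrete, here is a minimal Python sketch that simulates a linear mixed model of the form y = Xβ + Zu + ε; the dimensions, design matrices and variance components below are illustrative assumptions only and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (assumed) sizes and variance components.
n, p, q = 200, 3, 5
sigma_u, sigma_eps = 1.0, 0.5

X = rng.normal(size=(n, p))                 # fixed-effects design matrix
Z = rng.normal(size=(n, q))                 # random-effects design matrix
beta = np.array([1.0, -2.0, 0.5])           # fixed effects
u = rng.normal(scale=sigma_u, size=q)       # random effects, u ~ N(0, sigma_u^2 I)
eps = rng.normal(scale=sigma_eps, size=n)   # errors, eps ~ N(0, sigma_eps^2 I)

y = X @ beta + Z @ u + eps                  # mixed-model form (2): y = X beta + Z u + eps

# Marginally, y ~ N(X beta, Z G Z' + R) with G = sigma_u^2 I_q and R = sigma_eps^2 I_n.
V = sigma_u**2 * Z @ Z.T + sigma_eps**2 * np.eye(n)
print(y.shape, V.shape)
```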
The asymptotic properties of the Bayes factor in nonparametric or semiparametric models have been studied mainly in nonparametric density estimation problems related to goodness-of-fit testing; see [4,7,12,14] for such theoretical results. Compared with these density estimation approaches to goodness-of-fit testing and model selection, little work has been done on nonparametric regression problems. Choi et al. [3] studied semiparametric additive regression models as the encompassing model with algebraic smoothing, obtained the closed form of the Bayes factor and studied its asymptotic behavior based on that closed form. In this paper, we obtain the closed form and study the asymptotic behavior of the Bayes factor in a spline semiparametric regression model. We prove that the Bayes factor converges to infinity under the pure polynomial model and converges to zero almost surely under the spline semiparametric alternative, and hence that the Bayes factor is consistent.
2. Semiparametric and Penalized Spline
Consider the model

y_i = Σ_{j=1}^{p} β_j x_{ji} + f( x_{(p+1)i} ) + ε_i ,   i = 1, 2, ..., n,   (3)

where y_1, ..., y_n are the response variables and the unobserved errors ε_1, ..., ε_n are known to be i.i.d. normal with mean 0 and known variance σ_ε². The mean function of the regression model in (3) has two parts. The parametric (first) part is assumed to be a linear function of the p-dimensional covariates x_i = (x_{1i}, ..., x_{pi})^T, and the nonparametric (second) part f( x_{(p+1)i} ) is a function defined on some index set contained in the real line. If f is expressed as a smooth penalized spline of degree q, model (3) becomes

y_i = β_0 + Σ_{j=1}^{p} β_j x_{ji} + Σ_{j=1}^{q} β_{p+j} x_{(p+1)i}^{j} + Σ_{k=1}^{K} u_k ( x_{(p+1)i} − κ_k )_+^{q} + ε_i ,   (4)

where κ_1, ..., κ_K are inner knots with a < κ_1 < ... < κ_K < b. Using the convenient connection between penalized splines and mixed models, model (4) is rewritten as follows (see [13,15]):

y = X β + Z u + ε,   (5)

where y = (y_1, ..., y_n)^T, β = (β_0, β_1, ..., β_{p+q})^T, u = (u_1, ..., u_K)^T, X is the n × (p+q+1) matrix whose i-th row is ( 1, x_{1i}, ..., x_{pi}, x_{(p+1)i}, ..., x_{(p+1)i}^{q} ), and Z is the n × K matrix with (i, k)-th entry ( x_{(p+1)i} − κ_k )_+^{q}.

We assume n < K + p + q + 1 and n + m = K + p + q + 1, where m is an integer greater than 1, and we assume that the design matrix is orthogonal in the following way:

W_n^T W_n = n I_n   for all n ≥ 1,   (6)

where Z_n denotes the n × ( n − (p+q+1) ) matrix consisting of the first n − (p+q+1) columns of Z, W_n = ( X, Z_n ), and η = Xβ + Zu.

Based on the above setup for the regression model (5) and the assumption of the orthogonal design matrix (6), we consider a Bayesian model selection problem in a spline semiparametric regression model. Specifically, we would like to choose between a Bayesian spline semiparametric model and its pure polynomial counterpart by the criterion of the Bayes factor for the two hypotheses

H_0 : y = X β + ε   versus   H_1 : y = X β + Z u + ε.   (7)

As for the prior of β and u under H_1, we assume β and u are independent and

β ~ N( 0, Σ_β ),   Σ_β = σ_β² I_{p+q+1},   (8)
u ~ N( 0, Σ_u ),   Σ_u = σ_u² I_K.   (9)

3. Bayes factor and marginal distribution
The response y_i in (4) follows a normal distribution with mean η_i and variance σ_ε², where η_i = β_0 + Σ_{j=1}^{p} β_j x_{ji} + Σ_{j=1}^{q} β_{p+j} x_{(p+1)i}^{j} + Σ_{k=1}^{K} u_k ( x_{(p+1)i} − κ_k )_+^{q} for i = 1, 2, ..., n. Thus, given the covariates and η_n = (η_1, ..., η_n)^T, the n-dimensional response vector y_n = (y_1, ..., y_n)^T follows the n-dimensional normal distribution with mean η_n and covariance matrix σ_ε² I_n, where I_n is the n × n identity matrix. Also, from the prior distributions specified in (8) and (9), we can deduce that the distribution of η_n is multivariate normal with mean zero and n × n covariance matrix Σ_η = X Σ_β X^T + Z Σ_u Z^T. In summary, we have

y_n | η_n ~ N_n( η_n, σ_ε² I_n ),   η_n ~ N_n( 0, Σ_η ),   Σ_η = X Σ_β X^T + Z Σ_u Z^T.   (10)

Result 1: Suppose the distributions of η_n and y_n are given by (10). Then the posterior distribution P( η_n | y_n ) is multivariate normal with the following mean and variance:

E( η_n | y_n, σ_ε² ) = Σ_η ( Σ_η + σ_ε² I_n )^{-1} y_n ,
Var( η_n | y_n, σ_ε² ) = ( Σ_η^{-1} + (σ_ε² I_n)^{-1} )^{-1} = Σ_η ( Σ_η + σ_ε² I_n )^{-1} σ_ε² I_n .

Furthermore, marginally y_n follows

y_n ~ N_n( 0, σ_ε² I_n + Σ_η ).   (11)
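The following minimal sketch illustrates the penalized-spline design of (4)-(5), the marginal covariance in (11) and the posterior mean of Result 1. The covariate distribution, knot placement and variance values are assumptions chosen only for illustration, and no orthogonality of the design is imposed here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative (assumed) sizes, knots and variance components.
n, p, q, K = 100, 2, 2, 15
sigma_eps2, sigma_beta2, sigma_u2 = 1.0, 10.0, 1.0

x = rng.normal(size=(n, p))                              # the p covariates
t = rng.uniform(size=n)                                  # the (p+1)-th covariate
knots = np.quantile(t, np.linspace(0.05, 0.95, K))       # inner knots kappa_1 < ... < kappa_K

# X: n x (p+q+1), rows (1, x_1i, ..., x_pi, t_i, ..., t_i^q);  Z: n x K truncated power basis.
X = np.column_stack([np.ones(n), x] + [t**j for j in range(1, q + 1)])
Z = np.maximum(t[:, None] - knots[None, :], 0.0) ** q

Sigma_eta = sigma_beta2 * X @ X.T + sigma_u2 * Z @ Z.T   # Cov(eta_n) = X Sigma_beta X' + Z Sigma_u Z'
V = sigma_eps2 * np.eye(n) + Sigma_eta                   # marginal covariance of y_n in (11)

y = rng.multivariate_normal(np.zeros(n), V)              # one draw from the marginal of y_n
post_mean = Sigma_eta @ np.linalg.solve(V, y)            # Result 1: E(eta_n | y_n)
print(post_mean[:5])
```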
Note that the (i, j)-th element of Σ_η is

Σ_η(i, j) = σ_β² Σ_{k=1}^{p+q+1} x_{ki} x_{kj} + σ_u² Σ_{k=1}^{K} z_{ki} z_{kj} ,   i, j = 1, ..., n,   (12)

where x_{1i}, ..., x_{(p+q+1)i} denote the entries of the i-th row of X and z_{ki} = ( x_{(p+1)i} − κ_k )_+^{q}.

An application of the previous result yields that the marginal distributions of y_n under H_0 and H_1 are, respectively, N_n( 0, A_0 ) and N_n( 0, A_1 ), where

A_0 = σ_ε² I_n + X Σ_β X^T ,
A_1 = σ_ε² I_n + X Σ_β X^T + Z Σ_u Z^T .

Hence the Bayes factor for the testing problem (7) is given by

B_{01} = m( y_n | H_0 ) / m( y_n | H_1 )
       = [ det( 2π A_0 )^{-1/2} exp{ −(1/2) y_n^T A_0^{-1} y_n } ] / [ det( 2π A_1 )^{-1/2} exp{ −(1/2) y_n^T A_1^{-1} y_n } ]
       = sqrt{ det(A_1) / det(A_0) } exp{ −(1/2) y_n^T ( A_0^{-1} − A_1^{-1} ) y_n }.   (13)
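As a numerical illustration of the closed form (13), the sketch below evaluates log B_{01} for one simulated data set; all sizes, knots and variance values are assumptions made for the example, and log-determinants are used for numerical stability.

```python
import numpy as np

rng = np.random.default_rng(2)

n, p, q, K = 100, 2, 2, 15
sigma_eps2, sigma_beta2, sigma_u2 = 1.0, 10.0, 1.0

x = rng.normal(size=(n, p))
t = rng.uniform(size=n)
knots = np.quantile(t, np.linspace(0.05, 0.95, K))
X = np.column_stack([np.ones(n), x] + [t**j for j in range(1, q + 1)])
Z = np.maximum(t[:, None] - knots[None, :], 0.0) ** q

A0 = sigma_eps2 * np.eye(n) + sigma_beta2 * X @ X.T      # marginal covariance under H0
A1 = A0 + sigma_u2 * Z @ Z.T                             # marginal covariance under H1

y = rng.multivariate_normal(np.zeros(n), A0)             # data generated under H0

# (13): B01 = sqrt(det A1 / det A0) * exp{ -1/2 * y'(A0^{-1} - A1^{-1})y }
quad = y @ np.linalg.solve(A0, y) - y @ np.linalg.solve(A1, y)
log_B01 = 0.5 * (np.linalg.slogdet(A1)[1] - np.linalg.slogdet(A0)[1]) - 0.5 * quad
print("log B01 =", log_B01)
```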
4. Asymptotic behavior of Bayes factor
The Bayes factor is said to be consistent if (see [3,14])

lim_{n→∞} B_{01} = ∞ ,  a.s. P_0^n, under H_0,   (14)
lim_{n→∞} B_{01} = 0 ,  a.s. P_1^n, under H_1.   (15)

We investigate the consistency of the Bayes factor, by establishing (14) and (15), under the spline semiparametric regression model described in Section 2. Truncating the regression to its first n − (p+q+1) knot terms, we get

y_i = β_0 + Σ_{j=1}^{p} β_j x_{ji} + Σ_{j=1}^{q} β_{p+j} x_{(p+1)i}^{j} + Σ_{k=1}^{n−(p+q+1)} u_k ( x_{(p+1)i} − κ_k )_+^{q} + ε_i .

Now define A_{1,n} as the covariance matrix of the marginal distribution of y_n under H_1 in this truncated model:

A_{1,n} = σ_ε² I_n + X Σ_β X^T + Z_n Σ_{u,n} Z_n^T = σ_ε² I_n + W_n Λ_n W_n^T ,   (16)

where Σ_{u,n} is the ( n−(p+q+1) ) × ( n−(p+q+1) ) diagonal matrix with diagonal elements σ_u², and Λ_n = blockdiag( Σ_β, Σ_{u,n} ) is the block diagonal matrix with diagonal blocks Σ_β and Σ_{u,n} and zero off-diagonal blocks of sizes (p+q+1) × ( n−(p+q+1) ) and ( n−(p+q+1) ) × (p+q+1), respectively. Since W_n W_n^T = n I_n, we can rewrite (16) as

A_{1,n} = W_n { (σ_ε²/n) I_n + Λ_n } W_n^T .   (17)

Let A_0 be the covariance matrix of the marginal distribution of y_n under H_0; then A_0 can be represented as

A_0 = σ_ε² I_n + X Σ_β X^T = σ_ε² I_n + W_n Γ_n W_n^T ,   where   Γ_n = blockdiag( Σ_β, 0 ),

the lower-right zero block being of size ( n−(p+q+1) ) × ( n−(p+q+1) ). Hence we can rewrite A_0 as

A_0 = W_n { (σ_ε²/n) I_n + Γ_n } W_n^T .   (18)

The two matrices A_0 and A_{1,n} can then be inverted, and their determinants calculated, as shown in the following lemma.

Lemma 1: Let A_{1,n} and A_0 be defined as in (17) and (18), respectively, and suppose W_n satisfies (6). Then the following hold.
i) W_n^T W_n = W_n W_n^T = n I_n .
ii) det(A_0) = n^{p+q+1} ( σ_ε²/n + σ_β² )^{p+q+1} ( σ_ε² )^{n−(p+q+1)} .
iii) det(A_{1,n}) = n^{n} ( σ_ε²/n + σ_β² )^{p+q+1} ( σ_ε²/n + σ_u² )^{n−(p+q+1)} .
iv) A_0^{-1} = (1/n²) W_n { (σ_ε²/n) I_n + Γ_n }^{-1} W_n^T .
v) A_{1,n}^{-1} = (1/n²) W_n { (σ_ε²/n) I_n + Λ_n }^{-1} W_n^T .

Proof:
i) Multiplying equation (6) by W_n on the left and by W_n^T on the right gives W_n W_n^T W_n W_n^T = W_n ( n I_n ) W_n^T, that is, ( W_n W_n^T )( W_n W_n^T ) = n ( W_n W_n^T ). Multiplying this identity on the left by ( W_n W_n^T )^{-1} yields W_n W_n^T = n I_n, so W_n^T W_n = W_n W_n^T = n I_n.

ii) By (18), A_0 = W_n { (σ_ε²/n) I_n + Γ_n } W_n^T, where (σ_ε²/n) I_n + Γ_n is the diagonal matrix whose first p+q+1 diagonal entries equal σ_ε²/n + σ_β² and whose remaining n−(p+q+1) entries equal σ_ε²/n. Using (i),

det(A_0) = det( W_n W_n^T ) det( (σ_ε²/n) I_n + Γ_n ) = n^{n} ( σ_ε²/n + σ_β² )^{p+q+1} ( σ_ε²/n )^{n−(p+q+1)} = n^{p+q+1} ( σ_ε²/n + σ_β² )^{p+q+1} ( σ_ε² )^{n−(p+q+1)} .

iii) Similarly, by (17), (σ_ε²/n) I_n + Λ_n is diagonal with first p+q+1 entries σ_ε²/n + σ_β² and remaining entries σ_ε²/n + σ_u², so

det(A_{1,n}) = n^{n} ( σ_ε²/n + σ_β² )^{p+q+1} ( σ_ε²/n + σ_u² )^{n−(p+q+1)} .

iv) By (18), A_0^{-1} = { W_n ( (σ_ε²/n) I_n + Γ_n ) W_n^T }^{-1} = ( W_n^T )^{-1} { (σ_ε²/n) I_n + Γ_n }^{-1} W_n^{-1}. By Lemma 1(i), W_n^{-1} = (1/n) W_n^T and ( W_n^T )^{-1} = (1/n) W_n, hence

A_0^{-1} = (1/n²) W_n { (σ_ε²/n) I_n + Γ_n }^{-1} W_n^T .

v) In the same way, A_{1,n}^{-1} = { W_n ( (σ_ε²/n) I_n + Λ_n ) W_n^T }^{-1} = (1/n²) W_n { (σ_ε²/n) I_n + Λ_n }^{-1} W_n^T, again by Lemma 1(i).
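Lemma 1 can be checked numerically. The sketch below builds a synthetic design with W_n^T W_n = n I_n exactly, via a QR factorization, and compares the closed-form determinants and inverses with direct computation; the sizes and variance values are arbitrary assumptions for the check.

```python
import numpy as np

rng = np.random.default_rng(3)

n, p, q = 50, 2, 2
d = p + q + 1                      # number of fixed-effect columns
sigma_eps2, sigma_beta2, sigma_u2 = 1.0, 4.0, 2.0

# Exactly orthogonal design satisfying assumption (6): W'W = W W' = n I.
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
W = np.sqrt(n) * Q
X, Zn = W[:, :d], W[:, d:]

A0 = sigma_eps2 * np.eye(n) + sigma_beta2 * X @ X.T
A1n = A0 + sigma_u2 * Zn @ Zn.T

# Lemma 1 (ii)-(iii): closed-form log-determinants.
logdet_A0 = d * np.log(n) + d * np.log(sigma_eps2 / n + sigma_beta2) + (n - d) * np.log(sigma_eps2)
logdet_A1n = n * np.log(n) + d * np.log(sigma_eps2 / n + sigma_beta2) + (n - d) * np.log(sigma_eps2 / n + sigma_u2)
print(np.allclose(np.linalg.slogdet(A0)[1], logdet_A0))     # True
print(np.allclose(np.linalg.slogdet(A1n)[1], logdet_A1n))   # True

# Lemma 1 (iv)-(v): closed-form inverses.
Gamma = np.diag(np.r_[np.full(d, sigma_beta2), np.zeros(n - d)])
Lam = np.diag(np.r_[np.full(d, sigma_beta2), np.full(n - d, sigma_u2)])
inv_A0 = W @ np.linalg.inv(sigma_eps2 / n * np.eye(n) + Gamma) @ W.T / n**2
inv_A1n = W @ np.linalg.inv(sigma_eps2 / n * np.eye(n) + Lam) @ W.T / n**2
print(np.allclose(np.linalg.inv(A0), inv_A0))    # True
print(np.allclose(np.linalg.inv(A1n), inv_A1n))  # True
```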
Now let

B̃_{01} = sqrt{ det(A_{1,n}) / det(A_0) } exp{ −(1/2) y_n^T ( A_0^{-1} − A_{1,n}^{-1} ) y_n } ,

which is the approximation of B_{01} obtained by replacing A_1 with A_{1,n} in (13). Thus

2 log B̃_{01} = log{ det(A_{1,n}) / det(A_0) } − y_n^T ( A_0^{-1} − A_{1,n}^{-1} ) y_n .

By the results of Lemma 1,

log{ det(A_{1,n}) / det(A_0) } = log [ n^{n} ( σ_ε²/n + σ_β² )^{p+q+1} ( σ_ε²/n + σ_u² )^{n−(p+q+1)} / { n^{p+q+1} ( σ_ε²/n + σ_β² )^{p+q+1} ( σ_ε² )^{n−(p+q+1)} } ] = ( n−(p+q+1) ) log( 1 + n σ_u² / σ_ε² ) ,

and

y_n^T ( A_0^{-1} − A_{1,n}^{-1} ) y_n = y_n^T R_n y_n ,   where   R_n = (1/n²) W_n blockdiag( 0_{(p+q+1)×(p+q+1)} , R ) W_n^T

and R is the ( n−(p+q+1) ) × ( n−(p+q+1) ) diagonal matrix with diagonal entries n² σ_u² / { σ_ε² ( σ_ε² + n σ_u² ) }. Hence

2 log B̃_{01} = ( n−(p+q+1) ) log( 1 + n σ_u² / σ_ε² ) − y_n^T R_n y_n .   (19)

To establish the consistency of the Bayes factor we focus on B̃_{01} first; the remaining terms will be considered later. Without loss of generality we assume σ_ε² = 1 in the remainder of this paper. Now let

τ_{n,1} = Σ_{k=1}^{n−(p+q+1)} n σ_u² / ( 1 + n σ_u² ) = ( n−(p+q+1) ) n σ_u² / ( 1 + n σ_u² ) ,   (20)
τ_{n,2} = Σ_{k=1}^{n−(p+q+1)} log( 1 + n σ_u² ) = ( n−(p+q+1) ) log( 1 + n σ_u² ) ,   (21)
τ_{n,3} = Σ_{k=1}^{n−(p+q+1)} { n σ_u² / ( 1 + n σ_u² ) }² = ( n−(p+q+1) ) { n σ_u² / ( 1 + n σ_u² ) }² .   (22)

Lemma 2: Let y_n = (y_1, ..., y_n)^T, where the y_i are independent normal random variables with mean β_0 + Σ_{j=1}^{p} β_j x_{ji} + Σ_{j=1}^{q} β_{p+j} x_{(p+1)i}^{j} and variance σ_ε² = 1, that is, the data are generated under H_0. Then there exists a positive constant c_1 such that

(1/τ_{n,1}) { Σ_{k=1}^{n−(p+q+1)} log( 1 + n σ_u² ) − y_n^T R_n y_n } > c_1 ,   with probability tending to 1.

This implies that, under H_0, log B̃_{01} → ∞ in probability as n → ∞.

Proof: Let ε_n = (ε_1, ..., ε_n)^T, where the ε_i are independent standard normal random variables, so that y_n = Xβ + ε_n under H_0. Note that

y_n^T R_n y_n = ( ε_n + Xβ )^T R_n ( ε_n + Xβ ) = ε_n^T R_n ε_n + (Xβ)^T R_n (Xβ) + ε_n^T R_n Xβ + (Xβ)^T R_n ε_n .

By the orthogonality of the design matrix, (Xβ)^T W_n = ( β^T X^T X , β^T X^T Z_n ) = ( n β^T , 0 ), so that

(Xβ)^T R_n = (Xβ)^T (1/n²) W_n blockdiag( 0_{(p+q+1)×(p+q+1)} , R ) W_n^T = 0 .

Hence y_n^T R_n y_n = ε_n^T R_n ε_n. Using the expectation formula for quadratic forms, we obtain

E( ε_n^T R_n ε_n ) = tr( R_n ) = Σ_{k=1}^{n−(p+q+1)} n σ_u² / ( 1 + n σ_u² ) = τ_{n,1} ,

and, using the variance formula for quadratic forms of multivariate normal variables,

Var( ε_n^T R_n ε_n ) = 2 tr( R_n² ) = 2 Σ_{k=1}^{n−(p+q+1)} { n σ_u² / ( 1 + n σ_u² ) }² = 2 τ_{n,3} .

Now let P_n = Pr[ (1/τ_{n,1}) { τ_{n,2} − ε_n^T R_n ε_n } ≤ δ_n ] for constants δ_n to be chosen. Then

P_n = Pr[ ε_n^T R_n ε_n − E( ε_n^T R_n ε_n ) ≥ τ_{n,2} − τ_{n,1} − δ_n τ_{n,1} ] ≤ 2 τ_{n,3} / ( τ_{n,2} − τ_{n,1} − δ_n τ_{n,1} )² ,

where the last bound follows from the Chebyshev inequality. Since log( 1 + n σ_u² ) / { n σ_u² / ( 1 + n σ_u² ) } → ∞, we have τ_{n,2} − τ_{n,1} ≥ δ τ_{n,1} for some δ > 0 and all sufficiently large n. Taking δ_n = δ/2, the bound above is at most 2 τ_{n,3} / { (δ/2)² τ_{n,1}² } → 0, because τ_{n,3} ≤ τ_{n,1} and τ_{n,1} → ∞. Hence we conclude that there exists a positive constant c_1 = δ/2 such that

(1/τ_{n,1}) { Σ_{k=1}^{n−(p+q+1)} log( 1 + n σ_u² ) − y_n^T R_n y_n } > c_1 ,   with probability tending to 1.

Consequently, it follows that 2 log B̃_{01} = Σ_{k=1}^{n−(p+q+1)} log( 1 + n σ_u² ) − y_n^T R_n y_n → ∞ in probability under H_0, and hence log B̃_{01} → ∞ in probability under H_0.
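As a quick sanity check of the moment calculations in the proof of Lemma 2, the following sketch builds R_n from an exactly orthogonal synthetic design with σ_ε² = 1 and compares Monte Carlo estimates of the mean and variance of ε_n^T R_n ε_n with τ_{n,1} and 2 τ_{n,3}; the sizes and σ_u² are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

n, d = 60, 5                      # d = p + q + 1 (illustrative)
sigma_u2 = 0.8                    # assumed random-effect variance; sigma_eps^2 = 1

Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
W = np.sqrt(n) * Q                # orthogonal design: W'W = W W' = n I

r = n**2 * sigma_u2 / (1.0 + n * sigma_u2)          # diagonal entries of R (with sigma_eps^2 = 1)
D = np.diag(np.r_[np.zeros(d), np.full(n - d, r)])
Rn = W @ D @ W.T / n**2

tau1 = (n - d) * n * sigma_u2 / (1 + n * sigma_u2)
tau3 = (n - d) * (n * sigma_u2 / (1 + n * sigma_u2)) ** 2

eps = rng.normal(size=(20000, n))                   # Monte Carlo draws of eps_n
qf = np.einsum("ij,jk,ik->i", eps, Rn, eps)         # eps' Rn eps for each draw
print(qf.mean(), tau1)          # close to tau_{n,1}
print(qf.var(), 2 * tau3)       # close to 2 * tau_{n,3}
```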
Lemma 3: Suppose we have the penalized spline smoothing model above. Then

(1/τ_{n,1}) tr( A_{1,n}^{-1} − A_1^{-1} ) → 0   as n → ∞.

Proof: Since A_1 − A_{1,n} is the covariance matrix of the omitted spline part η*(x) = Σ_{k=n−(p+q+1)+1}^{K} u_k z_k(x), where z_k(x) = ( x − κ_k )_+^{q}, it is a nonnegative definite matrix, and therefore A_{1,n}^{-1} − A_1^{-1} = A_{1,n}^{-1} ( A_1 − A_{1,n} ) A_1^{-1} is also nonnegative definite and tr( A_{1,n}^{-1} ) ≥ tr( A_1^{-1} ). Writing A_1 − A_{1,n} = σ_u² Σ_{k=n−(p+q+1)+1}^{K} z_k z_k^T with z_k = ( z_k(x_{(p+1)1}), ..., z_k(x_{(p+1)n}) )^T, the Cauchy-Schwarz inequality gives

tr( A_{1,n}^{-1} − A_1^{-1} ) = σ_u² Σ_{k=n−(p+q+1)+1}^{K} z_k^T A_1^{-1} A_{1,n}^{-1} z_k ≤ σ_u² Σ_{k=n−(p+q+1)+1}^{K} || z_k ||² || A_1^{-1} || || A_{1,n}^{-1} || ≤ σ_u² Σ_{k=n−(p+q+1)+1}^{K} || z_k ||² || A_{1,n}^{-1} ||² ,

where || A_{1,n}^{-1} || = { tr( A_{1,n}^{-2} ) }^{1/2} and the last step uses A_1 ⪰ A_{1,n}. Note that

|| A_{1,n}^{-1} ||² = tr( A_{1,n}^{-2} ) = (p+q+1) / ( σ_ε² + n σ_β² )² + ( n−(p+q+1) ) / ( σ_ε² + n σ_u² )² = O( 1/n ) ,

so that || A_{1,n}^{-1} || = O( n^{-1/2} ). Since sup_{x∈[a,b]} | z_k(x) | ≤ 1 for the truncated power functions, it follows that || z_k ||² = Σ_{i=1}^{n} z_k( x_{(p+1)i} )² ≤ n. The number of omitted knot terms is K − ( n − (p+q+1) ) = m, so

(1/τ_{n,1}) tr( A_{1,n}^{-1} − A_1^{-1} ) ≤ (1/τ_{n,1}) σ_u² m n O( 1/n ) = O( 1/τ_{n,1} ) → 0   as n → ∞,

since τ_{n,1} → ∞.
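The vanishing trace ratio in Lemma 3 can be illustrated numerically. In the sketch below the omitted spline columns are mimicked by m extra columns with entries in [0, 1], so that A_1 = A_{1,n} + σ_u² times the sum of their outer products; this construction and all constants are assumptions made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

d, m = 5, 3                        # d = p + q + 1 columns of X, m omitted knot columns
sigma_beta2, sigma_u2 = 4.0, 1.0   # sigma_eps^2 = 1

for n in (50, 200, 800):
    Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
    W = np.sqrt(n) * Q                              # orthogonal design: W'W = n I
    X, Zn = W[:, :d], W[:, d:]

    A1n = np.eye(n) + sigma_beta2 * X @ X.T + sigma_u2 * Zn @ Zn.T
    Zom = rng.uniform(size=(n, m))                  # stand-ins for omitted columns, entries in [0, 1]
    A1 = A1n + sigma_u2 * Zom @ Zom.T

    tau1 = (n - d) * n * sigma_u2 / (1 + n * sigma_u2)
    ratio = np.trace(np.linalg.inv(A1n) - np.linalg.inv(A1)) / tau1
    print(n, ratio)                                 # shrinks toward 0 as n grows
```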
Theorem 1: Consider the testing problem in (7), suppose that the true model is H_0 (the pure polynomial model), and let P_0^n denote the true distribution of the whole data, with p.d.f. p_0( y_i ) = φ{ ( y_i − (Xβ)_i ) / σ_ε }, where φ(·) is the standard normal density. Then the Bayes factor is consistent under the null hypothesis H_0: lim_{n→∞} B_{01} = ∞ in P_0^n-probability.

Proof:
log B_{01} = (1/2) log{ det(A_1) / det(A_0) } − (1/2) y_n^T ( A_0^{-1} − A_1^{-1} ) y_n
           = (1/2) log{ det(A_{1,n}) / det(A_0) } + (1/2) log{ det(A_1) / det(A_{1,n}) } − (1/2) y_n^T ( A_0^{-1} − A_{1,n}^{-1} ) y_n − (1/2) y_n^T ( A_{1,n}^{-1} − A_1^{-1} ) y_n
           = log B̃_{01} + T_1 + T_2 ,

where T_1 = (1/2) log{ det(A_1) / det(A_{1,n}) } and T_2 = −(1/2) y_n^T ( A_{1,n}^{-1} − A_1^{-1} ) y_n.

Since A_1 − A_{1,n} is a nonnegative definite matrix, det(A_1) ≥ det(A_{1,n}); thus T_1 ≥ 0. Next,

E{ y_n^T ( A_{1,n}^{-1} − A_1^{-1} ) y_n } = σ_ε² tr( A_{1,n}^{-1} − A_1^{-1} ) + (Xβ)^T ( A_{1,n}^{-1} − A_1^{-1} ) (Xβ) .

Since A_{1,n}^{-1} − A_1^{-1}, A_{1,n}^{-1} and A_1^{-1} are all nonnegative definite matrices, it follows that 0 ≤ (Xβ)^T ( A_{1,n}^{-1} − A_1^{-1} ) (Xβ) ≤ (Xβ)^T A_{1,n}^{-1} (Xβ). By the orthogonality of the design matrix (6), (Xβ)^T W_n = ( n β^T , 0^T ), and thus, by Lemma 1(v),

(Xβ)^T A_{1,n}^{-1} (Xβ) = (1/n²) ( n β^T , 0^T ) { (σ_ε²/n) I_n + Λ_n }^{-1} ( n β^T , 0^T )^T = β^T β / ( σ_ε²/n + σ_β² ) = O(1) .

Let ξ > 0. By the Markov inequality (with r = 1) and Lemma 3, we have

Pr{ (1/τ_{n,1}) y_n^T ( A_{1,n}^{-1} − A_1^{-1} ) y_n > ξ } ≤ E{ y_n^T ( A_{1,n}^{-1} − A_1^{-1} ) y_n } / ( ξ τ_{n,1} ) = { σ_ε² tr( A_{1,n}^{-1} − A_1^{-1} ) + O(1) } / ( ξ τ_{n,1} ) → 0   as n → ∞.

Note that y_n^T ( A_{1,n}^{-1} − A_1^{-1} ) y_n is a nonnegative random variable, since A_{1,n}^{-1} − A_1^{-1} is nonnegative definite; therefore T_2 = o_p( τ_{n,1} ). The above results together with Lemma 2 imply that there exists a constant c > 0 such that

(1/τ_{n,1}) log B_{01} ≥ (1/τ_{n,1}) log B̃_{01} + o_p(1) > c + o_p(1) ,

with probability tending to 1. That is, B_{01} ≥ exp( τ_{n,1} { c + o_p(1) } ) → ∞ in P_0^n-probability. This completes the proof.
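To illustrate the behavior described in Theorem 1, the following sketch simulates data under H_0 for increasing n with an exactly orthogonal synthetic design and evaluates 2 log B̃_{01} from (19); the settings are assumptions for the example, and the printed values grow rapidly, in line with B_{01} → ∞.

```python
import numpy as np

rng = np.random.default_rng(6)

d = 5                 # d = p + q + 1 (illustrative)
sigma_u2 = 1.0        # assumed; sigma_eps^2 = 1

for n in (50, 200, 800):
    Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
    W = np.sqrt(n) * Q
    X = W[:, :d]

    beta = rng.normal(size=d)
    y = X @ beta + rng.normal(size=n)              # data generated under H0

    r = n**2 * sigma_u2 / (1 + n * sigma_u2)
    Rn = W @ np.diag(np.r_[np.zeros(d), np.full(n - d, r)]) @ W.T / n**2
    two_log_B = (n - d) * np.log(1 + n * sigma_u2) - y @ Rn @ y    # equation (19)
    print(n, two_log_B)                            # increases with n under H0
```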
Now we consider the asymptotic behavior of the Bayes factor under H_1, assuming that the true model for y_n is in H_1.

Lemma 4: Let y_n = (y_1, ..., y_n)^T, where the y_i are independent normal random variables with mean β_0 + Σ_{j=1}^{p} β_j x_{ji} + Σ_{j=1}^{q} β_{p+j} x_{(p+1)i}^{j} + Σ_{k=1}^{n−(p+q+1)} u_k z_{ki} and variance σ_ε² = 1, where z_{ki} = ( x_{(p+1)i} − κ_k )_+^{q}, and let λ_n = E(y_n)^T R_n E(y_n). Then

lim_{n→∞} y_n^T R_n y_n / λ_n = 1   in P_1^n-probability.

Proof: From the moment formulas for quadratic forms of normal random vectors, the expectation and variance of y_n^T R_n y_n are given by

E( y_n^T R_n y_n ) = tr( R_n ) + E(y_n)^T R_n E(y_n) = τ_{n,1} + λ_n ,
Var( y_n^T R_n y_n ) = 2 tr( R_n² ) + 4 E(y_n)^T R_n² E(y_n) = 2 τ_{n,3} + 4 E(y_n)^T R_n² E(y_n) .

Writing E(y_n) = Xβ + Z_n u and using the orthogonality of the design matrix, (Xβ)^T R_n = 0 as in the proof of Lemma 2, so that

λ_n = E(y_n)^T R_n E(y_n) = ( Z_n u )^T R_n ( Z_n u ) = { n² σ_u² / ( 1 + n σ_u² ) } Σ_{k=1}^{n−(p+q+1)} u_k² ,

and, by a similar calculation,

E(y_n)^T R_n² E(y_n) = (1/n) { n² σ_u² / ( 1 + n σ_u² ) }² Σ_{k=1}^{n−(p+q+1)} u_k² .

Since n σ_u² / ( 1 + n σ_u² ) → 1, λ_n is of exact order n Σ_k u_k², while τ_{n,1} ≤ n − (p+q+1). As the true model lies in H_1, the sum of the squared true spline coefficients Σ_{k=1}^{n−(p+q+1)} u_k² grows without bound, so τ_{n,1} = o( λ_n ); moreover,

Var( y_n^T R_n y_n ) / λ_n² = 2 τ_{n,3} / λ_n² + 4 / { n Σ_{k} u_k² } → 0 .

Let ξ > 0. By the Chebyshev inequality,

Pr{ | y_n^T R_n y_n / λ_n − E( y_n^T R_n y_n ) / λ_n | > ξ } ≤ Var( y_n^T R_n y_n ) / ( ξ² λ_n² ) → 0 ,

and E( y_n^T R_n y_n ) / λ_n = 1 + τ_{n,1} / λ_n → 1. Hence lim_{n→∞} y_n^T R_n y_n / λ_n = 1 in P_1^n-probability. This completes the proof.
Lemma 5: Suppose we have the penalized spline smoothing model above. Then

lim_{n→∞} (1/λ_n) log{ det(A_1) / det(A_{1,n}) } = 0 .

Proof: From a property of the determinant of a positive definite matrix (the Hadamard inequality),

det(A_1) ≤ Π_{i=1}^{n} { σ_ε² + σ_β² Σ_{k=1}^{p+q+1} x_{ki}² + σ_u² Σ_{k=1}^{K} z_{ki}² } = n^{n} Π_{i=1}^{n} { σ_ε²/n + (σ_β²/n) Σ_{k=1}^{p+q+1} x_{ki}² + (σ_u²/n) Σ_{k=1}^{K} z_{ki}² } ,

where x_{ki} and z_{ki} denote the k-th elements of the i-th rows of X and Z, respectively. From Lemma 1,

det(A_{1,n}) = n^{n} ( σ_ε²/n + σ_β² )^{p+q+1} ( σ_ε²/n + σ_u² )^{n−(p+q+1)} .

Since det(A_1) ≥ det(A_{1,n}), the quantity log{ det(A_1)/det(A_{1,n}) } is nonnegative, and

log{ det(A_1) / det(A_{1,n}) } ≤ Σ_{i=1}^{n} log{ σ_ε²/n + (σ_β²/n) Σ_k x_{ki}² + (σ_u²/n) Σ_k z_{ki}² } − (p+q+1) log( σ_ε²/n + σ_β² ) − ( n−(p+q+1) ) log( σ_ε²/n + σ_u² ) = O(n) ,

since the covariates and the truncated power functions are bounded, so each logarithmic term is O(1). Because λ_n grows faster than n under H_1 (see the proof of Lemma 4), it follows that (1/λ_n) log{ det(A_1)/det(A_{1,n}) } → 0. This completes the proof.

Theorem 2: Consider the testing problem in (7), suppose that the true model is in H_1 (the penalized spline semiparametric model), and let P_1^n denote the true distribution of the whole data, with p.d.f. p_1( y_i ) = φ{ ( y_i − ( Xβ + Z_n u )_i ) / σ_ε }, where φ(·) is the standard normal density. Then the Bayes factor is consistent under the alternative hypothesis H_1: lim_{n→∞} B_{01} = 0 in P_1^n-probability.

Proof: As in the proof of Theorem 1,

log B_{01} = (1/2) log{ det(A_1) / det(A_0) } − (1/2) y_n^T ( A_0^{-1} − A_1^{-1} ) y_n = log B̃_{01} + T_1 + T_2 ,

where

log B̃_{01} = (1/2) log{ det(A_{1,n}) / det(A_0) } − (1/2) y_n^T ( A_0^{-1} − A_{1,n}^{-1} ) y_n ,
T_1 = (1/2) log{ det(A_1) / det(A_{1,n}) } ,   T_2 = −(1/2) y_n^T ( A_{1,n}^{-1} − A_1^{-1} ) y_n .

First, by Lemma 1,

det(A_{1,n}) / det(A_0) = ( 1 + n σ_u² / σ_ε² )^{n−(p+q+1)} ,

so log{ det(A_{1,n}) / det(A_0) } = ( n−(p+q+1) ) log( 1 + n σ_u² / σ_ε² ) = O( n log n ), whereas, by the proof of Lemma 4, λ_n is of order n Σ_k u_k², which grows faster than n log n under H_1; hence (1/λ_n) log{ det(A_{1,n}) / det(A_0) } → 0. Since A_0^{-1} − A_{1,n}^{-1} = R_n, Lemma 4 gives (1/λ_n) y_n^T R_n y_n → 1 in P_1^n-probability, and therefore

(1/λ_n) log B̃_{01} = (1/(2λ_n)) log{ det(A_{1,n}) / det(A_0) } − (1/(2λ_n)) y_n^T R_n y_n → (1/2)( 0 − 1 ) = −1/2   in P_1^n-probability.

From Lemma 5, (1/λ_n) T_1 → 0. Finally, T_2 is a nonpositive random variable, since A_{1,n}^{-1} − A_1^{-1} is a nonnegative definite matrix. Therefore, by combining the above results, there exists a constant c_2 > 0 such that

lim sup_{n→∞} (1/λ_n) log B_{01} ≤ − c_2 / 2 ,   with probability tending to 1.

Since λ_n → ∞, it follows that B_{01} → 0 in P_1^n-probability. This completes the proof.
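The behavior described in Theorem 2 can be illustrated in the same assumed synthetic setting used above: with data generated under H_1, the quantity 2 log B̃_{01} from (19) becomes large and negative as n grows, in line with B_{01} → 0.

```python
import numpy as np

rng = np.random.default_rng(7)

d = 5                 # d = p + q + 1 (illustrative)
sigma_u2 = 1.0        # assumed; sigma_eps^2 = 1

for n in (50, 200, 800):
    Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
    W = np.sqrt(n) * Q
    X, Zn = W[:, :d], W[:, d:]

    beta = rng.normal(size=d)
    u = rng.normal(scale=np.sqrt(sigma_u2), size=n - d)
    y = X @ beta + Zn @ u + rng.normal(size=n)     # data generated under H1

    r = n**2 * sigma_u2 / (1 + n * sigma_u2)
    Rn = W @ np.diag(np.r_[np.zeros(d), np.full(n - d, r)]) @ W.T / n**2
    two_log_B = (n - d) * np.log(1 + n * sigma_u2) - y @ Rn @ y    # equation (19)
    print(n, two_log_B)                            # large and negative under H1
```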
5. Conclusion
1- The Bayes factor for the testing problem H_0 : y = Xβ + ε versus H_1 : y = Xβ + Zu + ε is given by

B_{01} = sqrt{ det(A_1) / det(A_0) } exp{ −(1/2) y_n^T ( A_0^{-1} − A_1^{-1} ) y_n } .

2- If y_n = (y_1, ..., y_n)^T, where the y_i are independent normal random variables with mean β_0 + Σ_{j=1}^{p} β_j x_{ji} + Σ_{j=1}^{q} β_{p+j} x_{(p+1)i}^{j} and variance σ_ε² = 1, then there exists a positive constant c_1 such that (1/τ_{n,1}) { Σ_{k=1}^{n−(p+q+1)} log( 1 + n σ_u² ) − y_n^T R_n y_n } > c_1 with probability tending to 1. This implies that, under H_0, log B̃_{01} → ∞ in probability.
3- If the true model is H_0 (the pure polynomial model), then the Bayes factor is consistent under the null hypothesis H_0. This implies that lim_{n→∞} B_{01} = ∞ in P_0^n-probability.
4- If y_n = (y_1, ..., y_n)^T, where the y_i are independent normal random variables with mean β_0 + Σ_{j=1}^{p} β_j x_{ji} + Σ_{j=1}^{q} β_{p+j} x_{(p+1)i}^{j} + Σ_{k=1}^{n−(p+q+1)} u_k z_{ki} and variance σ_ε² = 1, and if λ_n = E(y_n)^T R_n E(y_n), then lim_{n→∞} y_n^T R_n y_n / λ_n = 1 in P_1^n-probability.
5- If the true model is in H_1 (the penalized spline semiparametric model), then the Bayes factor is consistent under the alternative hypothesis H_1. This implies that lim_{n→∞} B_{01} = 0 in P_1^n-probability.

6. References
[1] Angers, J.-F. and Delampady, M. (2001). Bayesian nonparametric regression using wavelets. Sankhyā Ser. B 63, 287-308.
[2] Berger, J. O. (1985). Statistical Decision Theory and Bayesian Analysis. Springer, New York.
[3] Choi, T., Lee, J. and Roy, A. (2008). A note on the Bayes factor in a semiparametric regression model. Journal of Multivariate Analysis, 1316-1327.
[4] Ghosal, S., Lember, J. and van der Vaart, A. (2008). Nonparametric Bayesian model selection and averaging. Electronic Journal of Statistics 2, 63-89.
[5] Ghosh, J. K., Delampady, M. and Samanta, T. (2006). Introduction to Bayesian Analysis: Theory and Methods. Springer, New York.
[6] Ghosh, J. K. and Ramamoorthi, R. V. (2003). Bayesian Nonparametrics. Springer, New York.
[7] Verdinelli, I. and Wasserman, L. (1998). Bayesian goodness-of-fit testing using infinite-dimensional exponential families. Annals of Statistics 26(4), 1215-1241.
[8] Zhao, L. H. (2000). Bayesian aspects of some nonparametric problems. Annals of Statistics 28, 532-552.
[9] Lenk, P. J. (1999). Bayesian inference for semiparametric regression using a Fourier representation. Journal of the Royal Statistical Society, Series B 61(4), 863-879.
[10] Naito, K. (2002). Semiparametric regression with multiplicative adjustment. Communications in Statistics - Theory and Methods 31, 2289-2309.
[11] Pelenis, J. (2012). Bayesian Semiparametric Regression. Institute for Advanced Studies, Vienna, 1-39.
[12] McVinish, R., Rousseau, J. and Mengersen, K. (2008). Bayesian goodness-of-fit testing with mixtures of triangular distributions. Scandinavian Journal of Statistics, in press (doi:10.1111/j.1467-9469.2008.00620.x).
[13] Ruppert, D., Wand, M. P. and Carroll, R. J. (2003). Semiparametric Regression. Cambridge University Press, Cambridge.
[14] Dass, S. C. and Lee, J. (2004). A note on the consistency of Bayes factors for testing point null versus non-parametric alternatives. Journal of Statistical Planning and Inference 119, 143-152.
[15] Tarmaratram, K. (2011). Robust Estimation and Model Selection in Semiparametric Regression Models. Doctoral dissertation.