Utilitas Mathematica
ISSN 0315-3681, Volume 120, 2023
Asymptotic Method for the One-Way Repeated Measurements Model for Unbalanced Data
Sarah Abbas Abdul-Zahra¹, Abdul Hussein Saber AL-Mouel²
¹ Mathematics Department, College of Education for Pure Science, Basrah University, Iraq, sarhabbasabulzhra@gmail.com
² Mathematics Department, College of Education for Pure Science, Basrah University, Iraq, abdulhusseinsaber@yahoo.com
Abstract
In this paper, we consider the one-way repeated measurements model for unbalanced data, discuss methods for estimating the parameters of the model, and introduce some properties of the estimators and their distributions. We identify the analytic form of the likelihood-ratio test for the model and obtain asymptotic statements from the likelihood function.
Keywords: One-Way Repeated Measurements Model; Maximum Likelihood Method; Restricted Maximum Likelihood Method; Likelihood-Ratio Test; Asymptotics.
1. Introduction
Repeated measurements analysis is extensively utilized in a variety of sectors, including epidemiology, biomedical research, health and life sciences, and others. There is a large literature on univariate repeated measurements analysis of variance [4][7]. The expression "repeated measurements" refers to data collected when the response variable of each experimental unit is measured repeatedly, possibly under different experimental conditions [5]. Data come in two varieties: balanced and unbalanced. In a balanced repeated measurements design, the measurement occasions are the same for all experimental units, whereas in an unbalanced design some of the experimental units' measurement times differ [6]. Several studies have examined repeated measures models, particularly their analysis of variance and parameter estimation. Some recent research on estimators and analysis of variance for repeated measurements models is covered briefly in this historical review: Al-Mouel (2004) [1] examined multivariate repeated measures models, compared estimators, and discussed the results. AL-Mouel and Ali (2021) [3] studied the variance components of the model and used the maximum likelihood method to estimate them. AL-Mouel and Abd-Ali (2020) [2] studied the estimation of variance components for the repeated measurements model with a practical application.
In this paper, we estimate the parameters of our model by using the maximum likelihood method and the restricted maximum likelihood method, and we obtain asymptotic statements from the likelihood function.
2. One-Way Repeated Measurements Model
We consider the one-way repeated measurements model for unbalanced data (where the number of observations is unequal across levels). A mathematical expression for the model is presented and probability distributions for its components are discussed. The parameters of the model are estimated by various methods, and some properties of the estimators are given.
2.1 The Mathematical model
Consider the one-way repeated measurements model for unbalanced data,
$$\upsilon_{\ell\mathcal{L}\varkappa} = \vartheta + \xi_{\mathcal{L}} + \zeta_{\varkappa} + (\xi\zeta)_{\mathcal{L}\varkappa} + \varsigma_{\ell(\mathcal{L})} + e_{\ell\mathcal{L}\varkappa}, \qquad (2.1)$$
where
$\upsilon_{\ell\mathcal{L}\varkappa}$ is the response measurement at time $\varkappa$ for unit $\ell$ within group $\mathcal{L}$,
$\ell = 1,2,\dots,n_{\mathcal{L}}$ indexes the experimental units within group $\mathcal{L}$,
$\mathcal{L} = 1,2,\dots,q$ indexes the levels of the between-units factor (group),
$\varkappa = 1,2,\dots,p$ indexes the levels of the within-units factor (time),
$\vartheta$ is the overall mean, $\xi_{\mathcal{L}}$ is the added effect of treatment group $\mathcal{L}$,
$\zeta_{\varkappa}$ is the added effect of time $\varkappa$,
$(\xi\zeta)_{\mathcal{L}\varkappa}$ is the added effect of the group $\mathcal{L}$ × time $\varkappa$ interaction,
$\varsigma_{\ell(\mathcal{L})}$ is the random effect of experimental unit $\ell$ within treatment group $\mathcal{L}$,
$e_{\ell\mathcal{L}\varkappa}$ is the random error at time $\varkappa$ for unit $\ell$ within group $\mathcal{L}$,
under the following conditions on the added effects:
$$\sum_{\mathcal{L}=1}^{q}\xi_{\mathcal{L}} = 0,\quad \sum_{\varkappa=1}^{p}\zeta_{\varkappa} = 0,\quad \sum_{\mathcal{L}=1}^{q}(\xi\zeta)_{\mathcal{L}\varkappa} = 0\ \ \forall\,\varkappa = 1,2,\dots,p,$$
$$\sum_{\varkappa=1}^{p}(\xi\zeta)_{\mathcal{L}\varkappa} = 0\ \ \forall\,\mathcal{L} = 1,2,\dots,q, \quad\text{with } N = \sum_{\mathcal{L}} n_{\mathcal{L}} \text{ and } \sum_{\mathcal{L}=1}^{q} n_{\mathcal{L}}\xi_{\mathcal{L}} = 0. \qquad (2.2)$$
Considering that the $\varsigma_{\ell(\mathcal{L})}$'s and $e_{\ell\mathcal{L}\varkappa}$'s are independent and identically distributed with
$$e_{\ell\mathcal{L}\varkappa} \sim N(0,\sigma_e^2), \qquad \varsigma_{\ell(\mathcal{L})} \sim N(0,\sigma_\varsigma^2). \qquad (2.3)$$
We display the sum of squares terms as follows:
$$SS_{\xi} = p\sum_{\mathcal{L}=1}^{q} n_{\mathcal{L}}(\bar\upsilon_{\cdot\mathcal{L}\cdot}-\bar\upsilon_{\cdots})^2, \qquad SS_{\zeta} = N\sum_{\varkappa=1}^{p}(\bar\upsilon_{\cdot\cdot\varkappa}-\bar\upsilon_{\cdots})^2,$$
$$SS_{\varsigma} = p\sum_{\mathcal{L}=1}^{q}\sum_{\ell=1}^{n_{\mathcal{L}}}(\bar\upsilon_{\ell\mathcal{L}\cdot}-\bar\upsilon_{\cdot\mathcal{L}\cdot})^2, \qquad SS_{\xi\zeta} = \sum_{\mathcal{L}=1}^{q}\sum_{\varkappa=1}^{p} n_{\mathcal{L}}(\bar\upsilon_{\cdot\mathcal{L}\varkappa}-\bar\upsilon_{\cdot\mathcal{L}\cdot}-\bar\upsilon_{\cdot\cdot\varkappa}+\bar\upsilon_{\cdots})^2,$$
$$SS_{e} = \sum_{\mathcal{L}=1}^{q}\sum_{\ell=1}^{n_{\mathcal{L}}}\sum_{\varkappa=1}^{p}(\upsilon_{\ell\mathcal{L}\varkappa}-\bar\upsilon_{\cdot\mathcal{L}\varkappa}-\bar\upsilon_{\ell\mathcal{L}\cdot}+\bar\upsilon_{\cdot\mathcal{L}\cdot})^2, \qquad (2.4)$$
and
$$\bar\upsilon_{\cdots} = \frac{1}{Np}\sum_{\mathcal{L}=1}^{q}\sum_{\ell=1}^{n_{\mathcal{L}}}\sum_{\varkappa=1}^{p}\upsilon_{\ell\mathcal{L}\varkappa} \ \text{ is the overall mean,}$$
$$\bar\upsilon_{\cdot\mathcal{L}\cdot} = \frac{1}{n_{\mathcal{L}}p}\sum_{\ell=1}^{n_{\mathcal{L}}}\sum_{\varkappa=1}^{p}\upsilon_{\ell\mathcal{L}\varkappa},\ \forall\,\mathcal{L}=1,\dots,q, \ \text{ is the mean for group } \mathcal{L},$$
$$\bar\upsilon_{\cdot\mathcal{L}\varkappa} = \frac{1}{n_{\mathcal{L}}}\sum_{\ell=1}^{n_{\mathcal{L}}}\upsilon_{\ell\mathcal{L}\varkappa},\ \forall\,\mathcal{L}=1,\dots,q, \ \text{ is the mean for group } \mathcal{L} \text{ at time } \varkappa,$$
$$\bar\upsilon_{\cdot\cdot\varkappa} = \frac{1}{N}\sum_{\mathcal{L}=1}^{q}\sum_{\ell=1}^{n_{\mathcal{L}}}\upsilon_{\ell\mathcal{L}\varkappa} \ \text{ is the mean for time } \varkappa,$$
$$\bar\upsilon_{\ell\mathcal{L}\cdot} = \frac{1}{p}\sum_{\varkappa=1}^{p}\upsilon_{\ell\mathcal{L}\varkappa} \ \text{ is the mean for the } \ell\text{th subject in group } \mathcal{L}. \qquad (2.5)$$
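As a concrete illustration (not part of the original paper), the cell means in (2.5) and the sums of squares in (2.4) can be computed directly for unbalanced data. The simulated data, group sizes, and variable names below are our own assumptions; the final line checks the exact analysis-of-variance decomposition of the total sum of squares.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 3                                   # within-unit levels (time)
n = [2, 4]                              # n_L: units per group (unbalanced)
q, N = len(n), sum(n)
groups = [rng.normal(size=(nL, p)) for nL in n]   # one (n_L x p) array per group

grand = sum(g.sum() for g in groups) / (N * p)         # v-bar_...
group_mean = [g.mean() for g in groups]                # v-bar_.L.
cell_mean = [g.mean(axis=0) for g in groups]           # v-bar_.Lk
time_mean = sum(g.sum(axis=0) for g in groups) / N     # v-bar_..k
unit_mean = [g.mean(axis=1) for g in groups]           # v-bar_lL.

# Sums of squares (2.4)
SS_xi = p * sum(nL * (m - grand) ** 2 for nL, m in zip(n, group_mean))
SS_zeta = N * ((time_mean - grand) ** 2).sum()
SS_sigma = p * sum(((u - m) ** 2).sum() for u, m in zip(unit_mean, group_mean))
SS_int = sum(nL * ((c - m - time_mean + grand) ** 2).sum()
             for nL, m, c in zip(n, group_mean, cell_mean))
SS_e = sum(((g - c - u[:, None] + m) ** 2).sum()
           for g, c, u, m in zip(groups, cell_mean, unit_mean, group_mean))

# Total sum of squares about the grand mean; equals the sum of the five pieces.
SS_total = sum(((g - grand) ** 2).sum() for g in groups)
```

The decomposition `SS_total == SS_xi + SS_zeta + SS_sigma + SS_int + SS_e` holds exactly (up to rounding) because all cross terms vanish, even with unequal $n_{\mathcal{L}}$.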
2.2 Estimation
There are various methods available for estimating the model parameters from unbalanced data; we discuss some of these methods and their properties.
2.2.1. Maximum likelihood method (ML)
The joint density function of the $\upsilon_{\ell\mathcal{L}\varkappa}$ has the following form:
$$f\big(\upsilon;\vartheta,\xi_{\mathcal{L}},\zeta_{\varkappa},(\xi\zeta)_{\mathcal{L}\varkappa},\varsigma_{\ell(\mathcal{L})},\sigma_e^2,\sigma_\varsigma^2\big) = \big(2\pi(\sigma_e^2+\sigma_\varsigma^2)\big)^{-\frac{Np}{2}} \exp\left[\frac{-\sum_{\mathcal{L}=1}^{q}\sum_{\ell=1}^{n_{\mathcal{L}}}\sum_{\varkappa=1}^{p}\big(\upsilon_{\ell\mathcal{L}\varkappa}-\vartheta-\xi_{\mathcal{L}}-\zeta_{\varkappa}-(\xi\zeta)_{\mathcal{L}\varkappa}\big)^2}{2(\sigma_e^2+\sigma_\varsigma^2)}\right].$$
The likelihood function for model (2.1) is then
$$L\big(\vartheta,\xi_{\mathcal{L}},\zeta_{\varkappa},(\xi\zeta)_{\mathcal{L}\varkappa},\varsigma_{\ell(\mathcal{L})},\sigma_e^2,\sigma_\varsigma^2\big) \propto \big(2\pi(\sigma_e^2+\sigma_\varsigma^2)\big)^{-\frac{Np}{2}} \exp\left[\frac{-\sum_{\mathcal{L}=1}^{q}\sum_{\ell=1}^{n_{\mathcal{L}}}\sum_{\varkappa=1}^{p}\big(\upsilon_{\ell\mathcal{L}\varkappa}-\vartheta-\xi_{\mathcal{L}}-\zeta_{\varkappa}-(\xi\zeta)_{\mathcal{L}\varkappa}\big)^2}{2(\sigma_e^2+\sigma_\varsigma^2)}\right].$$
Then, taking the logarithm of both sides, we get
$$\ln L = \frac{-Np}{2}\ln(2\pi) - \frac{Np}{2}\ln(\sigma_e^2+\sigma_\varsigma^2) - \frac{\sum_{\mathcal{L}=1}^{q}\sum_{\ell=1}^{n_{\mathcal{L}}}\sum_{\varkappa=1}^{p}\big(\upsilon_{\ell\mathcal{L}\varkappa}-\vartheta-\xi_{\mathcal{L}}-\zeta_{\varkappa}-(\xi\zeta)_{\mathcal{L}\varkappa}\big)^2}{2(\sigma_e^2+\sigma_\varsigma^2)}. \qquad (2.6)$$
Differentiating (2.6) twice with respect to $\vartheta$ and setting the first derivative to zero gives
$$\frac{\partial \ln L}{\partial\vartheta} = \frac{\sum_{\mathcal{L}=1}^{q}\sum_{\ell=1}^{n_{\mathcal{L}}}\sum_{\varkappa=1}^{p}\big(\upsilon_{\ell\mathcal{L}\varkappa}-\vartheta-\xi_{\mathcal{L}}-\zeta_{\varkappa}-(\xi\zeta)_{\mathcal{L}\varkappa}\big)}{\sigma_e^2+\sigma_\varsigma^2} = 0,$$
$$\sum_{\mathcal{L}=1}^{q}\sum_{\ell=1}^{n_{\mathcal{L}}}\sum_{\varkappa=1}^{p}\upsilon_{\ell\mathcal{L}\varkappa} - Np\hat\vartheta - p\sum_{\mathcal{L}=1}^{q}n_{\mathcal{L}}\xi_{\mathcal{L}} - N\sum_{\varkappa=1}^{p}\zeta_{\varkappa} - \sum_{\mathcal{L}=1}^{q}\sum_{\varkappa=1}^{p}n_{\mathcal{L}}(\xi\zeta)_{\mathcal{L}\varkappa} = 0,$$
and, using the conditions (2.2),
$$Np\hat\vartheta = \sum_{\mathcal{L}=1}^{q}\sum_{\ell=1}^{n_{\mathcal{L}}}\sum_{\varkappa=1}^{p}\upsilon_{\ell\mathcal{L}\varkappa} \implies \hat\vartheta = \frac{1}{Np}\sum_{\mathcal{L}=1}^{q}\sum_{\ell=1}^{n_{\mathcal{L}}}\sum_{\varkappa=1}^{p}\upsilon_{\ell\mathcal{L}\varkappa},$$
$$\therefore\ \hat\vartheta = \bar\upsilon_{\cdots}, \qquad (2.7)$$
and
$$\frac{\partial^2\ln L}{\partial\vartheta^2} = -\frac{Np}{\sigma_e^2+\sigma_\varsigma^2}. \qquad (2.8)$$
We may now differentiate (2.6) twice with respect to $\xi_{\mathcal{L}}$ and set the first derivative equal to zero:
$$\frac{\partial \ln L}{\partial\xi_{\mathcal{L}}} = -2q\left[\frac{-\sum_{\ell=1}^{n_{\mathcal{L}}}\sum_{\varkappa=1}^{p}\big(\upsilon_{\ell\mathcal{L}\varkappa}-\vartheta-\xi_{\mathcal{L}}-\zeta_{\varkappa}-(\xi\zeta)_{\mathcal{L}\varkappa}\big)}{2(\sigma_e^2+\sigma_\varsigma^2)}\right] = 0,$$
$$\sum_{\ell=1}^{n_{\mathcal{L}}}\sum_{\varkappa=1}^{p}\upsilon_{\ell\mathcal{L}\varkappa} - n_{\mathcal{L}}p\hat\vartheta - n_{\mathcal{L}}p\hat\xi_{\mathcal{L}} - n_{\mathcal{L}}\sum_{\varkappa=1}^{p}\zeta_{\varkappa} - n_{\mathcal{L}}\sum_{\varkappa=1}^{p}(\xi\zeta)_{\mathcal{L}\varkappa} = 0,$$
$$\implies \hat\xi_{\mathcal{L}} = \frac{1}{n_{\mathcal{L}}p}\left[\sum_{\ell=1}^{n_{\mathcal{L}}}\sum_{\varkappa=1}^{p}\upsilon_{\ell\mathcal{L}\varkappa} - n_{\mathcal{L}}p\hat\vartheta\right] = \frac{1}{n_{\mathcal{L}}p}\sum_{\ell=1}^{n_{\mathcal{L}}}\sum_{\varkappa=1}^{p}\upsilon_{\ell\mathcal{L}\varkappa} - \hat\vartheta,$$
$$\therefore\ \hat\xi_{\mathcal{L}} = \bar\upsilon_{\cdot\mathcal{L}\cdot} - \bar\upsilon_{\cdots}, \qquad (2.9)$$
and
$$\frac{\partial^2\ln L}{\partial\xi_{\mathcal{L}}^2} = -\frac{n_{\mathcal{L}}pq}{\sigma_e^2+\sigma_\varsigma^2}. \qquad (2.10)$$
Differentiating (2.6) twice with respect to $\zeta_{\varkappa}$ and setting the first derivative to zero:
$$\frac{\partial \ln L}{\partial\zeta_{\varkappa}} = -2p\left[\frac{-\sum_{\mathcal{L}=1}^{q}\sum_{\ell=1}^{n_{\mathcal{L}}}\big(\upsilon_{\ell\mathcal{L}\varkappa}-\vartheta-\xi_{\mathcal{L}}-\zeta_{\varkappa}-(\xi\zeta)_{\mathcal{L}\varkappa}\big)}{2(\sigma_e^2+\sigma_\varsigma^2)}\right] = 0,$$
$$\sum_{\mathcal{L}=1}^{q}\sum_{\ell=1}^{n_{\mathcal{L}}}\upsilon_{\ell\mathcal{L}\varkappa} - N\hat\vartheta - \sum_{\mathcal{L}=1}^{q}n_{\mathcal{L}}\xi_{\mathcal{L}} - N\hat\zeta_{\varkappa} - \sum_{\mathcal{L}=1}^{q}n_{\mathcal{L}}(\xi\zeta)_{\mathcal{L}\varkappa} = 0,$$
$$\implies \hat\zeta_{\varkappa} = \frac{1}{N}\left[\sum_{\mathcal{L}=1}^{q}\sum_{\ell=1}^{n_{\mathcal{L}}}\upsilon_{\ell\mathcal{L}\varkappa} - N\hat\vartheta\right] = \frac{1}{N}\sum_{\mathcal{L}=1}^{q}\sum_{\ell=1}^{n_{\mathcal{L}}}\upsilon_{\ell\mathcal{L}\varkappa} - \hat\vartheta,$$
$$\implies \hat\zeta_{\varkappa} = \bar\upsilon_{\cdot\cdot\varkappa} - \bar\upsilon_{\cdots}, \qquad (2.11)$$
and
$$\frac{\partial^2\ln L}{\partial\zeta_{\varkappa}^2} = -\frac{pN}{\sigma_e^2+\sigma_\varsigma^2}. \qquad (2.12)$$
Differentiating (2.6) twice with respect to $(\xi\zeta)_{\mathcal{L}\varkappa}$ and setting the first derivative to zero:
$$\frac{\partial \ln L}{\partial(\xi\zeta)_{\mathcal{L}\varkappa}} = -2qp\left[\frac{-\sum_{\ell=1}^{n_{\mathcal{L}}}\big(\upsilon_{\ell\mathcal{L}\varkappa}-\vartheta-\xi_{\mathcal{L}}-\zeta_{\varkappa}-(\xi\zeta)_{\mathcal{L}\varkappa}\big)}{2(\sigma_e^2+\sigma_\varsigma^2)}\right] = 0,$$
$$\sum_{\ell=1}^{n_{\mathcal{L}}}\upsilon_{\ell\mathcal{L}\varkappa} - n_{\mathcal{L}}\hat\vartheta - n_{\mathcal{L}}\hat\xi_{\mathcal{L}} - n_{\mathcal{L}}\hat\zeta_{\varkappa} - n_{\mathcal{L}}\widehat{(\xi\zeta)}_{\mathcal{L}\varkappa} = 0 \implies \widehat{(\xi\zeta)}_{\mathcal{L}\varkappa} = \frac{1}{n_{\mathcal{L}}}\left[\sum_{\ell=1}^{n_{\mathcal{L}}}\upsilon_{\ell\mathcal{L}\varkappa} - n_{\mathcal{L}}\hat\vartheta - n_{\mathcal{L}}\hat\xi_{\mathcal{L}} - n_{\mathcal{L}}\hat\zeta_{\varkappa}\right],$$
$$\therefore\ \widehat{(\xi\zeta)}_{\mathcal{L}\varkappa} = \bar\upsilon_{\cdot\mathcal{L}\varkappa} - \bar\upsilon_{\cdot\mathcal{L}\cdot} - \bar\upsilon_{\cdot\cdot\varkappa} + \bar\upsilon_{\cdots}, \qquad (2.13)$$
and
$$\frac{\partial^2\ln L}{\partial(\xi\zeta)_{\mathcal{L}\varkappa}^2} = -\frac{qpn_{\mathcal{L}}}{\sigma_e^2+\sigma_\varsigma^2}. \qquad (2.14)$$
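The closed-form ML estimators (2.7), (2.9), (2.11), and (2.13) are simple deviations of cell means, so the fitted cell mean $\hat\vartheta+\hat\xi_{\mathcal{L}}+\hat\zeta_{\varkappa}+\widehat{(\xi\zeta)}_{\mathcal{L}\varkappa}$ telescopes back to $\bar\upsilon_{\cdot\mathcal{L}\varkappa}$. A minimal numerical sketch (the simulated data and all names are our own, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 3, [2, 4]                 # p times; unbalanced group sizes n_L
N = sum(n)
groups = [rng.normal(size=(nL, p)) for nL in n]

grand = sum(g.sum() for g in groups) / (N * p)       # v-bar_...
cell = np.array([g.mean(axis=0) for g in groups])    # q x p matrix of v-bar_.Lk
group = cell.mean(axis=1)                            # v-bar_.L.
time = np.array(n) @ cell / N                        # v-bar_..k (weighted by n_L)

theta_hat = grand                                    # (2.7)
xi_hat = group - grand                               # (2.9)
zeta_hat = time - grand                              # (2.11)
int_hat = cell - group[:, None] - time[None, :] + grand   # (2.13)

# The fitted cell means reproduce v-bar_.Lk exactly.
fitted = theta_hat + xi_hat[:, None] + zeta_hat[None, :] + int_hat
```

Note that the estimators automatically satisfy the unbalanced-data constraints in (2.2): $\sum_{\mathcal{L}} n_{\mathcal{L}}\hat\xi_{\mathcal{L}} = 0$ and $\sum_{\varkappa}\hat\zeta_{\varkappa} = 0$.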
Since $\hat\sigma_e^2 = \dfrac{SS_e}{(p-1)(N-q)}$, we have
$$\hat\sigma_e^2 = \frac{\sum_{\mathcal{L}=1}^{q}\sum_{\ell=1}^{n_{\mathcal{L}}}\sum_{\varkappa=1}^{p}\big(\upsilon_{\ell\mathcal{L}\varkappa}-\bar\upsilon_{\cdot\mathcal{L}\varkappa}-\bar\upsilon_{\ell\mathcal{L}\cdot}+\bar\upsilon_{\cdot\mathcal{L}\cdot}\big)^2}{(p-1)(N-q)}, \qquad \therefore\ \hat\sigma_e^2 = MS_e. \qquad (2.15)$$
Setting the derivative of (2.6) with respect to $\sigma_e^2$ to zero gives
$$\frac{\partial \ln L}{\partial\sigma_e^2} = \frac{-Np}{2}\left(\frac{1}{\sigma_\varsigma^2+\sigma_e^2}\right) + \frac{\sum_{\mathcal{L}=1}^{q}\sum_{\ell=1}^{n_{\mathcal{L}}}\sum_{\varkappa=1}^{p}\big(\upsilon_{\ell\mathcal{L}\varkappa}-\vartheta-\xi_{\mathcal{L}}-\zeta_{\varkappa}-(\xi\zeta)_{\mathcal{L}\varkappa}\big)^2}{2(\sigma_\varsigma^2+\sigma_e^2)^2} = 0,$$
$$-Np(\hat\sigma_\varsigma^2+\hat\sigma_e^2) + \sum_{\mathcal{L}=1}^{q}\sum_{\ell=1}^{n_{\mathcal{L}}}\sum_{\varkappa=1}^{p}\big(\upsilon_{\ell\mathcal{L}\varkappa}-\hat\vartheta-\hat\xi_{\mathcal{L}}-\hat\zeta_{\varkappa}-\widehat{(\xi\zeta)}_{\mathcal{L}\varkappa}\big)^2 = 0,$$
and
$$\frac{\partial^2\ln L}{\partial(\sigma_e^2)^2} = \frac{Np}{2(\sigma_\varsigma^2+\sigma_e^2)^2} - \frac{2\sum_{\mathcal{L}=1}^{q}\sum_{\ell=1}^{n_{\mathcal{L}}}\sum_{\varkappa=1}^{p}\big(\upsilon_{\ell\mathcal{L}\varkappa}-\vartheta-\xi_{\mathcal{L}}-\zeta_{\varkappa}-(\xi\zeta)_{\mathcal{L}\varkappa}\big)^2}{\big(2(\sigma_\varsigma^2+\sigma_e^2)\big)^3}. \qquad (2.16)$$
Differentiating (2.6) twice with respect to $\sigma_\varsigma^2$ and setting the first derivative to zero gives the same stationarity equation,
$$\frac{\partial \ln L}{\partial\sigma_\varsigma^2} = \frac{-Np}{2}\left(\frac{1}{\sigma_\varsigma^2+\sigma_e^2}\right) + \frac{\sum_{\mathcal{L}=1}^{q}\sum_{\ell=1}^{n_{\mathcal{L}}}\sum_{\varkappa=1}^{p}\big(\upsilon_{\ell\mathcal{L}\varkappa}-\vartheta-\xi_{\mathcal{L}}-\zeta_{\varkappa}-(\xi\zeta)_{\mathcal{L}\varkappa}\big)^2}{2(\sigma_\varsigma^2+\sigma_e^2)^2} = 0,$$
$$-Np(\hat\sigma_\varsigma^2+\hat\sigma_e^2) + \sum_{\mathcal{L}=1}^{q}\sum_{\ell=1}^{n_{\mathcal{L}}}\sum_{\varkappa=1}^{p}\big(\upsilon_{\ell\mathcal{L}\varkappa}-\hat\vartheta-\hat\xi_{\mathcal{L}}-\hat\zeta_{\varkappa}-\widehat{(\xi\zeta)}_{\mathcal{L}\varkappa}\big)^2 = 0,$$
$$\implies \hat\sigma_\varsigma^2 = \frac{\sum_{\mathcal{L}=1}^{q}\sum_{\ell=1}^{n_{\mathcal{L}}}\sum_{\varkappa=1}^{p}\big(\upsilon_{\ell\mathcal{L}\varkappa}-\hat\vartheta-\hat\xi_{\mathcal{L}}-\hat\zeta_{\varkappa}-\widehat{(\xi\zeta)}_{\mathcal{L}\varkappa}\big)^2}{Np} - \hat\sigma_e^2.$$
Since the fitted residual is $\upsilon_{\ell\mathcal{L}\varkappa}-\bar\upsilon_{\cdot\mathcal{L}\varkappa} = (\upsilon_{\ell\mathcal{L}\varkappa}-\bar\upsilon_{\cdot\mathcal{L}\varkappa}-\bar\upsilon_{\ell\mathcal{L}\cdot}+\bar\upsilon_{\cdot\mathcal{L}\cdot}) + (\bar\upsilon_{\ell\mathcal{L}\cdot}-\bar\upsilon_{\cdot\mathcal{L}\cdot})$ and the two parts are orthogonal, the numerator equals $SS_e+SS_\varsigma$, so
$$\hat\sigma_\varsigma^2 = \frac{SS_e+SS_\varsigma}{Np} - \frac{SS_e}{(p-1)(N-q)} = \frac{SS_\varsigma}{Np} + \frac{\big((p-1)(N-q)-Np\big)SS_e}{Np(p-1)(N-q)},$$
$$\therefore\ \hat\sigma_\varsigma^2 = \frac{SS_\varsigma}{Np} + \frac{(p-1)(N-q)-Np}{Np}\,MS_e, \qquad (2.17)$$
and
$$\frac{\partial^2\ln L}{\partial(\sigma_\varsigma^2)^2} = \frac{Np}{2(\sigma_\varsigma^2+\sigma_e^2)^2} - \frac{2\sum_{\mathcal{L}=1}^{q}\sum_{\ell=1}^{n_{\mathcal{L}}}\sum_{\varkappa=1}^{p}\big(\upsilon_{\ell\mathcal{L}\varkappa}-\vartheta-\xi_{\mathcal{L}}-\zeta_{\varkappa}-(\xi\zeta)_{\mathcal{L}\varkappa}\big)^2}{\big(2(\sigma_\varsigma^2+\sigma_e^2)\big)^3}. \qquad (2.18)$$
The result is also obtained by differentiating (2.6) with respect to $\sigma_\varsigma^2+\sigma_e^2$ and setting the derivative to zero:
$$\frac{\partial\ln L}{\partial(\sigma_\varsigma^2+\sigma_e^2)} = \frac{-Np}{2}\left(\frac{1}{\sigma_\varsigma^2+\sigma_e^2}\right) + \frac{\sum_{\mathcal{L}=1}^{q}\sum_{\ell=1}^{n_{\mathcal{L}}}\sum_{\varkappa=1}^{p}\big(\upsilon_{\ell\mathcal{L}\varkappa}-\vartheta-\xi_{\mathcal{L}}-\zeta_{\varkappa}-(\xi\zeta)_{\mathcal{L}\varkappa}\big)^2}{2(\sigma_\varsigma^2+\sigma_e^2)^2} = 0,$$
$$-Np(\hat\sigma_\varsigma^2+\hat\sigma_e^2) + \sum_{\mathcal{L}=1}^{q}\sum_{\ell=1}^{n_{\mathcal{L}}}\sum_{\varkappa=1}^{p}\big(\upsilon_{\ell\mathcal{L}\varkappa}-\hat\vartheta-\hat\xi_{\mathcal{L}}-\hat\zeta_{\varkappa}-\widehat{(\xi\zeta)}_{\mathcal{L}\varkappa}\big)^2 = 0,$$
$$\hat\sigma_\varsigma^2+\hat\sigma_e^2 = \frac{\sum_{\mathcal{L}=1}^{q}\sum_{\ell=1}^{n_{\mathcal{L}}}\sum_{\varkappa=1}^{p}\big(\upsilon_{\ell\mathcal{L}\varkappa}-\bar\upsilon_{\cdot\mathcal{L}\varkappa}\big)^2}{Np}. \qquad (2.19)$$
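The residual sum of squares in (2.19) splits exactly as $\sum(\upsilon_{\ell\mathcal{L}\varkappa}-\bar\upsilon_{\cdot\mathcal{L}\varkappa})^2 = SS_e + SS_\varsigma$, because the within-cell residual is orthogonal to the unit deviation. A quick numerical check of this identity (simulated data and names are our own assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
p, n = 4, [3, 2, 5]              # unbalanced group sizes
groups = [rng.normal(size=(nL, p)) for nL in n]

lhs = ss_e = ss_s = 0.0
for g in groups:
    cell = g.mean(axis=0)        # v-bar_.Lk
    unit = g.mean(axis=1)        # v-bar_lL.
    grp = g.mean()               # v-bar_.L.
    lhs += ((g - cell) ** 2).sum()                            # left side of (2.19) numerator
    ss_e += ((g - cell - unit[:, None] + grp) ** 2).sum()     # SS_e term
    ss_s += p * ((unit - grp) ** 2).sum()                     # SS_sigma term
```

The cross terms vanish because, for each fixed unit, the within-cell residual sums to zero over $\varkappa$.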
Now we can write the second derivatives with respect to the parameters as
$$\mathcal{J} = -\left.\frac{\partial^2\ln L}{\partial\theta^2}\right|_{\theta=\hat\theta}, \quad\text{where } \theta\in\{\vartheta,\xi_{\mathcal{L}},\zeta_{\varkappa},(\xi\zeta)_{\mathcal{L}\varkappa},\sigma_\varsigma^2,\sigma_e^2\}:$$
$$\mathcal{J}_1 = \frac{\partial^2\ln L}{\partial\vartheta^2} = -\frac{Np}{\sigma_e^2+\sigma_\varsigma^2}, \qquad \mathcal{J}_2 = \frac{\partial^2\ln L}{\partial\xi_{\mathcal{L}}^2} = -\frac{n_{\mathcal{L}}pq}{\sigma_e^2+\sigma_\varsigma^2},$$
$$\mathcal{J}_3 = \frac{\partial^2\ln L}{\partial\zeta_{\varkappa}^2} = -\frac{pN}{\sigma_e^2+\sigma_\varsigma^2}, \qquad \mathcal{J}_4 = \frac{\partial^2\ln L}{\partial(\xi\zeta)_{\mathcal{L}\varkappa}^2} = -\frac{qpn_{\mathcal{L}}}{\sigma_e^2+\sigma_\varsigma^2},$$
$$\mathcal{J}_5 = \frac{\partial^2\ln L}{\partial(\sigma_\varsigma^2)^2} = \frac{Np}{2(\sigma_\varsigma^2+\sigma_e^2)^2} - \frac{2\sum_{\mathcal{L},\ell,\varkappa}\big(\upsilon_{\ell\mathcal{L}\varkappa}-\vartheta-\xi_{\mathcal{L}}-\zeta_{\varkappa}-(\xi\zeta)_{\mathcal{L}\varkappa}\big)^2}{\big(2(\sigma_\varsigma^2+\sigma_e^2)\big)^3},$$
$$\mathcal{J}_6 = \frac{\partial^2\ln L}{\partial(\sigma_e^2)^2} = \frac{Np}{2(\sigma_\varsigma^2+\sigma_e^2)^2} - \frac{2\sum_{\mathcal{L},\ell,\varkappa}\big(\upsilon_{\ell\mathcal{L}\varkappa}-\vartheta-\xi_{\mathcal{L}}-\zeta_{\varkappa}-(\xi\zeta)_{\mathcal{L}\varkappa}\big)^2}{\big(2(\sigma_\varsigma^2+\sigma_e^2)\big)^3}, \qquad (2.20)$$
where $\sum_{\mathcal{L},\ell,\varkappa}$ abbreviates the triple sum over $\mathcal{L}=1,\dots,q$, $\ell=1,\dots,n_{\mathcal{L}}$, $\varkappa=1,\dots,p$,
and
$$S(\theta) = \frac{\partial\ln L}{\partial\theta}, \quad\text{where } \theta\in\{\vartheta,\xi_{\mathcal{L}},\zeta_{\varkappa},(\xi\zeta)_{\mathcal{L}\varkappa},\sigma_\varsigma^2,\sigma_e^2\};\ \text{then we have}$$
$$S(\vartheta) = \frac{\sum_{\mathcal{L},\ell,\varkappa}\upsilon_{\ell\mathcal{L}\varkappa} - Np\hat\vartheta}{\sigma_e^2+\sigma_\varsigma^2}, \qquad S(\xi_{\mathcal{L}}) = \frac{\sum_{\ell}\sum_{\varkappa}\upsilon_{\ell\mathcal{L}\varkappa} - n_{\mathcal{L}}p\hat\vartheta - n_{\mathcal{L}}p\hat\xi_{\mathcal{L}}}{\sigma_e^2+\sigma_\varsigma^2},$$
$$S(\zeta_{\varkappa}) = \frac{p\sum_{\mathcal{L}}\sum_{\ell}\upsilon_{\ell\mathcal{L}\varkappa} - pN\hat\vartheta - pN\hat\zeta_{\varkappa}}{\sigma_e^2+\sigma_\varsigma^2}, \qquad S\big((\xi\zeta)_{\mathcal{L}\varkappa}\big) = \frac{qp\big(\sum_{\ell}\upsilon_{\ell\mathcal{L}\varkappa} - n_{\mathcal{L}}\hat\vartheta - n_{\mathcal{L}}\hat\xi_{\mathcal{L}} - n_{\mathcal{L}}\hat\zeta_{\varkappa} - n_{\mathcal{L}}\widehat{(\xi\zeta)}_{\mathcal{L}\varkappa}\big)}{\sigma_e^2+\sigma_\varsigma^2},$$
$$S(\sigma_\varsigma^2) = S(\sigma_e^2) = \frac{-Np}{2}\left(\frac{1}{\sigma_\varsigma^2+\sigma_e^2}\right) + \frac{2\sum_{\mathcal{L},\ell,\varkappa}\big(\upsilon_{\ell\mathcal{L}\varkappa}-\vartheta-\xi_{\mathcal{L}}-\zeta_{\varkappa}-(\xi\zeta)_{\mathcal{L}\varkappa}\big)^2}{\big(2(\sigma_\varsigma^2+\sigma_e^2)\big)^2}. \qquad (2.21)$$
2.2.2. Restricted Maximum Likelihood Method (RMLM)
The restricted maximum likelihood estimators of $\vartheta,\xi_{\mathcal{L}},\zeta_{\varkappa},(\xi\zeta)_{\mathcal{L}\varkappa},\varsigma_{\ell(\mathcal{L})},\sigma_e^2$, and $\sigma_\varsigma^2$ are determined by the parameter values that maximize the function
$$L \propto (\sigma_e^2)^{-\frac{(p-1)(N-q)}{2}}(\sigma_e^2+p\sigma_\varsigma^2)^{-\frac{N-q}{2}} \exp\left\{-\frac{1}{2}\left[\frac{SS_e}{\sigma_e^2} + \frac{SS_\varsigma}{\sigma_e^2+p\sigma_\varsigma^2} + \frac{Np(\bar\upsilon_{\cdots}-\vartheta)^2}{\sigma_e^2+p\sigma_\varsigma^2} + \frac{Np(\bar\upsilon_{\cdot\mathcal{L}\cdot}-\bar\upsilon_{\cdots}-\xi_{\mathcal{L}})^2}{\sigma_e^2+p\sigma_\varsigma^2} + \frac{Np(\bar\upsilon_{\cdot\cdot\varkappa}-\bar\upsilon_{\cdots}-\zeta_{\varkappa})^2}{\sigma_e^2+p\sigma_\varsigma^2} + \frac{Np(\bar\upsilon_{\cdot\mathcal{L}\varkappa}+\bar\upsilon_{\cdots}-\bar\upsilon_{\cdot\mathcal{L}\cdot}-\bar\upsilon_{\cdot\cdot\varkappa}-(\xi\zeta)_{\mathcal{L}\varkappa})^2}{\sigma_e^2+p\sigma_\varsigma^2}\right]\right\}. \qquad (2.2.1)$$
We know that the likelihood factors, up to a constant, into a part depending on the means and a part depending on the sums of squares,
$$L\big(\vartheta,\xi_{\mathcal{L}},\zeta_{\varkappa},(\xi\zeta)_{\mathcal{L}\varkappa},\varsigma_{\ell(\mathcal{L})},\sigma_e^2,\sigma_\varsigma^2\big) \propto L_1\big(\bar\upsilon_{\cdots};\vartheta,\xi_{\mathcal{L}},\zeta_{\varkappa},(\xi\zeta)_{\mathcal{L}\varkappa}\big)\,L_2\big(SS_e,SS_\varsigma;\sigma_e^2,\sigma_\varsigma^2\big), \qquad (2.2.2)$$
where
$$L_1 = (\sigma_e^2+p\sigma_\varsigma^2)^{-\frac{N-q}{2}} \exp\left\{-\frac{1}{2}\left[\frac{Np(\bar\upsilon_{\cdots}-\vartheta)^2}{\sigma_e^2+p\sigma_\varsigma^2} + \frac{Np(\bar\upsilon_{\cdot\mathcal{L}\cdot}-\bar\upsilon_{\cdots}-\xi_{\mathcal{L}})^2}{\sigma_e^2+p\sigma_\varsigma^2} + \frac{Np(\bar\upsilon_{\cdot\cdot\varkappa}-\bar\upsilon_{\cdots}-\zeta_{\varkappa})^2}{\sigma_e^2+p\sigma_\varsigma^2} + \frac{Np(\bar\upsilon_{\cdot\mathcal{L}\varkappa}+\bar\upsilon_{\cdots}-\bar\upsilon_{\cdot\mathcal{L}\cdot}-\bar\upsilon_{\cdot\cdot\varkappa}-(\xi\zeta)_{\mathcal{L}\varkappa})^2}{\sigma_e^2+p\sigma_\varsigma^2}\right]\right\} \qquad (2.2.3)$$
and
$$L_2\big(SS_e,SS_\varsigma;\sigma_e^2,\sigma_\varsigma^2\big) = (\sigma_e^2)^{-\frac{(p-1)(N-q)}{2}}(\sigma_e^2+p\sigma_\varsigma^2)^{-\frac{N-q}{2}} \exp\left\{-\frac{1}{2}\left[\frac{SS_e}{\sigma_e^2} + \frac{SS_\varsigma}{\sigma_e^2+p\sigma_\varsigma^2}\right]\right\}. \qquad (2.2.4)$$
Then, taking the logarithm of (2.2.3), we get
$$\ln L_1 = \frac{-(N-q)}{2}\ln(\sigma_e^2+p\sigma_\varsigma^2) - \frac{1}{2}\left[\frac{Np(\bar\upsilon_{\cdots}-\vartheta)^2}{\sigma_e^2+p\sigma_\varsigma^2} + \frac{Np(\bar\upsilon_{\cdot\mathcal{L}\cdot}-\bar\upsilon_{\cdots}-\xi_{\mathcal{L}})^2}{\sigma_e^2+p\sigma_\varsigma^2} + \frac{Np(\bar\upsilon_{\cdot\cdot\varkappa}-\bar\upsilon_{\cdots}-\zeta_{\varkappa})^2}{\sigma_e^2+p\sigma_\varsigma^2} + \frac{Np(\bar\upsilon_{\cdot\mathcal{L}\varkappa}+\bar\upsilon_{\cdots}-\bar\upsilon_{\cdot\mathcal{L}\cdot}-\bar\upsilon_{\cdot\cdot\varkappa}-(\xi\zeta)_{\mathcal{L}\varkappa})^2}{\sigma_e^2+p\sigma_\varsigma^2}\right]. \qquad (2.2.5)$$
Differentiating (2.2.5) with respect to each parameter and setting the derivatives to zero gives
$$\frac{\partial\ln L_1}{\partial\vartheta} = \frac{Np(\bar\upsilon_{\cdots}-\vartheta)}{\sigma_e^2+p\sigma_\varsigma^2} = 0 \implies \hat\vartheta = \bar\upsilon_{\cdots}, \qquad (2.2.6)$$
$$\frac{\partial\ln L_1}{\partial\xi_{\mathcal{L}}} = \frac{Np(\bar\upsilon_{\cdot\mathcal{L}\cdot}-\bar\upsilon_{\cdots}-\xi_{\mathcal{L}})}{\sigma_e^2+p\sigma_\varsigma^2} = 0 \implies \hat\xi_{\mathcal{L}} = \bar\upsilon_{\cdot\mathcal{L}\cdot}-\bar\upsilon_{\cdots}, \qquad (2.2.7)$$
$$\frac{\partial\ln L_1}{\partial\zeta_{\varkappa}} = \frac{Np(\bar\upsilon_{\cdot\cdot\varkappa}-\bar\upsilon_{\cdots}-\zeta_{\varkappa})}{\sigma_e^2+p\sigma_\varsigma^2} = 0 \implies \hat\zeta_{\varkappa} = \bar\upsilon_{\cdot\cdot\varkappa}-\bar\upsilon_{\cdots}, \qquad (2.2.8)$$
$$\frac{\partial\ln L_1}{\partial(\xi\zeta)_{\mathcal{L}\varkappa}} = \frac{Np\big(\bar\upsilon_{\cdot\mathcal{L}\varkappa}+\bar\upsilon_{\cdots}-\bar\upsilon_{\cdot\mathcal{L}\cdot}-\bar\upsilon_{\cdot\cdot\varkappa}-(\xi\zeta)_{\mathcal{L}\varkappa}\big)}{\sigma_e^2+p\sigma_\varsigma^2} = 0 \implies \widehat{(\xi\zeta)}_{\mathcal{L}\varkappa} = \bar\upsilon_{\cdot\mathcal{L}\varkappa}+\bar\upsilon_{\cdots}-\bar\upsilon_{\cdot\mathcal{L}\cdot}-\bar\upsilon_{\cdot\cdot\varkappa}. \qquad (2.2.9)$$
Similarly,
$$\ln L_2 = \frac{-(p-1)(N-q)}{2}\ln(\sigma_e^2) - \frac{N-q}{2}\ln(\sigma_e^2+p\sigma_\varsigma^2) - \frac{1}{2}\left[\frac{SS_e}{\sigma_e^2} + \frac{SS_\varsigma}{\sigma_e^2+p\sigma_\varsigma^2}\right]. \qquad (2.2.10)$$
Let $\gamma = \sigma_e^2+p\sigma_\varsigma^2$; then
$$\ln L_2\big(SS_e,SS_\varsigma;\gamma\big) = \frac{-(p-1)(N-q)}{2}\ln(\sigma_e^2) - \frac{N-q}{2}\ln\gamma - \frac{1}{2}\left[\frac{SS_e}{\sigma_e^2} + \frac{SS_\varsigma}{\gamma}\right], \qquad (2.2.11)$$
subject to the restriction $\gamma \ge \sigma_e^2 > 0$. Differentiating (2.2.11) and setting the derivatives to zero, we obtain
$$\frac{\partial\ln L_2}{\partial\sigma_e^2} = \frac{-(p-1)(N-q)}{2\sigma_e^2} + \frac{SS_e}{2\sigma_e^4} = 0 \implies \hat\sigma_e^2 = \frac{SS_e}{(p-1)(N-q)} = MS_e, \qquad (2.2.12)$$
$$\frac{\partial\ln L_2}{\partial\gamma} = -\frac{N-q}{2\gamma} + \frac{SS_\varsigma}{2\gamma^2} = 0 \implies \hat\gamma = \frac{SS_\varsigma}{N-q};\ \text{then}$$
$$\hat\sigma_e^2 + p\hat\sigma_\varsigma^2 = \frac{SS_\varsigma}{N-q} \implies \hat\sigma_\varsigma^2 = \frac{1}{p}\left[\frac{SS_\varsigma}{N-q} - \frac{SS_e}{(p-1)(N-q)}\right] = \frac{1}{p}\big[MS_\varsigma - MS_e\big]. \qquad (2.2.13)$$
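A minimal sketch (our own code, not from the paper) of the restricted maximum likelihood formulas (2.2.12)-(2.2.13), assuming, as in the text, that $MS_e = SS_e/\big((p-1)(N-q)\big)$ and $MS_\varsigma = SS_\varsigma/(N-q)$:

```python
import numpy as np

rng = np.random.default_rng(3)
p, n = 3, [4, 6]                 # unbalanced group sizes
q, N = len(n), sum(n)
groups = [rng.normal(size=(nL, p)) for nL in n]

ss_e = ss_s = 0.0
for g in groups:
    cell, unit, grp = g.mean(axis=0), g.mean(axis=1), g.mean()
    ss_e += ((g - cell - unit[:, None] + grp) ** 2).sum()    # SS_e
    ss_s += p * ((unit - grp) ** 2).sum()                    # SS_sigma

ms_e = ss_e / ((p - 1) * (N - q))    # (2.2.12): sigma_e^2 hat = MS_e
ms_s = ss_s / (N - q)                # MS_sigma
sigma_s2 = (ms_s - ms_e) / p         # (2.2.13): sigma_sigma^2 hat
```

Note that `sigma_s2` can come out negative in small samples, which is why the restriction $\gamma \ge \sigma_e^2$ matters in practice.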
2.3. Important Estimators' Characteristics
We state various estimator properties as theorems in this section.
Theorem 2.3.1
The estimators of $\vartheta$, $\xi_{\mathcal{L}}$, $\zeta_{\varkappa}$, $(\xi\zeta)_{\mathcal{L}\varkappa}$ in our model are unbiased estimators.
Proof:
From equations (2.2), (2.5), and (2.7) we have
$$E(\hat\vartheta) = \frac{1}{Np}E\left(\sum_{\mathcal{L}=1}^{q}\sum_{\ell=1}^{n_{\mathcal{L}}}\sum_{\varkappa=1}^{p}\upsilon_{\ell\mathcal{L}\varkappa}\right) = \frac{1}{Np}E\left(\sum_{\mathcal{L},\ell,\varkappa}\big(\vartheta+\xi_{\mathcal{L}}+\zeta_{\varkappa}+(\xi\zeta)_{\mathcal{L}\varkappa}+\varsigma_{\ell(\mathcal{L})}+e_{\ell\mathcal{L}\varkappa}\big)\right) = \frac{1}{Np}(Np\vartheta) = \vartheta,$$
so $\hat\vartheta$ is an unbiased estimator of $\vartheta$.
And, from equations (2.2), (2.5), and (2.9) we have
$$E(\hat\xi_{\mathcal{L}}) = \frac{1}{n_{\mathcal{L}}p}E\left(\sum_{\ell=1}^{n_{\mathcal{L}}}\sum_{\varkappa=1}^{p}\upsilon_{\ell\mathcal{L}\varkappa}\right) - \vartheta = \frac{1}{n_{\mathcal{L}}p}\big(n_{\mathcal{L}}p\vartheta + n_{\mathcal{L}}p\xi_{\mathcal{L}}\big) - \vartheta = \vartheta + \xi_{\mathcal{L}} - \vartheta = \xi_{\mathcal{L}},$$
so $\hat\xi_{\mathcal{L}}$ is an unbiased estimator of $\xi_{\mathcal{L}}$.
Now, from equations (2.2), (2.5), and (2.11) we have
$$E(\hat\zeta_{\varkappa}) = \frac{1}{N}E\left(\sum_{\mathcal{L}=1}^{q}\sum_{\ell=1}^{n_{\mathcal{L}}}\upsilon_{\ell\mathcal{L}\varkappa}\right) - \vartheta = \frac{1}{N}\big(N\vartheta + N\zeta_{\varkappa}\big) - \vartheta = \vartheta + \zeta_{\varkappa} - \vartheta = \zeta_{\varkappa},$$
so $\hat\zeta_{\varkappa}$ is an unbiased estimator of $\zeta_{\varkappa}$.
Now, from equations (2.2), (2.5), and (2.13) we have
$$E\big(\widehat{(\xi\zeta)}_{\mathcal{L}\varkappa}\big) = \frac{1}{n_{\mathcal{L}}}E\left(\sum_{\ell=1}^{n_{\mathcal{L}}}\upsilon_{\ell\mathcal{L}\varkappa}\right) - \frac{1}{n_{\mathcal{L}}p}E\left(\sum_{\ell=1}^{n_{\mathcal{L}}}\sum_{\varkappa=1}^{p}\upsilon_{\ell\mathcal{L}\varkappa}\right) - \frac{1}{N}E\left(\sum_{\mathcal{L}=1}^{q}\sum_{\ell=1}^{n_{\mathcal{L}}}\upsilon_{\ell\mathcal{L}\varkappa}\right) + \vartheta$$
$$= \frac{1}{n_{\mathcal{L}}}\big(n_{\mathcal{L}}\vartheta + n_{\mathcal{L}}\xi_{\mathcal{L}} + n_{\mathcal{L}}\zeta_{\varkappa} + n_{\mathcal{L}}(\xi\zeta)_{\mathcal{L}\varkappa}\big) - \frac{1}{n_{\mathcal{L}}p}\big(n_{\mathcal{L}}p\vartheta + n_{\mathcal{L}}p\xi_{\mathcal{L}}\big) - \frac{1}{N}\big(N\vartheta + N\zeta_{\varkappa}\big) + \vartheta$$
$$= \big(\vartheta+\xi_{\mathcal{L}}+\zeta_{\varkappa}+(\xi\zeta)_{\mathcal{L}\varkappa}\big) - (\vartheta+\xi_{\mathcal{L}}) - (\vartheta+\zeta_{\varkappa}) + \vartheta = (\xi\zeta)_{\mathcal{L}\varkappa},$$
so $\widehat{(\xi\zeta)}_{\mathcal{L}\varkappa}$ is an unbiased estimator of $(\xi\zeta)_{\mathcal{L}\varkappa}$. ∎
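A Monte Carlo sanity check (our own simulation, not from the paper) of Theorem 2.3.1 for $\hat\vartheta$: generating data from model (2.1) with all fixed effects except $\vartheta$ set to zero and unit-level and error variances both equal to one, the average of $\hat\vartheta = \bar\upsilon_{\cdots}$ over many replications should be close to the true $\vartheta$.

```python
import numpy as np

rng = np.random.default_rng(4)
p, n = 3, [2, 3]         # unbalanced design
theta = 2.0              # true overall mean
reps = 2000
est = np.empty(reps)
for r in range(reps):
    obs = []
    for nL in n:
        unit_eff = rng.normal(0.0, 1.0, size=(nL, 1))   # sigma_sigma = 1
        err = rng.normal(0.0, 1.0, size=(nL, p))        # sigma_e = 1
        obs.append(theta + unit_eff + err)              # xi, zeta, interaction = 0
    est[r] = np.concatenate(obs).mean()                 # theta-hat = v-bar_...
mc_mean = est.mean()
```

With 2000 replications the Monte Carlo mean sits well within a few hundredths of the true value $\vartheta = 2$.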
Theorem 2.3.2
The estimators of $\vartheta$, $\xi_{\mathcal{L}}$, $\zeta_{\varkappa}$, $(\xi\zeta)_{\mathcal{L}\varkappa}$ in our model are consistent estimators.
Proof:
From equations (2.2) and (2.5) we have
$$\operatorname{var}(\hat\vartheta) = \operatorname{var}\left(\frac{1}{Np}\sum_{\mathcal{L}=1}^{q}\sum_{\ell=1}^{n_{\mathcal{L}}}\sum_{\varkappa=1}^{p}\upsilon_{\ell\mathcal{L}\varkappa}\right) = \frac{1}{N^2p^2}\operatorname{var}\left(\sum_{\mathcal{L},\ell,\varkappa}\upsilon_{\ell\mathcal{L}\varkappa}\right) = \frac{\sigma_\varsigma^2+\sigma_e^2}{m} \longrightarrow 0 \ \text{ as } m\longrightarrow\infty, \ \text{ where } m = Np,$$
so $\hat\vartheta$ is consistent for $\vartheta$.
And, from equations (2.5) and (2.9) we have
$$\operatorname{var}(\hat\xi_{\mathcal{L}}) = \operatorname{var}(\bar\upsilon_{\cdot\mathcal{L}\cdot}-\bar\upsilon_{\cdots}) = \operatorname{var}\left(\frac{1}{n_{\mathcal{L}}p}\sum_{\ell=1}^{n_{\mathcal{L}}}\sum_{\varkappa=1}^{p}\upsilon_{\ell\mathcal{L}\varkappa}\right) + \frac{\sigma_\varsigma^2+\sigma_e^2}{Np} = \frac{1}{n_{\mathcal{L}}^2p^2}(n_{\mathcal{L}}p)(\sigma_\varsigma^2+\sigma_e^2) + \frac{\sigma_\varsigma^2+\sigma_e^2}{Np},$$
$$\operatorname{var}(\hat\xi_{\mathcal{L}}) = \frac{(N+n_{\mathcal{L}})(\sigma_\varsigma^2+\sigma_e^2)}{m_1} \longrightarrow 0 \ \text{ as } m_1\longrightarrow\infty, \ \text{ where } m_1 = n_{\mathcal{L}}Np,$$
so $\hat\xi_{\mathcal{L}}$ is consistent for $\xi_{\mathcal{L}}$.
And, from equations (2.5) and (2.11) we have
$$\operatorname{var}(\hat\zeta_{\varkappa}) = \operatorname{var}(\bar\upsilon_{\cdot\cdot\varkappa}-\bar\upsilon_{\cdots}) = \frac{1}{N^2}\operatorname{var}\left(\sum_{\mathcal{L}=1}^{q}\sum_{\ell=1}^{n_{\mathcal{L}}}\upsilon_{\ell\mathcal{L}\varkappa}\right) + \frac{\sigma_\varsigma^2+\sigma_e^2}{Np} = \frac{(p+1)(\sigma_\varsigma^2+\sigma_e^2)}{m} \longrightarrow 0 \ \text{ as } m\longrightarrow\infty, \ \text{ where } m = Np,$$
so $\hat\zeta_{\varkappa}$ is consistent for $\zeta_{\varkappa}$.
And, from equations (2.5) and (2.13) we have
$$\operatorname{var}\big(\widehat{(\xi\zeta)}_{\mathcal{L}\varkappa}\big) = \operatorname{var}\left(\frac{1}{n_{\mathcal{L}}}\sum_{\ell=1}^{n_{\mathcal{L}}}\upsilon_{\ell\mathcal{L}\varkappa}\right) + \frac{\sigma_\varsigma^2+\sigma_e^2}{n_{\mathcal{L}}p} + \frac{\sigma_\varsigma^2+\sigma_e^2}{N} + \frac{\sigma_\varsigma^2+\sigma_e^2}{Np},$$
$$\operatorname{var}\big(\widehat{(\xi\zeta)}_{\mathcal{L}\varkappa}\big) = \frac{(p+1)(N+n_{\mathcal{L}})(\sigma_\varsigma^2+\sigma_e^2)}{m_1} \longrightarrow 0 \ \text{ as } m_1\longrightarrow\infty, \ \text{ where } m_1 = n_{\mathcal{L}}Np,$$
so $\widehat{(\xi\zeta)}_{\mathcal{L}\varkappa}$ is consistent for $(\xi\zeta)_{\mathcal{L}\varkappa}$. ∎
3. Estimator Distribution
Because the parameter estimators obtained by the methods above coincide and are linear in the normal data, their distributions are as follows.
From Theorems (2.3.1) and (2.3.2) we have $E(\hat\vartheta) = \vartheta$ and
$$\operatorname{var}(\hat\vartheta) = \operatorname{var}(\bar\upsilon_{\cdots}) = \frac{\sigma_\varsigma^2+\sigma_e^2}{Np}.$$
Then $\hat\vartheta \sim N\!\left(\vartheta, \dfrac{\sigma_\varsigma^2+\sigma_e^2}{Np}\right)$.
Now for $\hat\xi_{\mathcal{L}} = \bar\upsilon_{\cdot\mathcal{L}\cdot}-\bar\upsilon_{\cdots}$, from Theorems (2.3.1) and (2.3.2) we have $E(\hat\xi_{\mathcal{L}}) = \xi_{\mathcal{L}}$ and
$$\operatorname{var}(\hat\xi_{\mathcal{L}}) = \operatorname{var}(\bar\upsilon_{\cdot\mathcal{L}\cdot}) + \operatorname{var}(\bar\upsilon_{\cdots}) = \frac{\sigma_\varsigma^2+\sigma_e^2}{n_{\mathcal{L}}p} + \frac{\sigma_\varsigma^2+\sigma_e^2}{Np} = \frac{(N+n_{\mathcal{L}})(\sigma_\varsigma^2+\sigma_e^2)}{n_{\mathcal{L}}Np}.$$
Then $\hat\xi_{\mathcal{L}} \sim N\!\left(\xi_{\mathcal{L}}, \dfrac{(N+n_{\mathcal{L}})(\sigma_\varsigma^2+\sigma_e^2)}{n_{\mathcal{L}}Np}\right)$.
Now for $\hat\zeta_{\varkappa} = \bar\upsilon_{\cdot\cdot\varkappa}-\bar\upsilon_{\cdots}$, from Theorems (2.3.1) and (2.3.2) we have $E(\hat\zeta_{\varkappa}) = \zeta_{\varkappa}$ and
$$\operatorname{var}(\hat\zeta_{\varkappa}) = \operatorname{var}(\bar\upsilon_{\cdot\cdot\varkappa}) + \operatorname{var}(\bar\upsilon_{\cdots}) = \frac{p(\sigma_\varsigma^2+\sigma_e^2)}{Np} + \frac{\sigma_\varsigma^2+\sigma_e^2}{Np} = \frac{(p+1)(\sigma_\varsigma^2+\sigma_e^2)}{Np}.$$
Then $\hat\zeta_{\varkappa} \sim N\!\left(\zeta_{\varkappa}, \dfrac{(p+1)(\sigma_\varsigma^2+\sigma_e^2)}{Np}\right)$.
Now for $\widehat{(\xi\zeta)}_{\mathcal{L}\varkappa} = \bar\upsilon_{\cdot\mathcal{L}\varkappa}-\bar\upsilon_{\cdot\mathcal{L}\cdot}-\bar\upsilon_{\cdot\cdot\varkappa}+\bar\upsilon_{\cdots}$, from Theorems (2.3.1) and (2.3.2) we have $E\big(\widehat{(\xi\zeta)}_{\mathcal{L}\varkappa}\big) = (\xi\zeta)_{\mathcal{L}\varkappa}$ and
$$\operatorname{var}\big(\widehat{(\xi\zeta)}_{\mathcal{L}\varkappa}\big) = \frac{\sigma_\varsigma^2+\sigma_e^2}{n_{\mathcal{L}}} + \frac{\sigma_\varsigma^2+\sigma_e^2}{n_{\mathcal{L}}p} + \frac{\sigma_\varsigma^2+\sigma_e^2}{N} + \frac{\sigma_\varsigma^2+\sigma_e^2}{Np} = \frac{(Np+N+n_{\mathcal{L}}p+n_{\mathcal{L}})(\sigma_\varsigma^2+\sigma_e^2)}{n_{\mathcal{L}}Np}.$$
Then $\widehat{(\xi\zeta)}_{\mathcal{L}\varkappa} \sim N\!\left((\xi\zeta)_{\mathcal{L}\varkappa}, \dfrac{(Np+N+n_{\mathcal{L}}p+n_{\mathcal{L}})(\sigma_\varsigma^2+\sigma_e^2)}{n_{\mathcal{L}}Np}\right)$.
4. The Test
In this part, we determine the analytic form of the likelihood-ratio test for the model.
4.1. Consider $H_0: \vartheta = 0$ vs. $H_1: \vartheta \neq 0$.
Let
$$\Omega = \big\{(\vartheta,\xi_{\mathcal{L}},\zeta_{\varkappa},(\xi\zeta)_{\mathcal{L}\varkappa},\sigma_e^2,\sigma_\varsigma^2),\ \mathcal{L}=1,\dots,q,\ \varkappa=1,\dots,p : \vartheta,\xi_{\mathcal{L}},\zeta_{\varkappa},(\xi\zeta)_{\mathcal{L}\varkappa}\in\mathbb{R},\ \sigma_e^2>0,\ \sigma_\varsigma^2>0\big\},$$
$$\omega = \big\{(\vartheta,\xi_{\mathcal{L}},\zeta_{\varkappa},(\xi\zeta)_{\mathcal{L}\varkappa},\sigma_e^2,\sigma_\varsigma^2)\in\Omega,\ \mathcal{L}=1,\dots,q,\ \varkappa=1,\dots,p : \vartheta=0\big\}.$$
Then we have
$$L(\omega) = \big(2\pi(\sigma_e^2+\sigma_\varsigma^2)\big)^{-\frac{Np}{2}}\exp\left[\frac{-\sum_{\mathcal{L},\ell,\varkappa}\big(\upsilon_{\ell\mathcal{L}\varkappa}-\xi_{\mathcal{L}}-\zeta_{\varkappa}-(\xi\zeta)_{\mathcal{L}\varkappa}\big)^2}{2(\sigma_e^2+\sigma_\varsigma^2)}\right],$$
$$L(\Omega) = \big(2\pi(\sigma_e^2+\sigma_\varsigma^2)\big)^{-\frac{Np}{2}}\exp\left[\frac{-\sum_{\mathcal{L},\ell,\varkappa}\big(\upsilon_{\ell\mathcal{L}\varkappa}-\vartheta-\xi_{\mathcal{L}}-\zeta_{\varkappa}-(\xi\zeta)_{\mathcal{L}\varkappa}\big)^2}{2(\sigma_e^2+\sigma_\varsigma^2)}\right].$$
We know that
$$\bar\upsilon_{\cdots} = \frac{\sum_{\mathcal{L},\ell,\varkappa}\upsilon_{\ell\mathcal{L}\varkappa}}{Np},\quad \bar\upsilon_{\cdot\mathcal{L}\cdot} = \frac{\sum_{\ell}\sum_{\varkappa}\upsilon_{\ell\mathcal{L}\varkappa}}{n_{\mathcal{L}}p},\quad \bar\upsilon_{\cdot\cdot\varkappa} = \frac{\sum_{\mathcal{L}}\sum_{\ell}\upsilon_{\ell\mathcal{L}\varkappa}}{N},\quad \bar\upsilon_{\cdot\mathcal{L}\varkappa} = \frac{\sum_{\ell}\upsilon_{\ell\mathcal{L}\varkappa}}{n_{\mathcal{L}}},\quad \bar\upsilon_{\ell\mathcal{L}\cdot} = \frac{\sum_{\varkappa}\upsilon_{\ell\mathcal{L}\varkappa}}{p},$$
$$\hat\vartheta = \bar\upsilon_{\cdots},\quad \hat\xi_{\mathcal{L}} = \bar\upsilon_{\cdot\mathcal{L}\cdot}-\bar\upsilon_{\cdots},\quad \hat\zeta_{\varkappa} = \bar\upsilon_{\cdot\cdot\varkappa}-\bar\upsilon_{\cdots},\quad \widehat{(\xi\zeta)}_{\mathcal{L}\varkappa} = \bar\upsilon_{\cdot\mathcal{L}\varkappa}-\bar\upsilon_{\cdot\mathcal{L}\cdot}-\bar\upsilon_{\cdot\cdot\varkappa}+\bar\upsilon_{\cdots},$$
$$\text{and}\quad \hat\sigma_\varsigma^2+\hat\sigma_e^2 = \frac{\sum_{\mathcal{L}=1}^{q}\sum_{\ell=1}^{n_{\mathcal{L}}}\sum_{\varkappa=1}^{p}\big(\upsilon_{\ell\mathcal{L}\varkappa}-\bar\upsilon_{\cdot\mathcal{L}\varkappa}\big)^2}{Np}.$$
Let (writing $S = \sum_{\mathcal{L},\ell,\varkappa}(\upsilon_{\ell\mathcal{L}\varkappa}-\bar\upsilon_{\cdot\mathcal{L}\varkappa})^2$ for the pooled within-cell sum of squares)
$$\lambda = \frac{L(\hat\omega)}{L(\hat\Omega)} = \frac{\exp\left[\dfrac{-\sum_{\mathcal{L},\ell,\varkappa}(\upsilon_{\ell\mathcal{L}\varkappa}-\bar\upsilon_{\cdot\mathcal{L}\varkappa}+\bar\upsilon_{\cdots})^2}{2S/(Np)}\right]}{\exp\left[\dfrac{-S}{2S/(Np)}\right]} = \frac{\exp\left[\dfrac{-S - Np\,\bar\upsilon_{\cdots}^2}{2S/(Np)}\right]}{\exp\left[\dfrac{-Np}{2}\right]},$$
$$\lambda = \exp\left[\frac{-Np}{2}\left(1+\frac{Np\,\bar\upsilon_{\cdots}^2}{S}\right)+\frac{Np}{2}\right] \implies \ln\lambda = \frac{-(Np\,\bar\upsilon_{\cdots})^2}{2S} \implies -2\ln\lambda = \frac{(Np\,\bar\upsilon_{\cdots})^2}{S}.$$
Since $\bar\upsilon_{\cdots} \sim N\!\left(\vartheta, \frac{\sigma_\varsigma^2+\sigma_e^2}{Np}\right)$, under $H_0$ we have
$$\frac{\bar\upsilon_{\cdots}-0}{\sqrt{\dfrac{\sigma_\varsigma^2+\sigma_e^2}{Np}}} \sim N(0,1) \implies \frac{Np\,\bar\upsilon_{\cdots}^2}{\sigma_\varsigma^2+\sigma_e^2} \sim \chi^2(1), \qquad \frac{S}{\sigma_\varsigma^2+\sigma_e^2} \sim \chi^2\big(qp(N-1)\big).$$
Then, after dividing each chi-square by its degrees of freedom, we have
$$\frac{\dfrac{Np\,\bar\upsilon_{\cdots}^2}{\sigma_\varsigma^2+\sigma_e^2}}{\dfrac{S}{\sigma_\varsigma^2+\sigma_e^2}} \sim \frac{\chi^2(1)}{\chi^2\big(qp(N-1)\big)} \sim F\big(1, qp(N-1)\big).$$
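A minimal sketch (our own code and simulated data) of the Section 4.1 statistic: $-2\ln\lambda$ compares the $(Np\,\bar\upsilon_{\cdots})^2$ term, in the form displayed above, against the pooled within-cell sum of squares $\sum(\upsilon_{\ell\mathcal{L}\varkappa}-\bar\upsilon_{\cdot\mathcal{L}\varkappa})^2$.

```python
import numpy as np

rng = np.random.default_rng(5)
p, n = 3, [2, 4]                 # unbalanced design
N = sum(n)
groups = [rng.normal(size=(nL, p)) for nL in n]   # H0: theta = 0 holds here

grand = sum(g.sum() for g in groups) / (N * p)               # v-bar_...
within = sum(((g - g.mean(axis=0)) ** 2).sum() for g in groups)

stat = (N * p * grand) ** 2 / within     # -2 ln(lambda) as displayed in the text
```

Large values of `stat` are evidence against $H_0:\vartheta=0$; its reference distribution is discussed above.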
4.2. Consider $H_0: \xi_{\mathcal{L}} = 0$ vs. $H_1: \xi_{\mathcal{L}} \neq 0$, $\mathcal{L} = 1,\dots,q$.
Then we have
$$L(\omega) = \big(2\pi(\sigma_e^2+\sigma_\varsigma^2)\big)^{-\frac{Np}{2}}\exp\left[\frac{-\sum_{\mathcal{L},\ell,\varkappa}\big(\upsilon_{\ell\mathcal{L}\varkappa}-\vartheta-\zeta_{\varkappa}-(\xi\zeta)_{\mathcal{L}\varkappa}\big)^2}{2(\sigma_e^2+\sigma_\varsigma^2)}\right],\quad L(\Omega) = \big(2\pi(\sigma_e^2+\sigma_\varsigma^2)\big)^{-\frac{Np}{2}}\exp\left[\frac{-\sum_{\mathcal{L},\ell,\varkappa}\big(\upsilon_{\ell\mathcal{L}\varkappa}-\vartheta-\xi_{\mathcal{L}}-\zeta_{\varkappa}-(\xi\zeta)_{\mathcal{L}\varkappa}\big)^2}{2(\sigma_e^2+\sigma_\varsigma^2)}\right].$$
Then let (again with $S = \sum_{\mathcal{L},\ell,\varkappa}(\upsilon_{\ell\mathcal{L}\varkappa}-\bar\upsilon_{\cdot\mathcal{L}\varkappa})^2$)
$$\lambda = \frac{L(\hat\omega)}{L(\hat\Omega)} = \frac{\exp\left[\dfrac{-\sum_{\mathcal{L},\ell,\varkappa}(\upsilon_{\ell\mathcal{L}\varkappa}-\bar\upsilon_{\cdot\mathcal{L}\varkappa}+\bar\upsilon_{\cdot\mathcal{L}\cdot}-\bar\upsilon_{\cdots})^2}{2S/(Np)}\right]}{\exp\left[\dfrac{-S}{2S/(Np)}\right]} = \frac{\exp\left[\dfrac{-S - \sum_{\mathcal{L},\ell,\varkappa}(\bar\upsilon_{\cdot\mathcal{L}\cdot}-\bar\upsilon_{\cdots})^2}{2S/(Np)}\right]}{\exp\left[\dfrac{-Np}{2}\right]},$$
$$\lambda = \exp\left[\frac{-Np}{2}\left(1+\frac{\sum_{\mathcal{L},\ell,\varkappa}(\bar\upsilon_{\cdot\mathcal{L}\cdot}-\bar\upsilon_{\cdots})^2}{S}\right)+\frac{Np}{2}\right] \implies \ln\lambda = \frac{-Np\sum_{\mathcal{L},\ell,\varkappa}(\bar\upsilon_{\cdot\mathcal{L}\cdot}-\bar\upsilon_{\cdots})^2}{2S},$$
$$\implies -2\ln\lambda = \frac{\sum_{\mathcal{L},\ell,\varkappa}\big[\sqrt{Np}\,(\bar\upsilon_{\cdot\mathcal{L}\cdot}-\bar\upsilon_{\cdots})\big]^2}{S}.$$
Since $\hat\xi_{\mathcal{L}} \sim N\!\left(\xi_{\mathcal{L}}, \frac{(N+n_{\mathcal{L}})(\sigma_\varsigma^2+\sigma_e^2)}{n_{\mathcal{L}}Np}\right)$, under $H_0$
$$\frac{\sum_{\mathcal{L},\ell,\varkappa}\big[\sqrt{Np}\,(\bar\upsilon_{\cdot\mathcal{L}\cdot}-\bar\upsilon_{\cdots})\big]^2}{\sigma_\varsigma^2+\sigma_e^2} \sim \chi^2(q-1), \qquad \frac{S}{\sigma_\varsigma^2+\sigma_e^2} \sim \chi^2\big(qp(N-1)\big).$$
Then, after dividing each chi-square by its degrees of freedom, we have
$$\frac{\dfrac{\sum_{\mathcal{L},\ell,\varkappa}\big[\sqrt{Np}\,(\bar\upsilon_{\cdot\mathcal{L}\cdot}-\bar\upsilon_{\cdots})\big]^2}{\sigma_\varsigma^2+\sigma_e^2}}{\dfrac{S}{\sigma_\varsigma^2+\sigma_e^2}} \sim \frac{\chi^2(q-1)}{\chi^2\big(qp(N-1)\big)} \sim F\big(q-1, qp(N-1)\big).$$
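A sketch (our own code and simulated data) of the Section 4.2 statistic for $H_0:\xi_{\mathcal{L}}=0$: the triple sum in the numerator depends only on the group index, so it reduces to $Np \cdot p\sum_{\mathcal{L}} n_{\mathcal{L}}(\bar\upsilon_{\cdot\mathcal{L}\cdot}-\bar\upsilon_{\cdots})^2$.

```python
import numpy as np

rng = np.random.default_rng(6)
p, n = 3, [3, 2, 4]              # unbalanced design with q = 3 groups
N = sum(n)
groups = [rng.normal(size=(nL, p)) for nL in n]   # H0 holds for these data

grand = sum(g.sum() for g in groups) / (N * p)
# numerator: sum over l, L, k of [sqrt(Np)(v-bar_.L. - v-bar_...)]^2
num = N * p * sum(nL * p * (g.mean() - grand) ** 2 for nL, g in zip(n, groups))
# denominator: pooled within-cell sum of squares
den = sum(((g - g.mean(axis=0)) ** 2).sum() for g in groups)
stat = num / den                 # -2 ln(lambda) of Section 4.2
```

The tests of Sections 4.3 and 4.4 follow the same pattern with the time-mean and interaction contrasts in the numerator.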
4.3. Consider $H_0: \zeta_{\varkappa} = 0$ vs. $H_1: \zeta_{\varkappa} \neq 0$, $\varkappa = 1,\dots,p$.
Then we have
$$L(\omega) = \big(2\pi(\sigma_e^2+\sigma_\varsigma^2)\big)^{-\frac{Np}{2}}\exp\left[\frac{-\sum_{\mathcal{L},\ell,\varkappa}\big(\upsilon_{\ell\mathcal{L}\varkappa}-\vartheta-\xi_{\mathcal{L}}-(\xi\zeta)_{\mathcal{L}\varkappa}\big)^2}{2(\sigma_e^2+\sigma_\varsigma^2)}\right],\quad L(\Omega) = \big(2\pi(\sigma_e^2+\sigma_\varsigma^2)\big)^{-\frac{Np}{2}}\exp\left[\frac{-\sum_{\mathcal{L},\ell,\varkappa}\big(\upsilon_{\ell\mathcal{L}\varkappa}-\vartheta-\xi_{\mathcal{L}}-\zeta_{\varkappa}-(\xi\zeta)_{\mathcal{L}\varkappa}\big)^2}{2(\sigma_e^2+\sigma_\varsigma^2)}\right].$$
Then let (again with $S = \sum_{\mathcal{L},\ell,\varkappa}(\upsilon_{\ell\mathcal{L}\varkappa}-\bar\upsilon_{\cdot\mathcal{L}\varkappa})^2$)
$$\lambda = \frac{L(\hat\omega)}{L(\hat\Omega)} = \frac{\exp\left[\dfrac{-\sum_{\mathcal{L},\ell,\varkappa}(\upsilon_{\ell\mathcal{L}\varkappa}-\bar\upsilon_{\cdot\mathcal{L}\varkappa}+\bar\upsilon_{\cdot\cdot\varkappa}-\bar\upsilon_{\cdots})^2}{2S/(Np)}\right]}{\exp\left[\dfrac{-S}{2S/(Np)}\right]} = \frac{\exp\left[\dfrac{-S - \sum_{\mathcal{L},\ell,\varkappa}(\bar\upsilon_{\cdot\cdot\varkappa}-\bar\upsilon_{\cdots})^2}{2S/(Np)}\right]}{\exp\left[\dfrac{-Np}{2}\right]},$$
$$\lambda = \exp\left[\frac{-Np}{2}\left(1+\frac{\sum_{\mathcal{L},\ell,\varkappa}(\bar\upsilon_{\cdot\cdot\varkappa}-\bar\upsilon_{\cdots})^2}{S}\right)+\frac{Np}{2}\right] \implies \ln\lambda = \frac{-Np\sum_{\mathcal{L},\ell,\varkappa}(\bar\upsilon_{\cdot\cdot\varkappa}-\bar\upsilon_{\cdots})^2}{2S},$$
$$\implies -2\ln\lambda = \frac{\sum_{\mathcal{L},\ell,\varkappa}\big[\sqrt{Np}\,(\bar\upsilon_{\cdot\cdot\varkappa}-\bar\upsilon_{\cdots})\big]^2}{S}.$$
Since $\hat\zeta_{\varkappa} \sim N\!\left(\zeta_{\varkappa}, \frac{(p+1)(\sigma_\varsigma^2+\sigma_e^2)}{Np}\right)$, under $H_0$
$$\frac{\sum_{\mathcal{L},\ell,\varkappa}\big[\sqrt{Np}\,(\bar\upsilon_{\cdot\cdot\varkappa}-\bar\upsilon_{\cdots})\big]^2}{\sigma_\varsigma^2+\sigma_e^2} \sim \chi^2(p-1), \qquad \frac{S}{\sigma_\varsigma^2+\sigma_e^2} \sim \chi^2\big(qp(N-1)\big).$$
Then, after dividing each chi-square by its degrees of freedom, we have
$$\frac{\dfrac{\sum_{\mathcal{L},\ell,\varkappa}\big[\sqrt{Np}\,(\bar\upsilon_{\cdot\cdot\varkappa}-\bar\upsilon_{\cdots})\big]^2}{\sigma_\varsigma^2+\sigma_e^2}}{\dfrac{S}{\sigma_\varsigma^2+\sigma_e^2}} \sim \frac{\chi^2(p-1)}{\chi^2\big(qp(N-1)\big)} \sim F\big(p-1, qp(N-1)\big).$$
4.4. Consider $H_0: (\xi\zeta)_{\mathcal{L}\varkappa} = 0$ vs. $H_1: (\xi\zeta)_{\mathcal{L}\varkappa} \neq 0$, $\varkappa = 1,\dots,p$, $\mathcal{L} = 1,\dots,q$.
Then we have
$$L(\omega) = \big(2\pi(\sigma_e^2+\sigma_\varsigma^2)\big)^{-\frac{Np}{2}}\exp\left[\frac{-\sum_{\mathcal{L},\ell,\varkappa}\big(\upsilon_{\ell\mathcal{L}\varkappa}-\vartheta-\xi_{\mathcal{L}}-\zeta_{\varkappa}\big)^2}{2(\sigma_e^2+\sigma_\varsigma^2)}\right],\quad L(\Omega) = \big(2\pi(\sigma_e^2+\sigma_\varsigma^2)\big)^{-\frac{Np}{2}}\exp\left[\frac{-\sum_{\mathcal{L},\ell,\varkappa}\big(\upsilon_{\ell\mathcal{L}\varkappa}-\vartheta-\xi_{\mathcal{L}}-\zeta_{\varkappa}-(\xi\zeta)_{\mathcal{L}\varkappa}\big)^2}{2(\sigma_e^2+\sigma_\varsigma^2)}\right].$$
Then let (again with $S = \sum_{\mathcal{L},\ell,\varkappa}(\upsilon_{\ell\mathcal{L}\varkappa}-\bar\upsilon_{\cdot\mathcal{L}\varkappa})^2$)
$$\lambda = \frac{L(\hat\omega)}{L(\hat\Omega)} = \frac{\exp\left[\dfrac{-\sum_{\mathcal{L},\ell,\varkappa}(\upsilon_{\ell\mathcal{L}\varkappa}-\bar\upsilon_{\cdot\mathcal{L}\cdot}-\bar\upsilon_{\cdot\cdot\varkappa}+\bar\upsilon_{\cdots})^2}{2S/(Np)}\right]}{\exp\left[\dfrac{-S}{2S/(Np)}\right]} = \frac{\exp\left[\dfrac{-S - \sum_{\mathcal{L},\ell,\varkappa}(\bar\upsilon_{\cdot\mathcal{L}\varkappa}-\bar\upsilon_{\cdot\mathcal{L}\cdot}-\bar\upsilon_{\cdot\cdot\varkappa}+\bar\upsilon_{\cdots})^2}{2S/(Np)}\right]}{\exp\left[\dfrac{-Np}{2}\right]},$$
$$\lambda = \exp\left[\frac{-Np}{2}\left(1+\frac{\sum_{\mathcal{L},\ell,\varkappa}(\bar\upsilon_{\cdot\mathcal{L}\varkappa}-\bar\upsilon_{\cdot\mathcal{L}\cdot}-\bar\upsilon_{\cdot\cdot\varkappa}+\bar\upsilon_{\cdots})^2}{S}\right)+\frac{Np}{2}\right] \implies \ln\lambda = \frac{-Np\sum_{\mathcal{L},\ell,\varkappa}(\bar\upsilon_{\cdot\mathcal{L}\varkappa}-\bar\upsilon_{\cdot\mathcal{L}\cdot}-\bar\upsilon_{\cdot\cdot\varkappa}+\bar\upsilon_{\cdots})^2}{2S},$$
$$\implies -2\ln\lambda = \frac{\sum_{\mathcal{L},\ell,\varkappa}\big[\sqrt{Np}\,(\bar\upsilon_{\cdot\mathcal{L}\varkappa}-\bar\upsilon_{\cdot\mathcal{L}\cdot}-\bar\upsilon_{\cdot\cdot\varkappa}+\bar\upsilon_{\cdots})\big]^2}{S}.$$
Since $\widehat{(\xi\zeta)}_{\mathcal{L}\varkappa} \sim N\!\left((\xi\zeta)_{\mathcal{L}\varkappa}, \frac{(Np+N+n_{\mathcal{L}}p+n_{\mathcal{L}})(\sigma_\varsigma^2+\sigma_e^2)}{n_{\mathcal{L}}Np}\right)$, under $H_0$
$$\frac{\sum_{\mathcal{L},\ell,\varkappa}\big[\sqrt{Np}\,(\bar\upsilon_{\cdot\mathcal{L}\varkappa}-\bar\upsilon_{\cdot\mathcal{L}\cdot}-\bar\upsilon_{\cdot\cdot\varkappa}+\bar\upsilon_{\cdots})\big]^2}{\sigma_e^2+\sigma_\varsigma^2} \sim \chi^2\big((p-1)(q-1)\big), \qquad \frac{S}{\sigma_e^2+\sigma_\varsigma^2} \sim \chi^2\big(qp(N-1)\big).$$
Then, after dividing each chi-square by its degrees of freedom, we have
$$\frac{\dfrac{\sum_{\mathcal{L},\ell,\varkappa}\big[\sqrt{Np}\,(\bar\upsilon_{\cdot\mathcal{L}\varkappa}-\bar\upsilon_{\cdot\mathcal{L}\cdot}-\bar\upsilon_{\cdot\cdot\varkappa}+\bar\upsilon_{\cdots})\big]^2}{\sigma_e^2+\sigma_\varsigma^2}}{\dfrac{S}{\sigma_e^2+\sigma_\varsigma^2}} \sim \frac{\chi^2\big((p-1)(q-1)\big)}{\chi^2\big(qp(N-1)\big)} \sim F\big((p-1)(q-1), qp(N-1)\big).$$
5. First order approximations
We may now discuss possible applications of the Central Limit Theorem and the Law of Large Numbers to the likelihood function in order to derive asymptotic statements when $n$, or some other measure of information, is large. The likelihood and significance functions each evaluate the data and produce a conclusion for a parameter: the significance function calculates the probability to the left (or right) of the data point, while the likelihood function calculates the probability, up to a multiplicative constant, at the data point. The Central Limit Theorem and the Law of Large Numbers serve as the foundation for three widely used asymptotic approaches to deriving first order approximations. The following three statistics are utilized when applying these techniques.
5.1. The standardized score
$$s = S(\theta)\,\mathcal{J}^{-\frac{1}{2}}, \quad\text{where } \theta\in\{\vartheta,\xi_{\mathcal{L}},\zeta_{\varkappa},(\xi\zeta)_{\mathcal{L}\varkappa},\sigma_\varsigma^2,\sigma_e^2\},\quad \mathcal{J} = -\left.\frac{\partial^2\ln L}{\partial\theta^2}\right|_{\theta=\hat\theta},\quad S(\theta) = \frac{\partial\ln L}{\partial\theta}.$$
From (2.20) and (2.21) we now have
$$s_1 = \left(\frac{\sum_{\mathcal{L},\ell,\varkappa}\upsilon_{\ell\mathcal{L}\varkappa}-Np\hat\vartheta}{\sigma_e^2+\sigma_\varsigma^2}\right)\left(\frac{Np}{\sigma_e^2+\sigma_\varsigma^2}\right)^{-\frac{1}{2}}, \qquad s_2 = \left(\frac{\sum_{\ell}\sum_{\varkappa}\upsilon_{\ell\mathcal{L}\varkappa}-n_{\mathcal{L}}p\hat\vartheta-n_{\mathcal{L}}p\hat\xi_{\mathcal{L}}}{\sigma_e^2+\sigma_\varsigma^2}\right)\left(\frac{n_{\mathcal{L}}pq}{\sigma_e^2+\sigma_\varsigma^2}\right)^{-\frac{1}{2}},$$
$$s_3 = \left(\frac{p\sum_{\mathcal{L}}\sum_{\ell}\upsilon_{\ell\mathcal{L}\varkappa}-pN\hat\vartheta-pN\hat\zeta_{\varkappa}}{\sigma_e^2+\sigma_\varsigma^2}\right)\left(\frac{pN}{\sigma_e^2+\sigma_\varsigma^2}\right)^{-\frac{1}{2}}, \qquad s_4 = \left(\frac{qp\big(\sum_{\ell}\upsilon_{\ell\mathcal{L}\varkappa}-n_{\mathcal{L}}\hat\vartheta-n_{\mathcal{L}}\hat\xi_{\mathcal{L}}-n_{\mathcal{L}}\hat\zeta_{\varkappa}-n_{\mathcal{L}}\widehat{(\xi\zeta)}_{\mathcal{L}\varkappa}\big)}{\sigma_e^2+\sigma_\varsigma^2}\right)\left(\frac{qpn_{\mathcal{L}}}{\sigma_e^2+\sigma_\varsigma^2}\right)^{-\frac{1}{2}},$$
$$s_5 = s_6 = \left(\frac{-Np}{2(\sigma_\varsigma^2+\sigma_e^2)}+\frac{2\sum_{\mathcal{L},\ell,\varkappa}\big(\upsilon_{\ell\mathcal{L}\varkappa}-\vartheta-\xi_{\mathcal{L}}-\zeta_{\varkappa}-(\xi\zeta)_{\mathcal{L}\varkappa}\big)^2}{\big(2(\sigma_\varsigma^2+\sigma_e^2)\big)^2}\right)\left(\frac{Np}{2(\hat\sigma_\varsigma^2+\hat\sigma_e^2)^2}-\frac{2\sum_{\mathcal{L},\ell,\varkappa}\big(\upsilon_{\ell\mathcal{L}\varkappa}-\hat\vartheta-\hat\xi_{\mathcal{L}}-\hat\zeta_{\varkappa}-\widehat{(\xi\zeta)}_{\mathcal{L}\varkappa}\big)^2}{\big(2(\hat\sigma_\varsigma^2+\hat\sigma_e^2)\big)^3}\right)^{-\frac{1}{2}}. \qquad (5.1)$$
5.2. The standardized maximum likelihood estimate
$$q = (\hat\theta-\theta)\,\mathcal{J}^{\frac{1}{2}};$$
the exponent here is $+\tfrac{1}{2}$, since the departure $\hat\theta-\theta$ is divided by its standard deviation $\mathcal{J}^{-1/2}$. Then we have
$$q_1 = (\hat\vartheta-\vartheta)\left(\frac{Np}{\sigma_e^2+\sigma_\varsigma^2}\right)^{\frac{1}{2}}, \qquad q_2 = (\hat\xi_{\mathcal{L}}-\xi_{\mathcal{L}})\left(\frac{n_{\mathcal{L}}pq}{\sigma_e^2+\sigma_\varsigma^2}\right)^{\frac{1}{2}},$$
$$q_3 = (\hat\zeta_{\varkappa}-\zeta_{\varkappa})\left(\frac{pN}{\sigma_e^2+\sigma_\varsigma^2}\right)^{\frac{1}{2}}, \qquad q_4 = \big(\widehat{(\xi\zeta)}_{\mathcal{L}\varkappa}-(\xi\zeta)_{\mathcal{L}\varkappa}\big)\left(\frac{qpn_{\mathcal{L}}}{\sigma_e^2+\sigma_\varsigma^2}\right)^{\frac{1}{2}},$$
$$q_5 = (\hat\sigma_\varsigma^2-\sigma_\varsigma^2)\left(\frac{Np}{2(\hat\sigma_\varsigma^2+\hat\sigma_e^2)^2}-\frac{2\sum_{\mathcal{L},\ell,\varkappa}\big(\upsilon_{\ell\mathcal{L}\varkappa}-\hat\vartheta-\hat\xi_{\mathcal{L}}-\hat\zeta_{\varkappa}-\widehat{(\xi\zeta)}_{\mathcal{L}\varkappa}\big)^2}{\big(2(\hat\sigma_\varsigma^2+\hat\sigma_e^2)\big)^3}\right)^{\frac{1}{2}},$$
$$q_6 = (\hat\sigma_e^2-\sigma_e^2)\left(\frac{Np}{2(\hat\sigma_\varsigma^2+\hat\sigma_e^2)^2}-\frac{2\sum_{\mathcal{L},\ell,\varkappa}\big(\upsilon_{\ell\mathcal{L}\varkappa}-\hat\vartheta-\hat\xi_{\mathcal{L}}-\hat\zeta_{\varkappa}-\widehat{(\xi\zeta)}_{\mathcal{L}\varkappa}\big)^2}{\big(2(\hat\sigma_\varsigma^2+\hat\sigma_e^2)\big)^3}\right)^{\frac{1}{2}}. \qquad (5.2)$$
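For instance, the standardized maximum likelihood departure for $\vartheta$ is $q_1 = (\hat\vartheta-\vartheta)\sqrt{Np/(\sigma_e^2+\sigma_\varsigma^2)}$, i.e. the departure divided by the standard deviation of $\hat\vartheta$ from Section 3. A minimal sketch with illustrative numbers (not real data; all values below are our own assumptions):

```python
import math

N, p = 10, 4                      # design sizes (illustrative)
sigma_e2, sigma_s2 = 1.5, 0.5     # assumed variance components
theta, theta_hat = 0.0, 0.31      # hypothesized value and an illustrative estimate

# q1 = (theta-hat - theta) * sqrt(Np / (sigma_e^2 + sigma_sigma^2))
q1 = (theta_hat - theta) * math.sqrt(N * p / (sigma_e2 + sigma_s2))
```

Under the null, `q1` is referred to the standard normal distribution, exactly as the significance approximations of (5.6) below do.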
5.3. The signed likelihood ratio statistic
$$r = \operatorname{sgn}(\hat\theta-\theta)\,\big[-2\ln\lambda\big]^{\frac{1}{2}}.$$
The significance function has three approximate values given by the three standardized variables: $\Phi(s)$, $\Phi(q)$, and $\Phi(r)$, where $\Phi$ denotes the standard normal distribution function.
The signed likelihood root and the maximum likelihood departure are significant quantities in the current context because they serve as third- or lower-order approximations for assessing a scalar interest parameter in the unbalanced one-way variance components model.
Testing a real interest parameter $\Psi$ in the presence of a vector nuisance parameter $\alpha$ is a common problem. In this part, we construct significance tests for real parameters $\Psi(\theta)$, the canonical interest parameter, using significance functions.
Let $H_0: \Psi = \Psi_0$ vs. $H_1: \Psi \neq \Psi_0$.
The maximum likelihood estimate of $\theta$ has an approximate $N\big(\theta, \mathcal{J}^{\theta\theta}(\hat\theta)\big)$ distribution, where $\mathcal{J}^{\theta\theta}$ designates the asymptotic variance of $\hat\theta$ and is obtained from the partitioned information matrix
$$\begin{pmatrix}\mathcal{J}_{\alpha\alpha} & \mathcal{J}_{\alpha\Psi}\\ \mathcal{J}_{\Psi\alpha} & \mathcal{J}_{\Psi\Psi}\end{pmatrix}.$$
Using standard multivariate normal theory we find that the asymptotic distribution of $\hat\Psi$ is normal with mean $\Psi$ and estimated variance $\mathcal{J}^{\Psi\Psi}(\hat\theta_\Psi)$, where
$$\big(\mathcal{J}^{\Psi\Psi}(\hat\theta_\Psi)\big)^{-1} = \mathcal{J}_{\Psi\Psi} - \mathcal{J}_{\Psi\alpha}\mathcal{J}_{\alpha\alpha}^{-1}\mathcal{J}_{\alpha\Psi}\big|_{\hat\theta_\Psi}.$$
Under the null hypothesis, the maximum likelihood departure has the well-known standardized pivotal form
$$q = (\hat\Psi-\Psi)\big(\mathcal{J}^{\Psi\Psi}\big)^{-\frac{1}{2}}. \qquad (5.3)$$
Using (5.3) and the factorization
$$|\mathcal{J}_{\theta\theta}| = |\mathcal{J}_{\alpha\alpha}|\,\big(\mathcal{J}^{\Psi\Psi}\big)^{-1}, \qquad (5.4)$$
the statistic $q$ can be expressed as
$$q = (\hat\Psi-\Psi)\left(\frac{|\mathcal{J}_{\theta\theta}(\hat\theta)|}{|\mathcal{J}_{\alpha\alpha}(\hat\theta)|}\right)^{\frac{1}{2}}. \qquad (5.5)$$
Another important statistic for the construction of third order approximations is the signed likelihood ratio statistic $r$, defined as
$$r = \operatorname{sgn}(\hat\Psi-\Psi)\sqrt{\mathcal{D}}, \quad\text{where } \mathcal{D} = 2\big[\ln L(\hat\theta) - \ln L(\hat\theta_\Psi)\big] = 2\ln\left(\frac{L(\hat\theta)}{L(\hat\theta_\Psi)}\right).$$
$\mathcal{D}$ is called the deviance and $r$ is called the directed deviance.
The construction of the above two standardized variables leads to two first order approximations for the significance function:
$$p_1(\theta) = \Phi(r), \qquad p_2(\theta) = \Phi(q), \quad\text{where } \theta\in\{\vartheta,\xi_{\mathcal{L}},\zeta_{\varkappa},(\xi\zeta)_{\mathcal{L}\varkappa},\sigma_\varsigma^2,\sigma_e^2\}. \qquad (5.6)$$
In (5.6) we have $p_i(\theta) = p(\theta) + O(n^{-1/2})$, where $p(\theta)$ represents the exact probability in each case; the bound as given applies over a bounded range of the standardized variable. Third order approximations for significance functions can be expressed in the Lugannani and Rice (1980) type formula $\Phi(r) + \phi(r)\big(r^{-1} - q^{-1}\big)$, and the $r^*$ type formula $\Phi\big(r + r^{-1}\ln(q/r)\big)$, where $\phi$ is the standard normal density.
6. Conclusions
In this paper, we reached the following conclusions.
1- The maximum likelihood estimators of the parameters of the repeated measurements model are $\hat\vartheta = \bar\upsilon_{\cdots}$, $\hat\xi_{\mathcal{L}} = \bar\upsilon_{\cdot\mathcal{L}\cdot}-\bar\upsilon_{\cdots}$, $\hat\zeta_{\varkappa} = \bar\upsilon_{\cdot\cdot\varkappa}-\bar\upsilon_{\cdots}$, $\widehat{(\xi\zeta)}_{\mathcal{L}\varkappa} = \bar\upsilon_{\cdot\mathcal{L}\varkappa}-\bar\upsilon_{\cdot\mathcal{L}\cdot}-\bar\upsilon_{\cdot\cdot\varkappa}+\bar\upsilon_{\cdots}$, $\hat\sigma_e^2 = MS_e$, and $\hat\sigma_\varsigma^2 = \dfrac{SS_\varsigma}{Np} + \dfrac{(p-1)(N-q)-Np}{Np}\,MS_e$.
2- The maximum likelihood estimators of the parameters of the repeated measurements model are unbiased and consistent.
3- The restricted maximum likelihood estimators of the parameters of the repeated measurements model are $\hat\vartheta = \bar\upsilon_{\cdots}$, $\hat\xi_{\mathcal{L}} = \bar\upsilon_{\cdot\mathcal{L}\cdot}-\bar\upsilon_{\cdots}$, $\hat\zeta_{\varkappa} = \bar\upsilon_{\cdot\cdot\varkappa}-\bar\upsilon_{\cdots}$, $\widehat{(\xi\zeta)}_{\mathcal{L}\varkappa} = \bar\upsilon_{\cdot\mathcal{L}\varkappa}-\bar\upsilon_{\cdot\mathcal{L}\cdot}-\bar\upsilon_{\cdot\cdot\varkappa}+\bar\upsilon_{\cdots}$, $\hat\sigma_e^2 = MS_e$, and $\hat\sigma_\varsigma^2 = \frac{1}{p}\big[MS_\varsigma - MS_e\big]$.
4- The distributions of the estimators of the parameters of the repeated measurements model are $\hat\vartheta \sim N\!\left(\vartheta,\frac{\sigma_\varsigma^2+\sigma_e^2}{Np}\right)$, $\hat\xi_{\mathcal{L}} \sim N\!\left(\xi_{\mathcal{L}},\frac{(N+n_{\mathcal{L}})(\sigma_\varsigma^2+\sigma_e^2)}{n_{\mathcal{L}}Np}\right)$, $\hat\zeta_{\varkappa} \sim N\!\left(\zeta_{\varkappa},\frac{(p+1)(\sigma_\varsigma^2+\sigma_e^2)}{Np}\right)$, and $\widehat{(\xi\zeta)}_{\mathcal{L}\varkappa} \sim N\!\left((\xi\zeta)_{\mathcal{L}\varkappa},\frac{(Np+N+n_{\mathcal{L}}p+n_{\mathcal{L}})(\sigma_\varsigma^2+\sigma_e^2)}{n_{\mathcal{L}}Np}\right)$.
5- The likelihood-ratio tests of the parameters $\vartheta$, $\xi_{\mathcal{L}}$, $\zeta_{\varkappa}$, and $(\xi\zeta)_{\mathcal{L}\varkappa}$ for the repeated measurements model are obtained.
6- The maximum likelihood estimate of $\theta$ has an approximate $N\big(\theta,\mathcal{J}^{\theta\theta}(\hat\theta)\big)$ distribution.
7- The first order approximations for the significance function are $p_1(\theta) = \Phi(r)$ and $p_2(\theta) = \Phi(q)$.
References
[1] AL-Mouel, A. S., "Multivariate Repeated Measures Models and Comparison of Estimators," Ph.D. dissertation, East China Normal University, China, 2004.
[2] AL-Mouel, A. S. and Abd-Ali, A. A., (2020) "On One-Way Repeated Measurements Model with Practical Application", Journal of Global Scientific Research, Vol. 5, Issue 12.
[3] AL-Mouel, A. S. and Ali, F. H., (2021) "Penalized Likelihood Estimator for Variance Components in Repeated Measurements Model", Basrah Journal of Science, Vol. 39(2), 192-203.
[4] Hand, D. J. and Crowder, M. J., Analysis of Repeated Measures, London, 1993.
[5] Mohaisen, A. J. and Swadi, K. A. R., (2014) "Bayesian One-Way Repeated Measurements Model Based on Markov Chain Monte Carlo," Indian Journal of Applied Research, Vol. 4, No. 10.
[6] Patterson, H. D. and Thompson, R., "Recovery of inter-block information when block sizes are unequal," Biometrika, 1971.
[7] Verbeke, G. and Molenberghs, G., Linear Mixed Models for Longitudinal Data, New York: Springer, 2000.

research on journaling

  • 1.
    UtilitasMathematica ISSN 0315-3681 Volume120, 2023 163 Asymptotic Method for One-Way Repeated Measurements Model for Unbalanced data Sarah Abbas Abdul-Zahra1, Abdul Hussein Saber AL-Mouel2 1 Mathematics Department, College of Education for Pure Science Basrah University-Iraq, sarhabbasabulzhra@gmail.com 2 Mathematics Department, College of Education for Pure Science Basrah University-Iraq, abdulhusseinsaber@yahoo.com Abstract In this paper, we consider the one-way repeated measurements model for unbalanced data, and discuss method to estimate the parameter of the model, also introduce some properties of estimators, and the distribution of them.We will identify the analytic form of the likelihood-ratio test for the model, and obtaining asymptotic statements from the likelihood function. Keywords: One-Way Repeated Measurements Model, Maximum Likelihood Method; Restricted Maximum Likelihood Method, Likelihood-Ratio test, Asymptotic. 1. Introduction Repeated measurements analysis is extensively utilized in a variety of sectors, including epidemiology, biomedical research, health and life sciences, and others. There is a lot of literature on univariate repeated measurements analysis of variance[4][7]. The expression "repeated measurements" means information collected when the response variable of each experimental unit is measured repeatedly, possibly under different experimental conditions. [5]. Data comes in two varieties: balanced data and unbalanced data. In a balance RM design, all of the experimental units' measurement occasions are the same, however in an unbalance data design, some of the experimental units' measurement times are different [6]. Several studies and research projects are interested in examining repeated measures models, particularly in those models' analysis of variance and model parameter estimation. 
Following are some recent research on estimators and analysis of variance for repeated measurement models that we will quickly cover in this historical review: Al-Mouel (2004)[1] examined the multivariate repeated measures models, compared estimators, and discussed the results. AL-Mouel and Ali in(2021)[3],they study the variance components of the model and use the maximum likelihood method to estimate it. In their study on the estimation of variance components for the repeated measurements model published in (2020)[2], AL-Mouel and Abd-Ali.
In this paper we estimate the parameters of our model by the maximum likelihood method and the restricted maximum likelihood method, and we obtain asymptotic statements from the likelihood function.

2. One-Way Repeated Measurements Model

We consider the one-way repeated measurements model for unbalanced data (where the number of observations is unequal across levels). A mathematical expression for the model is presented and the probability distributions of its components are discussed. The model parameters are then estimated by various methods and some properties of the estimators are given.

2.1 The Mathematical Model

Consider the one-way repeated measurements model for unbalanced data,

$$\upsilon_{\ell\mathcal{L}\varkappa} = \vartheta + \xi_{\mathcal{L}} + \zeta_{\varkappa} + (\xi\zeta)_{\mathcal{L}\varkappa} + \varsigma_{\ell(\mathcal{L})} + e_{\ell\mathcal{L}\varkappa}, \qquad (2.1)$$

where $\upsilon_{\ell\mathcal{L}\varkappa}$ is the response measurement at time $\varkappa$ for unit $\ell$ within group $\mathcal{L}$; $\ell = 1,2,\ldots,n_{\mathcal{L}}$ indexes the experimental units within group $\mathcal{L}$; $\mathcal{L} = 1,2,\ldots,q$ indexes the levels of the between-units factor (group); $\varkappa = 1,2,\ldots,p$ indexes the levels of the within-units factor (time); $\vartheta$ is the overall mean; $\xi_{\mathcal{L}}$ is the added effect of treatment group $\mathcal{L}$; $\zeta_{\varkappa}$ is the added effect of time $\varkappa$; $(\xi\zeta)_{\mathcal{L}\varkappa}$ is the added group $\mathcal{L}$ $\times$ time $\varkappa$ interaction effect; $\varsigma_{\ell(\mathcal{L})}$ is the random effect of experimental unit $\ell$ within treatment group $\mathcal{L}$; and $e_{\ell\mathcal{L}\varkappa}$ is the random error at time $\varkappa$ for unit $\ell$ within group $\mathcal{L}$. The added effects satisfy the side conditions

$$\sum_{\mathcal{L}=1}^{q}\xi_{\mathcal{L}} = 0,\qquad \sum_{\varkappa=1}^{p}\zeta_{\varkappa} = 0,\qquad \sum_{\mathcal{L}=1}^{q}(\xi\zeta)_{\mathcal{L}\varkappa} = 0 \ \ \forall\,\varkappa = 1,2,\ldots,p,\qquad \sum_{\varkappa=1}^{p}(\xi\zeta)_{\mathcal{L}\varkappa} = 0 \ \ \forall\,\mathcal{L} = 1,2,\ldots,q,$$
$$\text{with } N = \sum_{\mathcal{L}} n_{\mathcal{L}} \ \text{ and } \ \sum_{\mathcal{L}=1}^{q} n_{\mathcal{L}}\xi_{\mathcal{L}} = 0. \qquad (2.2)$$

The $\varsigma_{\ell(\mathcal{L})}$ and $e_{\ell\mathcal{L}\varkappa}$ are assumed to be independent and identically distributed with
$$e_{\ell\mathcal{L}\varkappa} \overset{i.i.d.}{\sim} N(0,\sigma_e^2), \qquad \varsigma_{\ell(\mathcal{L})} \overset{i.i.d.}{\sim} N(0,\sigma_\varsigma^2). \qquad (2.3)$$

The sum-of-squares terms are

$$SS_\xi = p\sum_{\mathcal{L}=1}^{q} n_{\mathcal{L}}(\bar\upsilon_{.\mathcal{L}.}-\bar\upsilon_{...})^2,\qquad SS_\zeta = N\sum_{\varkappa=1}^{p}(\bar\upsilon_{..\varkappa}-\bar\upsilon_{...})^2,\qquad SS_\varsigma = p\sum_{\mathcal{L}=1}^{q}\sum_{\ell=1}^{n_\mathcal{L}}(\bar\upsilon_{\ell\mathcal{L}.}-\bar\upsilon_{.\mathcal{L}.})^2,$$
$$SS_{\xi\zeta} = \sum_{\mathcal{L}=1}^{q}\sum_{\varkappa=1}^{p} n_{\mathcal{L}}(\bar\upsilon_{.\mathcal{L}\varkappa}-\bar\upsilon_{.\mathcal{L}.}-\bar\upsilon_{..\varkappa}+\bar\upsilon_{...})^2,\qquad SS_e = \sum_{\mathcal{L}=1}^{q}\sum_{\ell=1}^{n_\mathcal{L}}\sum_{\varkappa=1}^{p}(\upsilon_{\ell\mathcal{L}\varkappa}-\bar\upsilon_{.\mathcal{L}\varkappa}-\bar\upsilon_{\ell\mathcal{L}.}+\bar\upsilon_{.\mathcal{L}.})^2, \qquad (2.4)$$

where

$$\bar\upsilon_{...}=\frac{1}{Np}\sum_{\mathcal{L}=1}^{q}\sum_{\ell=1}^{n_\mathcal{L}}\sum_{\varkappa=1}^{p}\upsilon_{\ell\mathcal{L}\varkappa} \ \text{(overall mean)},\qquad \bar\upsilon_{.\mathcal{L}.}=\frac{1}{n_\mathcal{L}p}\sum_{\ell=1}^{n_\mathcal{L}}\sum_{\varkappa=1}^{p}\upsilon_{\ell\mathcal{L}\varkappa} \ \text{(mean of group } \mathcal{L}\text{)},$$
$$\bar\upsilon_{.\mathcal{L}\varkappa}=\frac{1}{n_\mathcal{L}}\sum_{\ell=1}^{n_\mathcal{L}}\upsilon_{\ell\mathcal{L}\varkappa} \ \text{(mean of group } \mathcal{L} \text{ at time } \varkappa\text{)},\qquad \bar\upsilon_{..\varkappa}=\frac{1}{N}\sum_{\mathcal{L}=1}^{q}\sum_{\ell=1}^{n_\mathcal{L}}\upsilon_{\ell\mathcal{L}\varkappa} \ \text{(mean at time } \varkappa\text{)},$$
$$\bar\upsilon_{\ell\mathcal{L}.}=\frac{1}{p}\sum_{\varkappa=1}^{p}\upsilon_{\ell\mathcal{L}\varkappa} \ \text{(mean of subject } \ell \text{ in group } \mathcal{L}\text{)}. \qquad (2.5)$$

2.2 Estimation

Various methods are available for estimating the model parameters from unbalanced data; we discuss some of these methods and their properties.

2.2.1 Maximum Likelihood Method (ML)

The joint density function of $\upsilon_{\ell\mathcal{L}\varkappa}$ has the form

$$f\big(\upsilon;\vartheta,\xi_\mathcal{L},\zeta_\varkappa,(\xi\zeta)_{\mathcal{L}\varkappa},\varsigma_{\ell(\mathcal{L})},\sigma_e^2,\sigma_\varsigma^2\big)=\big(2\pi(\sigma_e^2+\sigma_\varsigma^2)\big)^{-Np/2}\exp\!\left[\frac{-(\upsilon_{\ell\mathcal{L}\varkappa}-\vartheta-\xi_\mathcal{L}-\zeta_\varkappa-(\xi\zeta)_{\mathcal{L}\varkappa})^2}{2(\sigma_e^2+\sigma_\varsigma^2)}\right],$$

and the likelihood function of model (2.1) is

$$L\big(\vartheta,\xi_\mathcal{L},\zeta_\varkappa,(\xi\zeta)_{\mathcal{L}\varkappa},\varsigma_{\ell(\mathcal{L})},\sigma_e^2,\sigma_\varsigma^2\big)\propto\big(2\pi(\sigma_e^2+\sigma_\varsigma^2)\big)^{-Np/2}\exp\!\left[\frac{-\sum_{\mathcal{L},\ell,\varkappa}(\upsilon_{\ell\mathcal{L}\varkappa}-\vartheta-\xi_\mathcal{L}-\zeta_\varkappa-(\xi\zeta)_{\mathcal{L}\varkappa})^2}{2(\sigma_e^2+\sigma_\varsigma^2)}\right].$$

Taking the logarithm of both sides gives

$$\ln L \propto -\frac{Np}{2}\ln(2\pi)-\frac{Np}{2}\ln(\sigma_e^2+\sigma_\varsigma^2)-\frac{\sum_{\mathcal{L},\ell,\varkappa}(\upsilon_{\ell\mathcal{L}\varkappa}-\vartheta-\xi_\mathcal{L}-\zeta_\varkappa-(\xi\zeta)_{\mathcal{L}\varkappa})^2}{2(\sigma_e^2+\sigma_\varsigma^2)}. \qquad (2.6)$$
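As a numerical companion to the model (2.1), the side conditions (2.2), and the summary quantities (2.4)-(2.5), the following sketch simulates an unbalanced data set and computes the sums of squares. The list-of-arrays layout, the group sizes, the effect values, and all function names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Illustrative unbalanced design: q = 3 groups of unequal sizes n_L,
# each unit observed at all p = 4 time points, as in model (2.1).
n = [4, 6, 5]
q, p, N = len(n), 4, sum(n)

vartheta = 10.0                            # overall mean
xi = np.array([1.0, 1.0, -2.0])            # group effects: sum xi_L = 0 and sum n_L xi_L = 0
zeta = np.array([0.6, 0.2, -0.3, -0.5])    # time effects: sum zeta_k = 0
xz = np.zeros((q, p))                      # interaction effects, here zero
sigma_s, sigma_e = 1.0, 0.5                # sd of the unit effect and of the error

# y[L] is the (n_L x p) array of responses for group L
y = [vartheta + xi[L] + zeta + xz[L]
     + rng.normal(0.0, sigma_s, (n[L], 1))    # varsigma_{l(L)}, constant over time
     + rng.normal(0.0, sigma_e, (n[L], p))    # e_{lLk}
     for L in range(q)]

def rm_sums_of_squares(y):
    """Means (2.5) and sums of squares (2.4) for a list of (n_L x p) groups."""
    n = [g.shape[0] for g in y]
    p, N = y[0].shape[1], sum(n)
    allobs = np.concatenate(y)
    grand = allobs.mean()                       # overall mean
    group = np.array([g.mean() for g in y])     # group means
    time = allobs.mean(axis=0)                  # time means
    cell = [g.mean(axis=0) for g in y]          # group-by-time cell means
    SS_xi = p * sum(nL * (gm - grand) ** 2 for nL, gm in zip(n, group))
    SS_zeta = N * ((time - grand) ** 2).sum()
    SS_s = p * sum(((g.mean(axis=1) - gm) ** 2).sum() for g, gm in zip(y, group))
    SS_xz = sum(nL * ((c - gm - time + grand) ** 2).sum()
                for nL, c, gm in zip(n, cell, group))
    SS_e = sum(((g - c - g.mean(axis=1, keepdims=True) + gm) ** 2).sum()
               for g, c, gm in zip(y, cell, group))
    return grand, dict(SS_xi=SS_xi, SS_zeta=SS_zeta, SS_s=SS_s,
                       SS_xz=SS_xz, SS_e=SS_e)
```

Setting `sigma_s = sigma_e = 0` yields purely additive data, on which `SS_s` and `SS_e` vanish and the remaining sums of squares reflect only the chosen effects.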
Differentiating (2.6) with respect to $\vartheta$ and setting the derivative equal to zero gives

$$\frac{\partial\ln L}{\partial\vartheta}=\frac{\sum_{\mathcal{L},\ell,\varkappa}(\upsilon_{\ell\mathcal{L}\varkappa}-\vartheta-\xi_\mathcal{L}-\zeta_\varkappa-(\xi\zeta)_{\mathcal{L}\varkappa})}{\sigma_e^2+\sigma_\varsigma^2}=0,$$

and, since the added-effect sums vanish by (2.2),

$$\sum_{\mathcal{L},\ell,\varkappa}\upsilon_{\ell\mathcal{L}\varkappa}-Np\,\hat\vartheta=0 \;\Longrightarrow\; \hat\vartheta=\frac{1}{Np}\sum_{\mathcal{L},\ell,\varkappa}\upsilon_{\ell\mathcal{L}\varkappa}=\bar\upsilon_{...}, \qquad(2.7)$$

with

$$\frac{\partial^2\ln L}{\partial\vartheta^2}=-\frac{Np}{\sigma_e^2+\sigma_\varsigma^2}. \qquad(2.8)$$

Differentiating (2.6) with respect to $\xi_\mathcal{L}$ and setting the derivative equal to zero gives

$$\frac{\partial\ln L}{\partial\xi_\mathcal{L}}=\frac{\sum_{\ell=1}^{n_\mathcal{L}}\sum_{\varkappa=1}^{p}(\upsilon_{\ell\mathcal{L}\varkappa}-\vartheta-\xi_\mathcal{L}-\zeta_\varkappa-(\xi\zeta)_{\mathcal{L}\varkappa})}{\sigma_e^2+\sigma_\varsigma^2}=0,$$

$$\sum_{\ell}\sum_{\varkappa}\upsilon_{\ell\mathcal{L}\varkappa}-n_\mathcal{L}p\,\hat\vartheta-n_\mathcal{L}p\,\hat\xi_\mathcal{L}=0 \;\Longrightarrow\; \hat\xi_\mathcal{L}=\frac{1}{n_\mathcal{L}p}\sum_{\ell}\sum_{\varkappa}\upsilon_{\ell\mathcal{L}\varkappa}-\hat\vartheta=\bar\upsilon_{.\mathcal{L}.}-\bar\upsilon_{...}, \qquad(2.9)$$

with

$$\frac{\partial^2\ln L}{\partial\xi_\mathcal{L}^2}=-\frac{n_\mathcal{L}pq}{\sigma_e^2+\sigma_\varsigma^2}. \qquad(2.10)$$

Differentiating (2.6) with respect to $\zeta_\varkappa$ and setting the derivative equal to zero gives

$$\frac{\partial\ln L}{\partial\zeta_\varkappa}=\frac{\sum_{\mathcal{L}=1}^{q}\sum_{\ell=1}^{n_\mathcal{L}}(\upsilon_{\ell\mathcal{L}\varkappa}-\vartheta-\xi_\mathcal{L}-\zeta_\varkappa-(\xi\zeta)_{\mathcal{L}\varkappa})}{\sigma_e^2+\sigma_\varsigma^2}=0,$$

$$\sum_{\mathcal{L}}\sum_{\ell}\upsilon_{\ell\mathcal{L}\varkappa}-N\hat\vartheta-N\hat\zeta_\varkappa=0 \;\Longrightarrow\; \hat\zeta_\varkappa=\frac1N\sum_{\mathcal{L}}\sum_{\ell}\upsilon_{\ell\mathcal{L}\varkappa}-\hat\vartheta=\bar\upsilon_{..\varkappa}-\bar\upsilon_{...}, \qquad(2.11)$$

with

$$\frac{\partial^2\ln L}{\partial\zeta_\varkappa^2}=-\frac{pN}{\sigma_e^2+\sigma_\varsigma^2}. \qquad(2.12)$$

Differentiating (2.6) with respect to $(\xi\zeta)_{\mathcal{L}\varkappa}$ and setting the derivative equal to zero:
$$\frac{\partial\ln L}{\partial(\xi\zeta)_{\mathcal{L}\varkappa}}=\frac{\sum_{\ell=1}^{n_\mathcal{L}}(\upsilon_{\ell\mathcal{L}\varkappa}-\vartheta-\xi_\mathcal{L}-\zeta_\varkappa-(\xi\zeta)_{\mathcal{L}\varkappa})}{\sigma_e^2+\sigma_\varsigma^2}=0,$$

$$\sum_{\ell}\upsilon_{\ell\mathcal{L}\varkappa}-n_\mathcal{L}\hat\vartheta-n_\mathcal{L}\hat\xi_\mathcal{L}-n_\mathcal{L}\hat\zeta_\varkappa-n_\mathcal{L}\widehat{(\xi\zeta)}_{\mathcal{L}\varkappa}=0 \;\Longrightarrow\; \widehat{(\xi\zeta)}_{\mathcal{L}\varkappa}=\bar\upsilon_{.\mathcal{L}\varkappa}-\bar\upsilon_{.\mathcal{L}.}-\bar\upsilon_{..\varkappa}+\bar\upsilon_{...}, \qquad(2.13)$$

with

$$\frac{\partial^2\ln L}{\partial(\xi\zeta)_{\mathcal{L}\varkappa}^2}=-\frac{qpn_\mathcal{L}}{\sigma_e^2+\sigma_\varsigma^2}. \qquad(2.14)$$

Since $\hat\sigma_e^2 = SS_e/\big((p-1)(N-q)\big)$, we have

$$\hat\sigma_e^2=\frac{\sum_{\mathcal{L},\ell,\varkappa}(\upsilon_{\ell\mathcal{L}\varkappa}-\bar\upsilon_{.\mathcal{L}\varkappa}-\bar\upsilon_{\ell\mathcal{L}.}+\bar\upsilon_{.\mathcal{L}.})^2}{(p-1)(N-q)}=MS_e. \qquad(2.15)$$

Differentiating (2.6) with respect to $\sigma_e^2$ and setting the derivative equal to zero,

$$\frac{\partial\ln L}{\partial\sigma_e^2}=-\frac{Np}{2(\sigma_\varsigma^2+\sigma_e^2)}+\frac{\sum_{\mathcal{L},\ell,\varkappa}(\upsilon_{\ell\mathcal{L}\varkappa}-\vartheta-\xi_\mathcal{L}-\zeta_\varkappa-(\xi\zeta)_{\mathcal{L}\varkappa})^2}{2(\sigma_\varsigma^2+\sigma_e^2)^2}=0,$$

so that $-Np(\hat\sigma_\varsigma^2+\hat\sigma_e^2)+\sum_{\mathcal{L},\ell,\varkappa}\big(\upsilon_{\ell\mathcal{L}\varkappa}-\hat\vartheta-\hat\xi_\mathcal{L}-\hat\zeta_\varkappa-\widehat{(\xi\zeta)}_{\mathcal{L}\varkappa}\big)^2=0$, and

$$\frac{\partial^2\ln L}{\partial(\sigma_e^2)^2}=\frac{Np}{2(\sigma_\varsigma^2+\sigma_e^2)^2}-\frac{\sum_{\mathcal{L},\ell,\varkappa}(\upsilon_{\ell\mathcal{L}\varkappa}-\vartheta-\xi_\mathcal{L}-\zeta_\varkappa-(\xi\zeta)_{\mathcal{L}\varkappa})^2}{(\sigma_\varsigma^2+\sigma_e^2)^3}. \qquad(2.16)$$

Differentiating (2.6) with respect to $\sigma_\varsigma^2$ and setting the derivative equal to zero yields the same equation, whence

$$\hat\sigma_\varsigma^2=\frac{\sum_{\mathcal{L},\ell,\varkappa}\big(\upsilon_{\ell\mathcal{L}\varkappa}-\hat\vartheta-\hat\xi_\mathcal{L}-\hat\zeta_\varkappa-\widehat{(\xi\zeta)}_{\mathcal{L}\varkappa}\big)^2}{Np}-\hat\sigma_e^2=\frac{SS_e+SS_\varsigma}{Np}-\frac{SS_e}{(p-1)(N-q)}=\frac{SS_\varsigma}{Np}+\frac{\big((p-1)(N-q)-Np\big)SS_e}{Np(p-1)(N-q)},$$

that is,

$$\hat\sigma_\varsigma^2=\frac{N-q}{Np}\,MS_\varsigma+\frac{(p-1)(N-q)-Np}{Np}\,MS_e, \qquad(2.17)$$

and

$$\frac{\partial^2\ln L}{\partial(\sigma_\varsigma^2)^2}=\frac{Np}{2(\sigma_\varsigma^2+\sigma_e^2)^2}-\frac{\sum_{\mathcal{L},\ell,\varkappa}(\upsilon_{\ell\mathcal{L}\varkappa}-\vartheta-\xi_\mathcal{L}-\zeta_\varkappa-(\xi\zeta)_{\mathcal{L}\varkappa})^2}{(\sigma_\varsigma^2+\sigma_e^2)^3}. \qquad(2.18)$$

The next result is obtained by differentiating (2.6) with respect to the sum $\sigma_\varsigma^2+\sigma_e^2$ and setting the derivative to zero.
$$\frac{\partial\ln L}{\partial(\sigma_\varsigma^2+\sigma_e^2)}=-\frac{Np}{2(\sigma_\varsigma^2+\sigma_e^2)}+\frac{\sum_{\mathcal{L},\ell,\varkappa}(\upsilon_{\ell\mathcal{L}\varkappa}-\vartheta-\xi_\mathcal{L}-\zeta_\varkappa-(\xi\zeta)_{\mathcal{L}\varkappa})^2}{2(\sigma_\varsigma^2+\sigma_e^2)^2}=0,$$

and substituting the fixed-effect estimators gives

$$\hat\sigma_\varsigma^2+\hat\sigma_e^2=\frac{\sum_{\mathcal{L},\ell,\varkappa}(\upsilon_{\ell\mathcal{L}\varkappa}-\bar\upsilon_{.\mathcal{L}\varkappa})^2}{Np}. \qquad(2.19)$$

We now collect the second-derivative (information) quantities,

$$\mathcal{J}=-\left.\frac{\partial^2\ln L}{\partial\theta^2}\right|_{\theta=\hat\theta},\qquad \theta\in\{\vartheta,\xi_\mathcal{L},\zeta_\varkappa,(\xi\zeta)_{\mathcal{L}\varkappa},\sigma_\varsigma^2,\sigma_e^2\},$$

namely

$$\mathcal{J}_1=\frac{Np}{\sigma_e^2+\sigma_\varsigma^2},\qquad \mathcal{J}_2=\frac{n_\mathcal{L}pq}{\sigma_e^2+\sigma_\varsigma^2},\qquad \mathcal{J}_3=\frac{pN}{\sigma_e^2+\sigma_\varsigma^2},\qquad \mathcal{J}_4=\frac{qpn_\mathcal{L}}{\sigma_e^2+\sigma_\varsigma^2},$$
$$\mathcal{J}_5=\mathcal{J}_6=\left.\left[\frac{\sum_{\mathcal{L},\ell,\varkappa}(\upsilon_{\ell\mathcal{L}\varkappa}-\vartheta-\xi_\mathcal{L}-\zeta_\varkappa-(\xi\zeta)_{\mathcal{L}\varkappa})^2}{(\sigma_\varsigma^2+\sigma_e^2)^3}-\frac{Np}{2(\sigma_\varsigma^2+\sigma_e^2)^2}\right]\right|_{\theta=\hat\theta}, \qquad(2.20)$$

and the scores $S(\theta)=\partial\ln L/\partial\theta$,

$$S(\vartheta)=\frac{\sum_{\mathcal{L},\ell,\varkappa}\upsilon_{\ell\mathcal{L}\varkappa}-Np\,\hat\vartheta}{\sigma_e^2+\sigma_\varsigma^2},\qquad S(\xi_\mathcal{L})=\frac{\sum_{\ell}\sum_{\varkappa}\upsilon_{\ell\mathcal{L}\varkappa}-n_\mathcal{L}p\,\hat\vartheta-n_\mathcal{L}p\,\hat\xi_\mathcal{L}}{\sigma_e^2+\sigma_\varsigma^2},$$
$$S(\zeta_\varkappa)=\frac{p\big(\sum_{\mathcal{L}}\sum_{\ell}\upsilon_{\ell\mathcal{L}\varkappa}-N\hat\vartheta-N\hat\zeta_\varkappa\big)}{\sigma_e^2+\sigma_\varsigma^2},\qquad S\big((\xi\zeta)_{\mathcal{L}\varkappa}\big)=\frac{qp\big(\sum_{\ell}\upsilon_{\ell\mathcal{L}\varkappa}-n_\mathcal{L}\hat\vartheta-n_\mathcal{L}\hat\xi_\mathcal{L}-n_\mathcal{L}\hat\zeta_\varkappa-n_\mathcal{L}\widehat{(\xi\zeta)}_{\mathcal{L}\varkappa}\big)}{\sigma_e^2+\sigma_\varsigma^2},$$
$$S(\sigma_\varsigma^2)=S(\sigma_e^2)=-\frac{Np}{2(\sigma_\varsigma^2+\sigma_e^2)}+\frac{\sum_{\mathcal{L},\ell,\varkappa}(\upsilon_{\ell\mathcal{L}\varkappa}-\vartheta-\xi_\mathcal{L}-\zeta_\varkappa-(\xi\zeta)_{\mathcal{L}\varkappa})^2}{2(\sigma_\varsigma^2+\sigma_e^2)^2}. \qquad(2.21)$$

2.2.2 Restricted Maximum Likelihood Method (REML)

The restricted maximum likelihood estimators of $\vartheta,\xi_\mathcal{L},\zeta_\varkappa,(\xi\zeta)_{\mathcal{L}\varkappa},\varsigma_{\ell(\mathcal{L})},\sigma_e^2$, and $\sigma_\varsigma^2$ are the parameter values that maximize the function

$$L\propto(\sigma_e^2)^{-\frac{(p-1)(N-q)}{2}}(\sigma_e^2+p\sigma_\varsigma^2)^{-\frac{N-q}{2}}\exp\Big\{-\tfrac12\Big[\frac{SS_e}{\sigma_e^2}+\frac{SS_\varsigma}{\sigma_e^2+p\sigma_\varsigma^2}+\frac{Np(\bar\upsilon_{...}-\vartheta)^2}{\sigma_e^2+p\sigma_\varsigma^2}+\frac{Np(\bar\upsilon_{.\mathcal{L}.}-\bar\upsilon_{...}-\xi_\mathcal{L})^2}{\sigma_e^2+p\sigma_\varsigma^2}$$
$$\qquad\qquad+\frac{Np(\bar\upsilon_{..\varkappa}-\bar\upsilon_{...}-\zeta_\varkappa)^2}{\sigma_e^2+p\sigma_\varsigma^2}+\frac{Np(\bar\upsilon_{.\mathcal{L}\varkappa}+\bar\upsilon_{...}-\bar\upsilon_{.\mathcal{L}.}-\bar\upsilon_{..\varkappa}-(\xi\zeta)_{\mathcal{L}\varkappa})^2}{\sigma_e^2+p\sigma_\varsigma^2}\Big]\Big\}. \qquad(2.2.1)$$
The likelihood splits into a part depending on the means and a part depending on the sums of squares,

$$L\big(\vartheta,\xi_\mathcal{L},\zeta_\varkappa,(\xi\zeta)_{\mathcal{L}\varkappa},\varsigma_{\ell(\mathcal{L})},\sigma_e^2,\sigma_\varsigma^2\big)=L_1\big(\bar\upsilon_{...};\vartheta,\xi_\mathcal{L},\zeta_\varkappa,(\xi\zeta)_{\mathcal{L}\varkappa},\varsigma_{\ell(\mathcal{L})}\big)\,L_2\big(SS_e,SS_\varsigma;\sigma_e^2,\sigma_\varsigma^2\big), \qquad(2.2.2)$$

where

$$L_1=(\sigma_e^2+p\sigma_\varsigma^2)^{-\frac{N-q}{2}}\exp\Big\{-\tfrac12\Big[\frac{Np(\bar\upsilon_{...}-\vartheta)^2}{\sigma_e^2+p\sigma_\varsigma^2}+\frac{Np(\bar\upsilon_{.\mathcal{L}.}-\bar\upsilon_{...}-\xi_\mathcal{L})^2}{\sigma_e^2+p\sigma_\varsigma^2}+\frac{Np(\bar\upsilon_{..\varkappa}-\bar\upsilon_{...}-\zeta_\varkappa)^2}{\sigma_e^2+p\sigma_\varsigma^2}+\frac{Np(\bar\upsilon_{.\mathcal{L}\varkappa}+\bar\upsilon_{...}-\bar\upsilon_{.\mathcal{L}.}-\bar\upsilon_{..\varkappa}-(\xi\zeta)_{\mathcal{L}\varkappa})^2}{\sigma_e^2+p\sigma_\varsigma^2}\Big]\Big\} \qquad(2.2.3)$$

and

$$L_2=(\sigma_e^2)^{-\frac{(p-1)(N-q)}{2}}(\sigma_e^2+p\sigma_\varsigma^2)^{-\frac{N-q}{2}}\exp\Big\{-\tfrac12\Big[\frac{SS_e}{\sigma_e^2}+\frac{SS_\varsigma}{\sigma_e^2+p\sigma_\varsigma^2}\Big]\Big\}. \qquad(2.2.4)$$

Taking the logarithm of (2.2.3),

$$\ln L_1=-\frac{N-q}{2}\ln(\sigma_e^2+p\sigma_\varsigma^2)-\frac12\Big[\frac{Np(\bar\upsilon_{...}-\vartheta)^2}{\sigma_e^2+p\sigma_\varsigma^2}+\frac{Np(\bar\upsilon_{.\mathcal{L}.}-\bar\upsilon_{...}-\xi_\mathcal{L})^2}{\sigma_e^2+p\sigma_\varsigma^2}+\frac{Np(\bar\upsilon_{..\varkappa}-\bar\upsilon_{...}-\zeta_\varkappa)^2}{\sigma_e^2+p\sigma_\varsigma^2}+\frac{Np(\bar\upsilon_{.\mathcal{L}\varkappa}+\bar\upsilon_{...}-\bar\upsilon_{.\mathcal{L}.}-\bar\upsilon_{..\varkappa}-(\xi\zeta)_{\mathcal{L}\varkappa})^2}{\sigma_e^2+p\sigma_\varsigma^2}\Big]. \qquad(2.2.5)$$

Differentiating (2.2.5) with respect to each parameter and setting the derivatives to zero gives

$$\frac{\partial\ln L_1}{\partial\vartheta}=\frac{Np(\bar\upsilon_{...}-\vartheta)}{\sigma_e^2+p\sigma_\varsigma^2}=0 \;\Rightarrow\; \hat\vartheta=\bar\upsilon_{...}, \qquad(2.2.6)$$
$$\frac{\partial\ln L_1}{\partial\xi_\mathcal{L}}=\frac{Np(\bar\upsilon_{.\mathcal{L}.}-\bar\upsilon_{...}-\xi_\mathcal{L})}{\sigma_e^2+p\sigma_\varsigma^2}=0 \;\Rightarrow\; \hat\xi_\mathcal{L}=\bar\upsilon_{.\mathcal{L}.}-\bar\upsilon_{...}, \qquad(2.2.7)$$
$$\frac{\partial\ln L_1}{\partial\zeta_\varkappa}=\frac{Np(\bar\upsilon_{..\varkappa}-\bar\upsilon_{...}-\zeta_\varkappa)}{\sigma_e^2+p\sigma_\varsigma^2}=0 \;\Rightarrow\; \hat\zeta_\varkappa=\bar\upsilon_{..\varkappa}-\bar\upsilon_{...}, \qquad(2.2.8)$$
$$\frac{\partial\ln L_1}{\partial(\xi\zeta)_{\mathcal{L}\varkappa}}=\frac{Np(\bar\upsilon_{.\mathcal{L}\varkappa}+\bar\upsilon_{...}-\bar\upsilon_{.\mathcal{L}.}-\bar\upsilon_{..\varkappa}-(\xi\zeta)_{\mathcal{L}\varkappa})}{\sigma_e^2+p\sigma_\varsigma^2}=0 \;\Rightarrow\; \widehat{(\xi\zeta)}_{\mathcal{L}\varkappa}=\bar\upsilon_{.\mathcal{L}\varkappa}+\bar\upsilon_{...}-\bar\upsilon_{.\mathcal{L}.}-\bar\upsilon_{..\varkappa}. \qquad(2.2.9)$$

Similarly,

$$\ln L_2=-\frac{(p-1)(N-q)}{2}\ln(\sigma_e^2)-\frac{N-q}{2}\ln(\sigma_e^2+p\sigma_\varsigma^2)-\frac12\Big[\frac{SS_e}{\sigma_e^2}+\frac{SS_\varsigma}{\sigma_e^2+p\sigma_\varsigma^2}\Big]. \qquad(2.2.10)$$

Let $\gamma=\sigma_e^2+p\sigma_\varsigma^2$; then

$$\ln L_2(SS_e,SS_\varsigma;\gamma)=-\frac{(p-1)(N-q)}{2}\ln(\sigma_e^2)-\frac{N-q}{2}\ln\gamma-\frac12\Big[\frac{SS_e}{\sigma_e^2}+\frac{SS_\varsigma}{\gamma}\Big], \qquad(2.2.11)$$
subject to the restriction $\gamma\geq\sigma_e^2>0$. Differentiating (2.2.11) and setting the derivatives to zero gives

$$\frac{\partial\ln L_2}{\partial\sigma_e^2}=-\frac{(p-1)(N-q)}{2\sigma_e^2}+\frac{SS_e}{2\sigma_e^4}=0 \;\Rightarrow\; \hat\sigma_e^2=\frac{SS_e}{(p-1)(N-q)}=MS_e, \qquad(2.2.12)$$

$$\frac{\partial\ln L_2}{\partial\gamma}=-\frac{N-q}{2\gamma}+\frac{SS_\varsigma}{2\gamma^2}=0 \;\Rightarrow\; \hat\gamma=\frac{SS_\varsigma}{N-q},$$

so $\hat\sigma_e^2+p\hat\sigma_\varsigma^2=SS_\varsigma/(N-q)$ and

$$\hat\sigma_\varsigma^2=\frac1p\Big[\frac{SS_\varsigma}{N-q}-\frac{SS_e}{(p-1)(N-q)}\Big]=\frac1p\big[MS_\varsigma-MS_e\big]. \qquad(2.2.13)$$

2.3 Important Properties of the Estimators

We state several properties of the estimators as theorems.

Theorem 2.3.1. The estimators of $\vartheta,\xi_\mathcal{L},\zeta_\varkappa,(\xi\zeta)_{\mathcal{L}\varkappa}$ in our model are unbiased.

Proof. From equations (2.2), (2.5) and (2.7),

$$E(\hat\vartheta)=\frac{1}{Np}E\Big(\sum_{\mathcal{L},\ell,\varkappa}\upsilon_{\ell\mathcal{L}\varkappa}\Big)=\frac{1}{Np}E\Big(\sum_{\mathcal{L},\ell,\varkappa}\big(\vartheta+\xi_\mathcal{L}+\zeta_\varkappa+(\xi\zeta)_{\mathcal{L}\varkappa}+\varsigma_{\ell(\mathcal{L})}+e_{\ell\mathcal{L}\varkappa}\big)\Big)=\frac{1}{Np}(Np\,\vartheta)=\vartheta,$$

so $\hat\vartheta$ is an unbiased estimator of $\vartheta$. From equations (2.2), (2.5) and (2.9),

$$E(\hat\xi_\mathcal{L})=\frac{1}{n_\mathcal{L}p}E\Big(\sum_{\ell}\sum_{\varkappa}\upsilon_{\ell\mathcal{L}\varkappa}\Big)-\vartheta=\frac{1}{n_\mathcal{L}p}\big(n_\mathcal{L}p\,\vartheta+n_\mathcal{L}p\,\xi_\mathcal{L}\big)-\vartheta=\xi_\mathcal{L},$$

so $\hat\xi_\mathcal{L}$ is an unbiased estimator of $\xi_\mathcal{L}$. Now, from equations (2.2), (2.5) and (2.11),

$$E(\hat\zeta_\varkappa)=\frac1N E\Big(\sum_{\mathcal{L}}\sum_{\ell}\upsilon_{\ell\mathcal{L}\varkappa}\Big)-\vartheta$$
$$=\frac1N E\Big(\sum_{\mathcal{L}}\sum_{\ell}\big(\vartheta+\xi_\mathcal{L}+\zeta_\varkappa+(\xi\zeta)_{\mathcal{L}\varkappa}+\varsigma_{\ell(\mathcal{L})}+e_{\ell\mathcal{L}\varkappa}\big)\Big)-\vartheta=\frac1N(N\vartheta+N\zeta_\varkappa)-\vartheta=\zeta_\varkappa,$$

so $\hat\zeta_\varkappa$ is an unbiased estimator of $\zeta_\varkappa$. Finally, from equations (2.2), (2.5) and (2.13),

$$E\big(\widehat{(\xi\zeta)}_{\mathcal{L}\varkappa}\big)=\frac{1}{n_\mathcal{L}}E\Big(\sum_{\ell}\upsilon_{\ell\mathcal{L}\varkappa}\Big)-\frac{1}{n_\mathcal{L}p}E\Big(\sum_{\ell}\sum_{\varkappa}\upsilon_{\ell\mathcal{L}\varkappa}\Big)-\frac1N E\Big(\sum_{\mathcal{L}}\sum_{\ell}\upsilon_{\ell\mathcal{L}\varkappa}\Big)+\vartheta$$
$$=\big(\vartheta+\xi_\mathcal{L}+\zeta_\varkappa+(\xi\zeta)_{\mathcal{L}\varkappa}\big)-(\vartheta+\xi_\mathcal{L})-(\vartheta+\zeta_\varkappa)+\vartheta=(\xi\zeta)_{\mathcal{L}\varkappa},$$

so $\widehat{(\xi\zeta)}_{\mathcal{L}\varkappa}$ is an unbiased estimator of $(\xi\zeta)_{\mathcal{L}\varkappa}$.

Theorem 2.3.2. The estimators of $\vartheta,\xi_\mathcal{L},\zeta_\varkappa,(\xi\zeta)_{\mathcal{L}\varkappa}$ in our model are consistent.

Proof. From equations (2.2) and (2.5), writing $m=Np$,

$$\operatorname{var}(\hat\vartheta)=\operatorname{var}\Big(\frac{1}{Np}\sum_{\mathcal{L},\ell,\varkappa}\upsilon_{\ell\mathcal{L}\varkappa}\Big)=\frac{1}{N^2p^2}\operatorname{var}\Big(\sum_{\mathcal{L},\ell,\varkappa}\upsilon_{\ell\mathcal{L}\varkappa}\Big)=\frac{\sigma_\varsigma^2+\sigma_e^2}{m}\longrightarrow0 \ \text{ as } m\to\infty,$$

so $\hat\vartheta$ is consistent for $\vartheta$. From equations (2.5) and (2.9),

$$\operatorname{var}(\hat\xi_\mathcal{L})=\operatorname{var}(\bar\upsilon_{.\mathcal{L}.}-\bar\upsilon_{...})=\operatorname{var}\Big(\frac{1}{n_\mathcal{L}p}\sum_{\ell}\sum_{\varkappa}\upsilon_{\ell\mathcal{L}\varkappa}\Big)+\frac{\sigma_\varsigma^2+\sigma_e^2}{m}=\frac{\sigma_\varsigma^2+\sigma_e^2}{n_\mathcal{L}p}+\frac{\sigma_\varsigma^2+\sigma_e^2}{m}=\frac{(N+n_\mathcal{L})(\sigma_\varsigma^2+\sigma_e^2)}{m_1}\longrightarrow0 \ \text{ as } m_1\to\infty,$$

where $m_1=n_\mathcal{L}Np$.
Hence $\hat\xi_\mathcal{L}$ is consistent for $\xi_\mathcal{L}$. From equations (2.5) and (2.11),

$$\operatorname{var}(\hat\zeta_\varkappa)=\operatorname{var}(\bar\upsilon_{..\varkappa}-\bar\upsilon_{...})=\operatorname{var}\Big(\frac1N\sum_{\mathcal{L}}\sum_{\ell}\upsilon_{\ell\mathcal{L}\varkappa}\Big)+\frac{\sigma_\varsigma^2+\sigma_e^2}{m}=\frac{(p+1)(\sigma_\varsigma^2+\sigma_e^2)}{m}\longrightarrow0 \ \text{ as } m\to\infty,$$

so $\hat\zeta_\varkappa$ is consistent for $\zeta_\varkappa$. From equations (2.5) and (2.13),

$$\operatorname{var}\big(\widehat{(\xi\zeta)}_{\mathcal{L}\varkappa}\big)=\operatorname{var}\Big(\frac{1}{n_\mathcal{L}}\sum_{\ell}\upsilon_{\ell\mathcal{L}\varkappa}\Big)+\frac{\sigma_\varsigma^2+\sigma_e^2}{n_\mathcal{L}p}+\frac{\sigma_\varsigma^2+\sigma_e^2}{N}+\frac{\sigma_\varsigma^2+\sigma_e^2}{m}=\frac{(p+1)(N+n_\mathcal{L})(\sigma_\varsigma^2+\sigma_e^2)}{m_1}\longrightarrow0 \ \text{ as } m_1\to\infty,$$

so $\widehat{(\xi\zeta)}_{\mathcal{L}\varkappa}$ is consistent for $(\xi\zeta)_{\mathcal{L}\varkappa}$.

3. Estimator Distribution

Because the estimators are linear functions of normally distributed observations, and the parameter estimators obtained by the different methods coincide, their distributions are as follows. From Theorems 2.3.1 and 2.3.2, $E(\hat\vartheta)=\vartheta$ and $\operatorname{var}(\hat\vartheta)=\operatorname{var}(\bar\upsilon_{...})=(\sigma_\varsigma^2+\sigma_e^2)/(Np)$, so

$$\hat\vartheta\sim N\Big(\vartheta,\frac{\sigma_\varsigma^2+\sigma_e^2}{Np}\Big).$$

For $\hat\xi_\mathcal{L}=\bar\upsilon_{.\mathcal{L}.}-\bar\upsilon_{...}$, Theorems 2.3.1 and 2.3.2 give $E(\hat\xi_\mathcal{L})=\xi_\mathcal{L}$ and

$$\operatorname{var}(\hat\xi_\mathcal{L})=\operatorname{var}(\bar\upsilon_{.\mathcal{L}.})+\operatorname{var}(\bar\upsilon_{...})=\frac{\sigma_\varsigma^2+\sigma_e^2}{n_\mathcal{L}p}+\frac{\sigma_\varsigma^2+\sigma_e^2}{Np}=\frac{(N+n_\mathcal{L})(\sigma_\varsigma^2+\sigma_e^2)}{n_\mathcal{L}Np},$$

so

$$\hat\xi_\mathcal{L}\sim N\Big(\xi_\mathcal{L},\frac{(N+n_\mathcal{L})(\sigma_\varsigma^2+\sigma_e^2)}{n_\mathcal{L}Np}\Big).$$
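The point estimators collected above, the fixed effects (2.7), (2.9), (2.11), (2.13) and the REML variance components (2.2.12)-(2.2.13), can be sketched numerically as follows. The list-of-(n_L x p)-arrays layout and the function names are illustrative assumptions, not notation from the paper.

```python
import numpy as np

def fixed_effect_estimates(y):
    """Closed-form estimators (2.7), (2.9), (2.11), (2.13) of the fixed effects."""
    allobs = np.concatenate(y)
    grand = allobs.mean()                               # overall-mean estimate
    xi_hat = np.array([g.mean() for g in y]) - grand    # group-effect estimates
    zeta_hat = allobs.mean(axis=0) - grand              # time-effect estimates
    xz_hat = np.array([g.mean(axis=0) - g.mean() - allobs.mean(axis=0) + grand
                       for g in y])                     # interaction estimates
    return grand, xi_hat, zeta_hat, xz_hat

def reml_variance_components(y):
    """REML solutions (2.2.12)-(2.2.13): MS_e and (MS_s - MS_e)/p."""
    n = [g.shape[0] for g in y]
    N, p, q = sum(n), y[0].shape[1], len(y)
    SS_e = SS_s = 0.0
    for g in y:
        cell = g.mean(axis=0)                   # group-by-time cell means
        row = g.mean(axis=1, keepdims=True)     # subject means
        gm = g.mean()                           # group mean
        SS_e += ((g - cell - row + gm) ** 2).sum()
        SS_s += p * ((row - gm) ** 2).sum()
    MS_e = SS_e / ((p - 1) * (N - q))
    MS_s = SS_s / (N - q)
    return MS_e, (MS_s - MS_e) / p
```

On noise-free additive data the fixed-effect estimators recover the effects exactly, which illustrates the unbiasedness of Theorem 2.3.1.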
For $\hat\zeta_\varkappa=\bar\upsilon_{..\varkappa}-\bar\upsilon_{...}$, Theorems 2.3.1 and 2.3.2 give $E(\hat\zeta_\varkappa)=\zeta_\varkappa$ and

$$\operatorname{var}(\hat\zeta_\varkappa)=\operatorname{var}(\bar\upsilon_{..\varkappa})+\operatorname{var}(\bar\upsilon_{...})=\frac{p(\sigma_\varsigma^2+\sigma_e^2)}{Np}+\frac{\sigma_\varsigma^2+\sigma_e^2}{Np}=\frac{(p+1)(\sigma_\varsigma^2+\sigma_e^2)}{Np},$$

so

$$\hat\zeta_\varkappa\sim N\Big(\zeta_\varkappa,\frac{(p+1)(\sigma_\varsigma^2+\sigma_e^2)}{Np}\Big).$$

For $\widehat{(\xi\zeta)}_{\mathcal{L}\varkappa}=\bar\upsilon_{.\mathcal{L}\varkappa}-\bar\upsilon_{.\mathcal{L}.}-\bar\upsilon_{..\varkappa}+\bar\upsilon_{...}$, Theorems 2.3.1 and 2.3.2 give $E\big(\widehat{(\xi\zeta)}_{\mathcal{L}\varkappa}\big)=(\xi\zeta)_{\mathcal{L}\varkappa}$ and

$$\operatorname{var}\big(\widehat{(\xi\zeta)}_{\mathcal{L}\varkappa}\big)=\frac{\sigma_\varsigma^2+\sigma_e^2}{n_\mathcal{L}}+\frac{\sigma_\varsigma^2+\sigma_e^2}{n_\mathcal{L}p}+\frac{\sigma_\varsigma^2+\sigma_e^2}{N}+\frac{\sigma_\varsigma^2+\sigma_e^2}{Np}=\frac{(Np+N+n_\mathcal{L}p+n_\mathcal{L})(\sigma_\varsigma^2+\sigma_e^2)}{n_\mathcal{L}Np},$$

so

$$\widehat{(\xi\zeta)}_{\mathcal{L}\varkappa}\sim N\Big((\xi\zeta)_{\mathcal{L}\varkappa},\frac{(Np+N+n_\mathcal{L}p+n_\mathcal{L})(\sigma_\varsigma^2+\sigma_e^2)}{n_\mathcal{L}Np}\Big).$$

4. The Test

In this part we determine the analytic form of the likelihood-ratio test for the model.

4.1 Consider $H_0:\vartheta=0$ vs. $H_1:\vartheta\neq0$. Let

$$\Omega=\big\{\big(\vartheta,\xi_\mathcal{L},\zeta_\varkappa,(\xi\zeta)_{\mathcal{L}\varkappa},\sigma_e^2,\sigma_\varsigma^2\big),\ \mathcal{L}=1,\ldots,q,\ \varkappa=1,\ldots,p:\ \vartheta,\xi_\mathcal{L},\zeta_\varkappa,(\xi\zeta)_{\mathcal{L}\varkappa}\in\mathbb{R},\ \sigma_e^2>0,\ \sigma_\varsigma^2>0\big\},$$
$$\omega=\big\{\big(\vartheta,\xi_\mathcal{L},\zeta_\varkappa,(\xi\zeta)_{\mathcal{L}\varkappa},\sigma_e^2,\sigma_\varsigma^2\big)\in\Omega:\ \vartheta=0\big\}.$$

Then

$$L(\omega)=\big(2\pi(\sigma_e^2+\sigma_\varsigma^2)\big)^{-Np/2}\exp\!\Big[\frac{-\sum_{\mathcal{L},\ell,\varkappa}(\upsilon_{\ell\mathcal{L}\varkappa}-\xi_\mathcal{L}-\zeta_\varkappa-(\xi\zeta)_{\mathcal{L}\varkappa})^2}{2(\sigma_e^2+\sigma_\varsigma^2)}\Big],$$
$$L(\Omega)=\big(2\pi(\sigma_e^2+\sigma_\varsigma^2)\big)^{-Np/2}\exp\!\Big[\frac{-\sum_{\mathcal{L},\ell,\varkappa}(\upsilon_{\ell\mathcal{L}\varkappa}-\vartheta-\xi_\mathcal{L}-\zeta_\varkappa-(\xi\zeta)_{\mathcal{L}\varkappa})^2}{2(\sigma_e^2+\sigma_\varsigma^2)}\Big].$$

We recall that
the means are as defined in (2.5), the estimators are $\hat\vartheta=\bar\upsilon_{...}$, $\hat\xi_\mathcal{L}=\bar\upsilon_{.\mathcal{L}.}-\bar\upsilon_{...}$, $\hat\zeta_\varkappa=\bar\upsilon_{..\varkappa}-\bar\upsilon_{...}$, $\widehat{(\xi\zeta)}_{\mathcal{L}\varkappa}=\bar\upsilon_{.\mathcal{L}\varkappa}-\bar\upsilon_{.\mathcal{L}.}-\bar\upsilon_{..\varkappa}+\bar\upsilon_{...}$, and, from (2.19), $\hat\sigma_\varsigma^2+\hat\sigma_e^2=\sum_{\mathcal{L},\ell,\varkappa}(\upsilon_{\ell\mathcal{L}\varkappa}-\bar\upsilon_{.\mathcal{L}\varkappa})^2/(Np)$. Let

$$\lambda=\frac{L(\hat\omega)}{L(\hat\Omega)}=\frac{\exp\!\Big[-\dfrac{\sum_{\mathcal{L},\ell,\varkappa}(\upsilon_{\ell\mathcal{L}\varkappa}-\bar\upsilon_{.\mathcal{L}\varkappa}+\bar\upsilon_{...})^2}{2\sum_{\mathcal{L},\ell,\varkappa}(\upsilon_{\ell\mathcal{L}\varkappa}-\bar\upsilon_{.\mathcal{L}\varkappa})^2/(Np)}\Big]}{\exp\!\Big[-\dfrac{\sum_{\mathcal{L},\ell,\varkappa}(\upsilon_{\ell\mathcal{L}\varkappa}-\bar\upsilon_{.\mathcal{L}\varkappa})^2}{2\sum_{\mathcal{L},\ell,\varkappa}(\upsilon_{\ell\mathcal{L}\varkappa}-\bar\upsilon_{.\mathcal{L}\varkappa})^2/(Np)}\Big]},$$

so that

$$\lambda=\exp\!\Big[-\frac{Np}{2}\Big(1+\frac{Np\,\bar\upsilon_{...}^2}{\sum_{\mathcal{L},\ell,\varkappa}(\upsilon_{\ell\mathcal{L}\varkappa}-\bar\upsilon_{.\mathcal{L}\varkappa})^2}\Big)+\frac{Np}{2}\Big] \;\Longrightarrow\; -2\ln\lambda=\frac{(Np\,\bar\upsilon_{...})^2}{\sum_{\mathcal{L},\ell,\varkappa}(\upsilon_{\ell\mathcal{L}\varkappa}-\bar\upsilon_{.\mathcal{L}\varkappa})^2}.$$

Since $\bar\upsilon_{...}\sim N\big(\vartheta,(\sigma_\varsigma^2+\sigma_e^2)/(Np)\big)$, under $H_0$ we have $\bar\upsilon_{...}\big/\sqrt{(\sigma_\varsigma^2+\sigma_e^2)/(Np)}\sim N(0,1)$, so $Np\,\bar\upsilon_{...}^2/(\sigma_\varsigma^2+\sigma_e^2)\sim\chi^2(1)$, while $\sum_{\mathcal{L},\ell,\varkappa}(\upsilon_{\ell\mathcal{L}\varkappa}-\bar\upsilon_{.\mathcal{L}\varkappa})^2/(\sigma_\varsigma^2+\sigma_e^2)\sim\chi^2\big(qp(N-1)\big)$. Then

$$\frac{Np\,\bar\upsilon_{...}^2/(\sigma_\varsigma^2+\sigma_e^2)}{\sum_{\mathcal{L},\ell,\varkappa}(\upsilon_{\ell\mathcal{L}\varkappa}-\bar\upsilon_{.\mathcal{L}\varkappa})^2/(\sigma_\varsigma^2+\sigma_e^2)}\sim\frac{\chi^2(1)}{\chi^2\big(qp(N-1)\big)}\sim F\big(1,qp(N-1)\big).$$

4.2 Consider $H_0:\xi_\mathcal{L}=0$ vs. $H_1:\xi_\mathcal{L}\neq0$, $\mathcal{L}=1,\ldots,q$. Then

$$L(\omega)=\big(2\pi(\sigma_e^2+\sigma_\varsigma^2)\big)^{-Np/2}\exp\!\Big[\frac{-\sum_{\mathcal{L},\ell,\varkappa}(\upsilon_{\ell\mathcal{L}\varkappa}-\vartheta-\zeta_\varkappa-(\xi\zeta)_{\mathcal{L}\varkappa})^2}{2(\sigma_e^2+\sigma_\varsigma^2)}\Big],$$
$$L(\Omega)=\big(2\pi(\sigma_e^2+\sigma_\varsigma^2)\big)^{-Np/2}\exp\!\Big[\frac{-\sum_{\mathcal{L},\ell,\varkappa}(\upsilon_{\ell\mathcal{L}\varkappa}-\vartheta-\xi_\mathcal{L}-\zeta_\varkappa-(\xi\zeta)_{\mathcal{L}\varkappa})^2}{2(\sigma_e^2+\sigma_\varsigma^2)}\Big].$$
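The Section 4.1 statistic can be sketched numerically. The list-of-(n_L x p)-arrays layout is the same illustrative assumption used earlier, and the degrees of freedom follow the paper's convention F(1, qp(N-1)).

```python
import numpy as np

def lrt_overall_mean(y):
    """Likelihood-ratio statistic of Section 4.1 for H0: overall mean = 0.

    Computes -2 ln(lambda) = N p (grand mean)^2 / sigma2_hat, where
    sigma2_hat = sum (v - cell mean)^2 / (N p), and returns the statistic
    together with the paper's degrees of freedom (1, qp(N-1)).
    """
    n = [g.shape[0] for g in y]
    N, p, q = sum(n), y[0].shape[1], len(y)
    allobs = np.concatenate(y)
    grand = allobs.mean()
    resid = sum(((g - g.mean(axis=0)) ** 2).sum() for g in y)  # sum (v - cell)^2
    stat = N * p * grand ** 2 / (resid / (N * p))
    return stat, (1, q * p * (N - 1))
```

A large statistic relative to the F reference distribution leads to rejecting H0.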
Then let

$$\lambda=\frac{L(\hat\omega)}{L(\hat\Omega)}=\frac{\exp\!\Big[-\dfrac{\sum_{\mathcal{L},\ell,\varkappa}(\upsilon_{\ell\mathcal{L}\varkappa}-\bar\upsilon_{.\mathcal{L}\varkappa}+\bar\upsilon_{.\mathcal{L}.}-\bar\upsilon_{...})^2}{2\sum_{\mathcal{L},\ell,\varkappa}(\upsilon_{\ell\mathcal{L}\varkappa}-\bar\upsilon_{.\mathcal{L}\varkappa})^2/(Np)}\Big]}{\exp\!\Big[-\dfrac{\sum_{\mathcal{L},\ell,\varkappa}(\upsilon_{\ell\mathcal{L}\varkappa}-\bar\upsilon_{.\mathcal{L}\varkappa})^2}{2\sum_{\mathcal{L},\ell,\varkappa}(\upsilon_{\ell\mathcal{L}\varkappa}-\bar\upsilon_{.\mathcal{L}\varkappa})^2/(Np)}\Big]},$$

so that

$$\lambda=\exp\!\Big[-\frac{Np}{2}\Big(1+\frac{\sum_{\mathcal{L},\ell,\varkappa}(\bar\upsilon_{.\mathcal{L}.}-\bar\upsilon_{...})^2}{\sum_{\mathcal{L},\ell,\varkappa}(\upsilon_{\ell\mathcal{L}\varkappa}-\bar\upsilon_{.\mathcal{L}\varkappa})^2}\Big)+\frac{Np}{2}\Big] \;\Longrightarrow\; -2\ln\lambda=\frac{\sum_{\mathcal{L},\ell,\varkappa}\big[\sqrt{Np}\,(\bar\upsilon_{.\mathcal{L}.}-\bar\upsilon_{...})\big]^2}{\sum_{\mathcal{L},\ell,\varkappa}(\upsilon_{\ell\mathcal{L}\varkappa}-\bar\upsilon_{.\mathcal{L}\varkappa})^2}.$$

Since $\hat\xi_\mathcal{L}\sim N\big(\xi_\mathcal{L},(N+n_\mathcal{L})(\sigma_\varsigma^2+\sigma_e^2)/(n_\mathcal{L}Np)\big)$, under $H_0$

$$\frac{\sum_{\mathcal{L},\ell,\varkappa}\big[\sqrt{Np}\,(\bar\upsilon_{.\mathcal{L}.}-\bar\upsilon_{...})\big]^2}{\sigma_\varsigma^2+\sigma_e^2}\sim\chi^2(q-1) \quad\text{and}\quad \frac{\sum_{\mathcal{L},\ell,\varkappa}(\upsilon_{\ell\mathcal{L}\varkappa}-\bar\upsilon_{.\mathcal{L}\varkappa})^2}{\sigma_\varsigma^2+\sigma_e^2}\sim\chi^2\big(qp(N-1)\big),$$

so their ratio follows an $F\big(q-1,\,qp(N-1)\big)$ distribution.

4.3 Consider $H_0:\zeta_\varkappa=0$ vs. $H_1:\zeta_\varkappa\neq0$, $\varkappa=1,\ldots,p$. Then

$$L(\omega)=\big(2\pi(\sigma_e^2+\sigma_\varsigma^2)\big)^{-Np/2}\exp\!\Big[\frac{-\sum_{\mathcal{L},\ell,\varkappa}(\upsilon_{\ell\mathcal{L}\varkappa}-\vartheta-\xi_\mathcal{L}-(\xi\zeta)_{\mathcal{L}\varkappa})^2}{2(\sigma_e^2+\sigma_\varsigma^2)}\Big],$$
$$L(\Omega)=\big(2\pi(\sigma_e^2+\sigma_\varsigma^2)\big)^{-Np/2}\exp\!\Big[\frac{-\sum_{\mathcal{L},\ell,\varkappa}(\upsilon_{\ell\mathcal{L}\varkappa}-\vartheta-\xi_\mathcal{L}-\zeta_\varkappa-(\xi\zeta)_{\mathcal{L}\varkappa})^2}{2(\sigma_e^2+\sigma_\varsigma^2)}\Big],$$

and, proceeding as in Section 4.2,

$$\ln\lambda=-\frac{Np\sum_{\mathcal{L},\ell,\varkappa}(\bar\upsilon_{..\varkappa}-\bar\upsilon_{...})^2}{2\sum_{\mathcal{L},\ell,\varkappa}(\upsilon_{\ell\mathcal{L}\varkappa}-\bar\upsilon_{.\mathcal{L}\varkappa})^2}.$$
$$-2\ln\lambda=\frac{\sum_{\mathcal{L},\ell,\varkappa}\big[\sqrt{Np}\,(\bar\upsilon_{..\varkappa}-\bar\upsilon_{...})\big]^2}{\sum_{\mathcal{L},\ell,\varkappa}(\upsilon_{\ell\mathcal{L}\varkappa}-\bar\upsilon_{.\mathcal{L}\varkappa})^2}.$$

Since $\hat\zeta_\varkappa\sim N\big(\zeta_\varkappa,(p+1)(\sigma_\varsigma^2+\sigma_e^2)/(Np)\big)$, under $H_0$

$$\frac{\sum_{\mathcal{L},\ell,\varkappa}\big[\sqrt{Np}\,(\bar\upsilon_{..\varkappa}-\bar\upsilon_{...})\big]^2}{\sigma_\varsigma^2+\sigma_e^2}\sim\chi^2(p-1) \quad\text{and}\quad \frac{\sum_{\mathcal{L},\ell,\varkappa}(\upsilon_{\ell\mathcal{L}\varkappa}-\bar\upsilon_{.\mathcal{L}\varkappa})^2}{\sigma_\varsigma^2+\sigma_e^2}\sim\chi^2\big(qp(N-1)\big),$$

so their ratio follows an $F\big(p-1,\,qp(N-1)\big)$ distribution.

4.4 Consider $H_0:(\xi\zeta)_{\mathcal{L}\varkappa}=0$ vs. $H_1:(\xi\zeta)_{\mathcal{L}\varkappa}\neq0$, $\varkappa=1,\ldots,p$, $\mathcal{L}=1,\ldots,q$. Then

$$L(\omega)=\big(2\pi(\sigma_e^2+\sigma_\varsigma^2)\big)^{-Np/2}\exp\!\Big[\frac{-\sum_{\mathcal{L},\ell,\varkappa}(\upsilon_{\ell\mathcal{L}\varkappa}-\vartheta-\xi_\mathcal{L}-\zeta_\varkappa)^2}{2(\sigma_e^2+\sigma_\varsigma^2)}\Big],$$
$$L(\Omega)=\big(2\pi(\sigma_e^2+\sigma_\varsigma^2)\big)^{-Np/2}\exp\!\Big[\frac{-\sum_{\mathcal{L},\ell,\varkappa}(\upsilon_{\ell\mathcal{L}\varkappa}-\vartheta-\xi_\mathcal{L}-\zeta_\varkappa-(\xi\zeta)_{\mathcal{L}\varkappa})^2}{2(\sigma_e^2+\sigma_\varsigma^2)}\Big],$$

and, proceeding as before,

$$\lambda=\exp\!\Big[-\frac{Np}{2}\Big(1+\frac{\sum_{\mathcal{L},\ell,\varkappa}(\bar\upsilon_{.\mathcal{L}\varkappa}-\bar\upsilon_{.\mathcal{L}.}-\bar\upsilon_{..\varkappa}+\bar\upsilon_{...})^2}{\sum_{\mathcal{L},\ell,\varkappa}(\upsilon_{\ell\mathcal{L}\varkappa}-\bar\upsilon_{.\mathcal{L}\varkappa})^2}\Big)+\frac{Np}{2}\Big]$$
$$\Longrightarrow\; -2\ln\lambda=\frac{\sum_{\mathcal{L},\ell,\varkappa}\big[\sqrt{Np}\,(\bar\upsilon_{.\mathcal{L}\varkappa}-\bar\upsilon_{.\mathcal{L}.}-\bar\upsilon_{..\varkappa}+\bar\upsilon_{...})\big]^2}{\sum_{\mathcal{L},\ell,\varkappa}(\upsilon_{\ell\mathcal{L}\varkappa}-\bar\upsilon_{.\mathcal{L}\varkappa})^2}.$$
Since $\widehat{(\xi\zeta)}_{\mathcal{L}\varkappa}\sim N\big((\xi\zeta)_{\mathcal{L}\varkappa},(Np+N+n_\mathcal{L}p+n_\mathcal{L})(\sigma_\varsigma^2+\sigma_e^2)/(n_\mathcal{L}Np)\big)$, under $H_0$

$$\frac{\sum_{\mathcal{L},\ell,\varkappa}\big[\sqrt{Np}\,(\bar\upsilon_{.\mathcal{L}\varkappa}-\bar\upsilon_{.\mathcal{L}.}-\bar\upsilon_{..\varkappa}+\bar\upsilon_{...})\big]^2}{\sigma_e^2+\sigma_\varsigma^2}\sim\chi^2\big((p-1)(q-1)\big) \quad\text{and}\quad \frac{\sum_{\mathcal{L},\ell,\varkappa}(\upsilon_{\ell\mathcal{L}\varkappa}-\bar\upsilon_{.\mathcal{L}\varkappa})^2}{\sigma_e^2+\sigma_\varsigma^2}\sim\chi^2\big(qp(N-1)\big),$$

so their ratio follows an $F\big((p-1)(q-1),\,qp(N-1)\big)$ distribution.

5. First Order Approximations

We now discuss applications of the Central Limit Theorem and the Law of Large Numbers to the likelihood function in order to derive asymptotic statements when $n$, or some other measure of information, is large. At a given parameter value, the likelihood and significance functions each evaluate the data and produce a conclusion: the significance function gives the probability to the left (or right) of the observed data point, while the likelihood function gives the probability at the data point up to a multiplicative constant. The Central Limit Theorem and the Law of Large Numbers are the foundation for three widely used asymptotic approaches to deriving first order approximations. These approaches employ the following three statistics.

5.1 The standardized score

$$s=S(\theta)\,\mathcal{J}^{-1/2},\qquad \theta\in\{\vartheta,\xi_\mathcal{L},\zeta_\varkappa,(\xi\zeta)_{\mathcal{L}\varkappa},\sigma_\varsigma^2,\sigma_e^2\},$$

where $\mathcal{J}=-\,\partial^2\ln L/\partial\theta^2\big|_{\theta=\hat\theta}$ and $S(\theta)=\partial\ln L/\partial\theta$. From (2.20) and (2.21) we have
$$s_1=S(\vartheta)\Big(\frac{Np}{\sigma_e^2+\sigma_\varsigma^2}\Big)^{-1/2},\qquad s_2=S(\xi_\mathcal{L})\Big(\frac{n_\mathcal{L}pq}{\sigma_e^2+\sigma_\varsigma^2}\Big)^{-1/2},\qquad s_3=S(\zeta_\varkappa)\Big(\frac{pN}{\sigma_e^2+\sigma_\varsigma^2}\Big)^{-1/2},$$
$$s_4=S\big((\xi\zeta)_{\mathcal{L}\varkappa}\big)\Big(\frac{qpn_\mathcal{L}}{\sigma_e^2+\sigma_\varsigma^2}\Big)^{-1/2},\qquad s_5=S(\sigma_\varsigma^2)\,\mathcal{J}_5^{-1/2},\qquad s_6=S(\sigma_e^2)\,\mathcal{J}_6^{-1/2}, \qquad(5.1)$$

with the scores $S(\cdot)$ as in (2.21) and $\mathcal{J}_5,\mathcal{J}_6$ as in (2.20), evaluated at the maximum likelihood estimates.

5.2 The standardized maximum likelihood estimate

$$q=(\hat\theta-\theta)\,\mathcal{J}^{1/2},$$

so that

$$q_1=(\hat\vartheta-\vartheta)\Big(\frac{Np}{\sigma_e^2+\sigma_\varsigma^2}\Big)^{1/2},\qquad q_2=(\hat\xi_\mathcal{L}-\xi_\mathcal{L})\Big(\frac{n_\mathcal{L}pq}{\sigma_e^2+\sigma_\varsigma^2}\Big)^{1/2},\qquad q_3=(\hat\zeta_\varkappa-\zeta_\varkappa)\Big(\frac{pN}{\sigma_e^2+\sigma_\varsigma^2}\Big)^{1/2},$$
$$q_4=\big(\widehat{(\xi\zeta)}_{\mathcal{L}\varkappa}-(\xi\zeta)_{\mathcal{L}\varkappa}\big)\Big(\frac{qpn_\mathcal{L}}{\sigma_e^2+\sigma_\varsigma^2}\Big)^{1/2},\qquad q_5=(\hat\sigma_\varsigma^2-\sigma_\varsigma^2)\,\mathcal{J}_5^{1/2},\qquad q_6=(\hat\sigma_e^2-\sigma_e^2)\,\mathcal{J}_6^{1/2}. \qquad(5.2)$$

5.3 The signed likelihood ratio statistic

$$r=\operatorname{sgn}(\hat\theta-\theta)\,[-2\ln\lambda]^{1/2}.$$
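For a scalar parameter, the standardized quantities of Sections 5.2 and 5.3 reduce to simple formulas. The sketch below evaluates the pivots q and r and the corresponding first order normal approximations of Section 5; the scalar log-likelihood `loglik`, the observed information `info`, and the function name are illustrative assumptions.

```python
import math

def first_order_pvalues(theta_hat, theta0, loglik, info):
    """First order significance approximations based on the pivots q and r.

    q = (theta_hat - theta0) * sqrt(info) is the standardized MLE departure,
    r = sgn(theta_hat - theta0) * sqrt(2 [l(theta_hat) - l(theta0)]) is the
    signed likelihood root; each is referred to the standard normal cdf.
    """
    q = (theta_hat - theta0) * math.sqrt(info)
    deviance = 2.0 * (loglik(theta_hat) - loglik(theta0))
    r = math.copysign(math.sqrt(max(deviance, 0.0)), theta_hat - theta0)
    Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return Phi(q), Phi(r)
```

For a normal mean with known unit variance and m observations, `loglik(t) = -m (xbar - t)**2 / 2` and `info = m`, and the two pivots coincide exactly.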
The three standardized variables give three approximate values of the significance function: $\Phi(s)$, $\Phi(q)$, and $\Phi(r)$. The signed likelihood root and the maximum likelihood departure are the important quantities in the present context because they serve as third- or lower-order approximations for assessing a scalar interest parameter in the unbalanced one-way variance components model. Testing a real interest parameter $\Psi$ in the presence of a vector nuisance parameter $\alpha$ is a common problem. In this part we construct significance tests for a real canonical interest parameter $\Psi(\theta)$ using significance functions. Let $H_0:\Psi=\Psi_0$ vs. $H_1:\Psi\neq\Psi_0$. The maximum likelihood estimate of $\theta$ has an approximate $N\big(\theta,\mathcal{J}^{\theta\theta}(\hat\theta)\big)$ distribution, where $\mathcal{J}^{\theta\theta}$ denotes the asymptotic variance of $\hat\theta$ and is given by

$$\begin{pmatrix}\mathcal{J}^{\alpha\alpha}&\mathcal{J}^{\alpha\Psi}\\[2pt] \mathcal{J}^{\Psi\alpha}&\mathcal{J}^{\Psi\Psi}\end{pmatrix}.$$

Using simple multivariate normal theory, the asymptotic distribution of $\hat\Psi$ is normal with mean $\Psi$ and estimated variance $\mathcal{J}^{\Psi\Psi}(\hat\theta_\Psi)$, where

$$\big(\mathcal{J}^{\Psi\Psi}(\hat\theta_\Psi)\big)^{-1}=\mathcal{J}_{\Psi\Psi}-\mathcal{J}_{\Psi\alpha}\mathcal{J}_{\alpha\alpha}^{-1}\mathcal{J}_{\alpha\Psi}\big|_{\hat\theta_\Psi}.$$

Under the null hypothesis the maximum likelihood departure has the well-known standardized pivotal form

$$q=(\hat\Psi-\Psi)\big(\mathcal{J}^{\Psi\Psi}\big)^{-1/2}. \qquad(5.3)$$

Using (5.3) and the factorization

$$|\mathcal{J}_{\theta\theta}|=|\mathcal{J}_{\alpha\alpha}|\,\big(\mathcal{J}^{\Psi\Psi}\big)^{-1}, \qquad(5.4)$$

the statistic $q$ can be expressed as

$$q=(\hat\Psi-\Psi)\left(\frac{|\mathcal{J}_{\theta\theta}(\hat\theta)|}{|\mathcal{J}_{\alpha\alpha}(\hat\theta)|}\right)^{1/2}. \qquad(5.5)$$

Another important statistic for the construction of third order approximations is the signed likelihood ratio statistic $r=\operatorname{sgn}(\hat\Psi-\Psi)\sqrt{\mathcal{D}}$, where

$$\mathcal{D}=2\big[\ln L(\hat\theta)-\ln L(\hat\theta_\Psi)\big]=2\ln\frac{L(\hat\theta)}{L(\hat\theta_\Psi)};$$

$\mathcal{D}$ is called the deviance and $r$ the directed deviance.
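The determinant factorization (5.4)-(5.5) can be sketched in a few lines. The ordering convention (interest parameter placed last in theta) and the example information matrix are illustrative assumptions.

```python
import numpy as np

def q_departure(psi_hat, psi0, J):
    """Standardized departure (5.5) for an interest parameter Psi.

    J is the observed information matrix for theta = (alpha, Psi) with Psi
    as the last coordinate; det(J)/det(J_alpha,alpha) equals the reciprocal
    of the asymptotic variance of Psi-hat, per the factorization (5.4).
    """
    J = np.asarray(J, dtype=float)
    J_aa = J[:-1, :-1]                                   # nuisance block
    ratio = np.linalg.det(J) / np.linalg.det(J_aa)       # = (J^{Psi Psi})^{-1}
    return (psi_hat - psi0) * np.sqrt(ratio)
```

When the information matrix is block diagonal, the ratio of determinants reduces to the Psi-Psi entry of J, so q coincides with the ordinary one-parameter standardization.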
These two standardized variables lead to two first order approximations to the significance function,

$$p_1(\theta)=\Phi(r),\qquad p_2(\theta)=\Phi(q),\qquad \theta\in\{\vartheta,\xi_\mathcal{L},\zeta_\varkappa,(\xi\zeta)_{\mathcal{L}\varkappa},\sigma_\varsigma^2,\sigma_e^2\}. \qquad(5.6)$$

In (5.6) we have $p_i(\theta)=p(\theta)+O(n^{-1/2})$, where $p(\theta)$ is the exact probability in each case; the stated bound applies over a bounded range of the standardized variable. Third order approximations to significance functions can be expressed in the Lugannani and Rice (1980) form $\Phi(r)+\phi(r)\big(r^{-1}-q^{-1}\big)$, and in the $r^*$ form $\Phi\big(r+r^{-1}\ln(q/r)\big)$.

6. Conclusions

In this paper we reached the following conclusions.

1. The maximum likelihood estimators of the parameters of the repeated measurements model are $\hat\vartheta=\bar\upsilon_{...}$, $\hat\xi_\mathcal{L}=\bar\upsilon_{.\mathcal{L}.}-\bar\upsilon_{...}$, $\hat\zeta_\varkappa=\bar\upsilon_{..\varkappa}-\bar\upsilon_{...}$, $\widehat{(\xi\zeta)}_{\mathcal{L}\varkappa}=\bar\upsilon_{.\mathcal{L}\varkappa}-\bar\upsilon_{.\mathcal{L}.}-\bar\upsilon_{..\varkappa}+\bar\upsilon_{...}$, $\hat\sigma_e^2=MS_e$, and $\hat\sigma_\varsigma^2=\frac{N-q}{Np}MS_\varsigma+\frac{(p-1)(N-q)-Np}{Np}MS_e$.
2. The maximum likelihood estimators of the parameters of the repeated measurements model are unbiased and consistent.
3. The restricted maximum likelihood estimators of the parameters of the repeated measurements model are $\hat\vartheta=\bar\upsilon_{...}$, $\hat\xi_\mathcal{L}=\bar\upsilon_{.\mathcal{L}.}-\bar\upsilon_{...}$, $\hat\zeta_\varkappa=\bar\upsilon_{..\varkappa}-\bar\upsilon_{...}$, $\widehat{(\xi\zeta)}_{\mathcal{L}\varkappa}=\bar\upsilon_{.\mathcal{L}\varkappa}-\bar\upsilon_{.\mathcal{L}.}-\bar\upsilon_{..\varkappa}+\bar\upsilon_{...}$, $\hat\sigma_e^2=MS_e$, and $\hat\sigma_\varsigma^2=\frac1p[MS_\varsigma-MS_e]$.
4. The distributions of the estimators are $\hat\vartheta\sim N\big(\vartheta,\frac{\sigma_\varsigma^2+\sigma_e^2}{Np}\big)$, $\hat\xi_\mathcal{L}\sim N\big(\xi_\mathcal{L},\frac{(N+n_\mathcal{L})(\sigma_\varsigma^2+\sigma_e^2)}{n_\mathcal{L}Np}\big)$, $\hat\zeta_\varkappa\sim N\big(\zeta_\varkappa,\frac{(p+1)(\sigma_\varsigma^2+\sigma_e^2)}{Np}\big)$, and $\widehat{(\xi\zeta)}_{\mathcal{L}\varkappa}\sim N\big((\xi\zeta)_{\mathcal{L}\varkappa},\frac{(Np+N+n_\mathcal{L}p+n_\mathcal{L})(\sigma_\varsigma^2+\sigma_e^2)}{n_\mathcal{L}Np}\big)$.
5. The likelihood-ratio tests of the parameters $\vartheta,\xi_\mathcal{L},\zeta_\varkappa$, and $(\xi\zeta)_{\mathcal{L}\varkappa}$ for the repeated measurements model are obtained.
6. The maximum likelihood estimate of $\theta$ has an approximate $N\big(\theta,\mathcal{J}^{\theta\theta}(\hat\theta)\big)$ distribution.
7. The first order approximations to the significance function are $p_1(\theta)=\Phi(r)$ and $p_2(\theta)=\Phi(q)$.
References

[1] AL-Mouel, A. S., "Multivariate Repeated Measures Models and Comparison of Estimators," Ph.D. dissertation, East China Normal University, China, 2004.
[2] AL-Mouel, A. S. and Abd-Ali, A. A., "On One-Way Repeated Measurements Model with Practical Application," Journal of Global Scientific Research, Vol. 5, Issue 12, 2020.
[3] AL-Mouel, A. S. and Ali, F. H., "Penalized Likelihood Estimator for Variance Components in Repeated Measurements Model," Basrah Journal of Science, Vol. 39(2), pp. 192-203, 2021.
[4] Hand, D. J. and Crowder, M. J., Analysis of Repeated Measures, London, 1993.
[5] Mohaisen, A. J. and Swadi, K. A. R., "Bayesian One-Way Repeated Measurements Model Based on Markov Chain Monte Carlo," Indian Journal of Applied Research, Vol. 4, No. 10, 2014.
[6] Patterson, H. D. and Thompson, R., "Recovery of inter-block information when block sizes are unequal," Biometrika, Vol. 58, pp. 545-554, 1971.
[7] Verbeke, G. and Molenberghs, G., Linear Mixed Models for Longitudinal Data, Springer, New York, 2000.