FENG et al.: BIA FOR EXTRACTING INDEPENDENT COMPONENTS FROM ARRAY SIGNALS 3637
It is known that, on the basis of the parallel decomposition of multi-way tensors, many different BSS problems can be modeled as multi-linear models [37]–[39], [45]–[53]. The multi-quadratic criteria associated with the multi-linear models have been introduced by researchers [38], [39], [45]–[53], and the minimization of these criteria is conveniently achieved by the well-known alternating least squares (ALS) algorithms [38], [39], [45]–[53]. Thus, the NOJD can be efficiently implemented by the ALS algorithms [38], [39], [45]–[53]. More importantly, an extended nonunitary identifiability proposition of the BSS problem can be deduced directly from the identifiability theorem for the parallel decomposition of third-order tensors [37]–[39], [45]–[53]. According to the extended identifiability proposition, the NUJD problem of BSS can be linked to the trilinear model.
In order to extend the ACDC algorithm to the complex-valued case and to develop a parallel version of it, the present paper establishes a complex-valued nonunitary joint diagonalization algorithm for extracting independent components from array signals. A novel symmetric tri-quadratic contrast function is introduced, and the corresponding simultaneous bi-iterative algorithm (s-BIA) is then developed. The s-BIA has low computational complexity and good performance, like the ACDC and QDIAG algorithms. These are the main contributions of this paper.
The structure of the paper is as follows. The NUJD problem under consideration is formulated in Section II. In Section III, the s-BIA for NUJD is introduced. Experimental results are presented in Section IV to compare the proposed algorithm with closely related BSS algorithms. The paper is concluded in Section V.
II. NUJD PROBLEM
A. Eigenmatrices
It is known that many BSS problems in array signal processing can be formulated as solving the NUJD problem of a set of eigenmatrices

R_k = A Λ_k A^H, k = 1, ..., K, (2.1)

where the superscript (·)^H denotes the complex conjugate transpose, K represents the number of the eigenmatrices used, A is the mixture matrix with full rank, and the diagonal entries λ_{kn} (k = 1, ..., K and n = 1, ..., N) of the diagonal matrices Λ_k denote eigenvalues. Note that since the covariance matrix has a positive definite error matrix, it is not included in the set of eigenmatrices (2.1). Thus, from the viewpoint of matrix theory [41], the BSS problem can be considered as an algebraic problem in the sense that the set of approximately diagonally-structured matrices R_1, ..., R_K is utilized to arrive at an estimate of the array mixture matrix A. Moreover, it is also well known that a unique estimate of the mixture matrix is impossible [25] because there always exist indeterminacies associated with the order and the scaling. This shows that two estimates A_1 and A_2 of the mixture matrix usually satisfy the condition

A_2 = A_1 P Δ, (2.2)

where Δ is an invertible diagonal matrix and P denotes a permutation matrix. In accordance with the permutation and scaling indeterminacies shown in (2.2), we can define an acceptable solution set of the NUJD problem as follows.
Definition 2.1: An acceptable solution set Ω for estimating the mixture matrix is defined as Ω = {A P Δ : P is any permutation matrix and Δ is any invertible diagonal matrix}, where A is a solution of the mixture matrix of the BSS problem.
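The indeterminacy in (2.2) can be verified numerically: replacing an estimate of the mixture matrix by a permuted and rescaled copy, and compensating the diagonal eigenvalue matrix accordingly, reproduces exactly the same eigenmatrix. A minimal NumPy sketch (the sizes and the particular permutation are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4

# A true mixture matrix and one diagonal "eigenvalue" matrix.
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
Lam = np.diag(rng.standard_normal(N) + 1j * rng.standard_normal(N))
R = A @ Lam @ A.conj().T                      # eigenmatrix R = A Lam A^H

# An arbitrary permutation P and invertible diagonal scaling Delta.
P = np.eye(N)[:, [2, 0, 3, 1]]
Delta = np.diag(rng.standard_normal(N) + 1j * rng.standard_normal(N))

A2 = A @ P @ Delta                            # an acceptable alternative estimate
# Compensated eigenvalues: Lam2 = Delta^{-1} P^T Lam P Delta^{-H}, still diagonal.
Lam2 = np.linalg.inv(Delta) @ P.T @ Lam @ P @ np.linalg.inv(Delta).conj().T

R2 = A2 @ Lam2 @ A2.conj().T                  # identical eigenmatrix
err = np.linalg.norm(R - R2) / np.linalg.norm(R)
```

Both factorizations generate the same eigenmatrix, which is exactly why only the set of Definition 2.1, rather than a single matrix, is identifiable.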
Since the permutation matrix changes discretely and the modulus of each diagonal element of the invertible diagonal matrix varies over an infinite open interval, the acceptable solution set is an infinite disconnected nonconvex set formed by multiple continuous infinite convex sets (a continuous convex set is associated only with continuous variation of the diagonal matrix).
It is well known that a classical Lyapunov function admits only a unique fixed point or a unique stable convex set associated with the solution. However, since the acceptable solution set is an infinite disconnected nonconvex set, a classical Lyapunov function cannot have a fixed-point set like this nonconvex set. Hence, a BSS algorithm with such a solution set cannot have any classical Lyapunov function and must satisfy the following important property.
Property 2.1: A BSS algorithm with the acceptable solution set of Definition 2.1 does not have global convergence in the sense of the classical Lyapunov functions.
B. Trilinear Model
Deﬁne a vector and
its associated matrix ,
where the superscript denotes the transpose. Then (2.1) can
be rewritten as
(2.3)
Let the element of , the element of and
the element of be denoted by and , re-
spectively, for and . Then
(2.3) is expressible as the following trilinear models [38], [39],
[45]–[53]:
for and (2.4)
where the superscript denotes the complex conjugate. Some
existing works [37]–[39], [45]–[53] have more extensively con-
sidered the most general trilinear models
for and (2.5)
where and represent the element of
, the element of and the element
of , respectively. Obviously, if the condition
is satisﬁed, then (2.5) reduces to (2.4). Hence, (2.4) can
be seen as a constrained form of (2.5).
Let be the th row of matrix and the three-order tensor
consist of for and
. Then by slicing the three-order tensor along , we
have
(2.6)
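The slicing in (2.6) can be illustrated directly: build a three-way array from the trilinear model (2.5) and confirm that each slice along the third index equals A diag(c_k) B^T. A small NumPy sketch with illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, K = 5, 3, 4
A = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
B = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
C = rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))

# Trilinear model (2.5): x_{ijk} = sum_n a_{in} b_{jn} c_{kn}.
X = np.einsum('in,jn,kn->ijk', A, B, C)

# Slice along the third index: X_k = A diag(c_k) B^T for every k.
errs = [np.linalg.norm(X[:, :, k] - A @ np.diag(C[k]) @ B.T) for k in range(K)]
```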
3638 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 59, NO. 8, AUGUST 2011
The so-called k-rank of a matrix A is defined in [37] and denoted by k_A. For illustration convenience, k_A is redefined as follows.
Definition 2.2 [35]: Given a matrix A with rank(A) = r, A contains a collection of r linearly independent columns. Moreover, if every l columns of A are linearly independent but some l + 1 columns are linearly dependent, then A has k-rank k_A = l.
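Definition 2.2 translates into a brute-force check: the k-rank is the largest l such that every choice of l columns is linearly independent. The sketch below is exponential in the number of columns, so it is only meant for tiny illustrative matrices:

```python
import numpy as np
from itertools import combinations

def k_rank(A, tol=1e-10):
    """Largest l such that EVERY set of l columns of A is linearly independent."""
    n = A.shape[1]
    for l in range(n, 0, -1):
        if all(np.linalg.matrix_rank(A[:, list(cols)], tol=tol) == l
               for cols in combinations(range(n), l)):
            return l
    return 0

# Columns 0 and 2 are proportional, so not every pair of columns is
# independent: the k-rank is 1, although the ordinary rank is 2.
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 0.0]])
```

This also illustrates the inequality k_A ≤ rank(A) stated next.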
It is obvious that k_A ≤ rank(A). Kruskal's seminal work [37] and other related works [38], [39], [45]–[53] easily lead to the following identifiability proposition.
Proposition 2.1: Suppose that there exist matrices A, B, C and Ā, B̄, C̄ such that

A diag(c_k) B^T = Ā diag(c̄_k) B̄^T, for k = 1, ..., K. (2.7)

Then, if k_A + k_B + k_C ≥ 2(N + 1), there exist the relations Ā = A P Δ_1, B̄ = B P Δ_2 and C̄ = C P Δ_3, where P is a permutation matrix, and the three diagonal matrices Δ_1, Δ_2 and Δ_3 satisfy Δ_1 Δ_2 Δ_3 = I.
Note that if, as in most BSS algorithms, the mixture matrix is assumed to have full rank, then k_A = k_B = N and the identifiability condition reduces to k_C ≥ 2. This implies that the number of the required eigenmatrices must be greater than or equal to 2. If the number of the required eigenmatrices is 2, then it can be verified that the condition holds provided that no two columns of C are proportional to each other. Moreover, when the mixture matrix does not have full rank, a stronger condition on k_C is necessary; since k_C is bounded by the number of eigenmatrices, more eigenmatrices must then be used.
The above proposition shows that the BSS problem (2.1) can
be equivalently described by the trilinear models. This fact will
be exploited in the development of the cost functions and algo-
rithms for extracting the mixture matrix.
III. COST FUNCTIONS AND ALGORITHM
Here, in order to illustrate the efficiency of the proposed algorithm, the computational complexity of three related algorithms is analyzed in detail in Appendix A and is summarized as follows. Let MDN denote the number of multiplications and divisions. The per-iteration MDN of the well-known ACDC algorithm established by Yeredor [21] and of the QDIAG algorithm developed by Vollgraf and Obermayer [23] are derived in Appendix A. Each iterative step of the Fast Frobenius DIAGonalization (FFDIAG) algorithm developed in [22] has the lowest computational complexity, but the FFDIAG algorithm cannot usually be applied to complex-valued BSS problems.
According to Proposition 2.1, the conventional subspace fitting criteria [40], [45]–[53] are given by

(3.1)

where each parameter matrix in the fitted set is diagonal. Although the above criteria can be conveniently solved by the well-known ALS methods [38], [39], [45]–[53], they do not take into account the special structure in (2.1). If the special structure of (2.1) is exploited, we can define a novel symmetric subspace fitting criterion as follows:
(3.2)
It is shown by Proposition 2.1 that once the optimal values of the two factor matrices are obtained, either of them can be taken as an estimate of the mixture matrix. It is easy to verify that the above cost function satisfies the following symmetric relation:
(3.3)
Remark 3.1: Since the criterion (A.1) given in Appendix A is a quartic function with respect to the mixture matrix, it cannot be easily solved even though it maintains the structure of (2.1). The criterion (3.1) can be easily solved but does not maintain the structure of (2.1). In contrast, the criterion (3.2) is not only a tri-quadratic function like the criteria (3.1), but also exploits the special structure in (2.1). So the criterion (3.2) achieves a good tradeoff between the criterion (A.1) and the criterion (3.1).
Remark 3.2: The cost function (3.2) can be seen as a symmetric parallel version of the criterion (4.5) in [35]. Interestingly, the minimization of the cost function (3.2) can be conducted by an algorithm similar to the BIA in [35]. Moreover, the fact that the cost function (3.2) satisfies the symmetric relation (3.3) leads to the s-BIA.
Since the cost function (3.2) is quadratic with respect to each set of independent variables, its gradient with respect to any one of the matrix variables can be derived by the methods in [42]. First, let the two factor matrices be fixed; the parameters in the diagonal matrices are then computed by minimizing (3.2). With suitable intermediate quantities defined, the cost function (3.2) can be expanded into
(3.4)
4.
FENG et al.: BIA FOR EXTRACTING INDEPENDENT COMPONENTS FROM ARRAY SIGNALS 3639
Letting the gradient of the expanded cost function with respect to the diagonal parameters be equal to zero, we have
(3.5)
Set
(3.6a)
(3.6b)
where ⊙ represents Hadamard's (element-wise) product. It is easy to show that the elements of the matrices in (3.6) are the corresponding inner-product terms. Moreover, it is verified in Appendix B that if the two factor matrices are nonsingular, then the system matrix in (3.6a) is positive definite. Accordingly, (3.5) can be rewritten in matrix form as
for (3.7a)
for (3.7b)
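The role of the Hadamard product in (3.6) can be illustrated on the generic least-squares subproblem: with the two factor matrices U and V fixed, minimizing ||R − U D V^H||_F^2 over a diagonal D leads to normal equations whose system matrix is the Hadamard product of the two Gram matrices. Since the paper's exact symbols are not reproduced in this copy, the NumPy sketch below only assumes that generic form:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 4
U = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
V = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
d_true = rng.standard_normal(N) + 1j * rng.standard_normal(N)
R = U @ np.diag(d_true) @ V.conj().T

# Normal equations for the diagonal entries d:
#   Gamma d = g, with Gamma = (U^H U) ⊙ conj(V^H V)  (Hadamard product)
#   and g_n = u_n^H R v_n.
Gamma = (U.conj().T @ U) * np.conj(V.conj().T @ V)
g = np.array([U[:, n].conj() @ R @ V[:, n] for n in range(N)])
d = np.linalg.solve(Gamma, g)

err = np.linalg.norm(d - d_true)
```

Because the data are exact here, the solve recovers the true diagonal; with noisy eigenmatrices the same system gives the least-squares diagonal update.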
Second, the cost function (3.2) can also be changed into

(3.8)

Letting the gradient of this form with respect to the first factor matrix be equal to zero, we have
(3.9)
It follows directly from (3.9) that
(3.10)
In a similar manner, we can get a formulation for computing the other factor matrix.
With (3.6), (3.7), (3.9) and (3.10), we can now establish the s-BIA. The basic procedure of the s-BIA is described as follows.1
Given the initial values of the two factor matrices, compute the diagonal matrices by (3.7). Then repeat the following two steps until convergence:
F1) solve (3.10) for one factor matrix and normalize all of its columns;
F2) solve the corresponding symmetric equation for the other factor matrix.
The s-BIA is presented in detail in Table I, where steps a)–d) in Table I are used to implement step F1) of the s-BIA and steps d)–g) are used to perform step F2) of the s-BIA.
Remark 3.3: Since the most recent factor iterate is usually closer to the solution of (3.9) than the previous one, it is reasonable to expect that the estimates obtained by using the most recent iterate may be better than those gotten from the previous one.
It can be seen from Table I that the computational complexity of the proposed algorithm is lower than that of the ACDC [21] and the QDIAG [23] algorithms. This will also be confirmed by
1The Matlab code of the s-BIA can be downloaded from http://see.xidian.edu.cn/faculty/dzfeng.
TABLE I
SIMULTANEOUS BI-ITERATIVE ALGORITHM (S-BIA)
simulation results to be presented in Section IV. Moreover, a comparison of the s-BIA with the cyclic maximizer [35] shows that the proposed s-BIA can be viewed as a cyclic minimization algorithm.
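Since the exact update formulas of Table I are not reproduced in this copy, the following NumPy sketch only illustrates the alternating structure of the bi-iteration under the assumption that the criterion has the subspace-fitting form sum_k ||R_k − U D_k V^H||_F^2: the diagonal matrices are updated in closed form, each factor is updated by linear least squares, and the roles of the two factors are swapped as in steps F1) and F2). This is a sketch of the idea, not the paper's exact s-BIA:

```python
import numpy as np

rng = np.random.default_rng(3)
N, K = 4, 8
crandn = lambda *s: rng.standard_normal(s) + 1j * rng.standard_normal(s)
A = crandn(N, N)
Ds = [np.diag(crandn(N)) for _ in range(K)]
Rs = [A @ D @ A.conj().T for D in Ds]          # exact eigenmatrices as in (2.1)

def diag_update(R, U, V):
    # Closed-form LS update of one diagonal matrix (assumed generic
    # form of the normal equations, cf. (3.5)-(3.7)).
    Gamma = (U.conj().T @ U) * np.conj(V.conj().T @ V)
    g = np.einsum('in,ij,jn->n', U.conj(), R, V)
    return np.diag(np.linalg.solve(Gamma, g))

def factor_update(Rs, Ds, V):
    # LS update of the left factor with V and all D_k fixed.
    lhs = sum(D @ V.conj().T @ V @ D.conj().T for D in Ds)
    rhs = sum(R @ V @ D.conj().T for R, D in zip(Rs, Ds))
    return rhs @ np.linalg.inv(lhs)

def total_fit(Rs, U, V):
    Ds_hat = [diag_update(R, U, V) for R in Rs]
    return sum(np.linalg.norm(R - U @ D @ V.conj().T) ** 2
               for R, D in zip(Rs, Ds_hat))

U = A + 0.3 * crandn(N, N)                     # initialization near the truth
V = U.copy()
fit0 = total_fit(Rs, U, V)
for _ in range(100):
    Ds_hat = [diag_update(R, U, V) for R in Rs]
    U = factor_update(Rs, Ds_hat, V)           # step F1)
    U /= np.linalg.norm(U, axis=0)             # column normalization
    Ds_hat = [diag_update(R, U, V) for R in Rs]
    # step F2): the same update applied to the conjugate-transposed
    # problem, exploiting a symmetric relation in the spirit of (3.3).
    V = factor_update([R.conj().T for R in Rs],
                      [D.conj().T for D in Ds_hat], U)
fit = total_fit(Rs, U, V)
```

On exact synthetic eigenmatrices, the fit typically decreases by several orders of magnitude within a few dozen sweeps, and either factor then estimates the mixture matrix up to permutation and scaling.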
IV. EXPERIMENTAL RESULTS
In this section, some experimental results are presented to illustrate the performance of the proposed algorithm. For comparison purposes, three performance indexes will be used. The most commonly used performance index is the global rejection level (GRL) [15]

(4.1)

It is easily shown that if the GRL tends towards zero, then the estimated matrix tends to a point in the acceptable solution set, i.e., to the true mixing matrix within a permutation and scaling. The second performance index is the time required for convergence of an algorithm, simply called the convergence time. The third performance index is the signal-to-interference-plus-noise ratio (SINR) of an algorithm, i.e.,

(4.2)

which measures the independence of the separated signals, where the two row vectors involved are formed by the samples of a source signal and its estimate, respectively.
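The two accuracy indexes can be sketched in code. The exact expression (4.1) is not reproduced in this copy, so the GRL below uses one common form of the rejection level for the global (unmixing times mixing) matrix, and the SINR uses a standard projection-based variant; both are assumptions in that sense:

```python
import numpy as np

def grl(G):
    # One common form of the global rejection level for the global matrix
    # G = (estimated unmixing) @ (true mixing); it is zero iff G is a
    # scaled permutation. The paper's exact (4.1) follows [15] and may differ.
    P = np.abs(G) ** 2
    row = (P.sum(axis=1) / P.max(axis=1) - 1.0).sum()
    col = (P.sum(axis=0) / P.max(axis=0) - 1.0).sum()
    return row + col

def sinr_db(s, s_hat):
    # SINR of one separated signal: power of the component of s_hat along
    # the true source s versus the residual power (a standard variant).
    alpha = (s_hat @ s.conj()) / (s @ s.conj())
    e = s_hat - alpha * s
    return 10.0 * np.log10(np.abs(alpha) ** 2 * np.linalg.norm(s) ** 2
                           / np.linalg.norm(e) ** 2)

# A scaled permutation (perfect separation up to the indeterminacies)
# gives GRL = 0.
G = np.array([[0.0, 2.0], [1.5, 0.0]])
```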
In all experiments, if the GRL values associated with successive iterative steps of an algorithm become sufficiently close, then the algorithm is considered to have converged, and the resulting GRL is simply referred to as the convergent GRL. Thus, once this stopping condition is met, an algorithm is stopped. Most interestingly, the GRL is unaltered within a permutation and scaling, which shows that the change of GRL between iterations is a good stopping criterion. In particular, all the algorithms s-BIA, ALS [38], [39], ACDC [21], SS-fitting [40] and QDIAG [23] start from the same initial value, which is estimated by the ESPRIT method [44] using the first two eigenmatrices.
Experiment 1: The first experiment is intended to compare the s-BIA, ALS [38], [39], ACDC [21], SS-fitting [40], QDIAG [23] and ESPRIT [25], [26], [44] methods. Here, a set of 11 × 11 matrices is used, generated according to

(4.3)

where the mixing matrix, the error matrices and the diagonal matrices are complex-valued matrices whose elements are normally distributed with mean zero. Furthermore, each column of the mixing matrix has been normalized to unit norm.
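The data model (4.3) can be emulated as follows: form exact products A Λ_k A^H and add complex Gaussian error matrices scaled to a target power ratio. The number of matrices, the dB convention for NER, and the scaling direction are assumptions, since the exact formula is not reproduced in this copy:

```python
import numpy as np

rng = np.random.default_rng(4)

def crandn(*shape):
    # Circular complex Gaussian entries, zero mean, unit variance.
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

def make_eigenmatrices(N=11, K=12, enr_db=5.0):
    """Synthetic eigenmatrices R_k = A Lam_k A^H + E_k in the spirit of (4.3).
    enr_db is an ASSUMED eigenmatrix-to-error power ratio in dB."""
    A = crandn(N, N)
    A /= np.linalg.norm(A, axis=0)          # unit-norm columns, as in the text
    Rs = []
    for _ in range(K):
        R0 = A @ np.diag(crandn(N)) @ A.conj().T
        E = crandn(N, N)
        E *= np.linalg.norm(R0) / np.linalg.norm(E) * 10.0 ** (-enr_db / 20.0)
        Rs.append(R0 + E)
    return A, Rs

A, Rs = make_eigenmatrices()
```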
Fig. 1. (a) Convergence curve of CF versus the iteration number obtained by s-BIA for NER = 5 dB. (b) Convergence curves of GRL versus the iteration number obtained by the five compared algorithms for NER = 5 dB.

Fig. 1 shows the convergence curves of these algorithms for NER = 5 dB, and in particular, Fig. 1(a) displays the convergence curve of the cost function (CF) (3.2) obtained by the s-BIA versus the iteration number. The curves of the convergent GRL and convergence time versus NER are shown in Figs. 2 and 3, respectively,
where 100 independent trials are conducted for each NER. For the SS-fitting algorithm, which is the Gauss-Newton iterative method for nonunitary joint diagonalization, the maximal step size, i.e., unity, gives the fastest convergence. Generally, it converges in two steps when the step size equals unity and the initial point is sufficiently close to a solution. However, as discussed in [40], the maximal step size usually leads to divergence, especially with an ill-conditioned initialization matrix. Therefore, we chose the step size as 0.4 to guarantee convergence in each independent experiment. We further plot the curves of the convergent GRL and convergence time versus the number of eigenmatrices, as shown in Figs. 4 and 5, respectively. It is seen that better performance is achieved at the expense of using more eigenmatrices and longer computation time.
Fig. 2. Curves of convergent GRL versus NER.
Fig. 3. Curves of convergence time versus NER.

Experiment 2: Given five zero-mean independent complex-valued source signals, they are mixed by the mixing matrix and corrupted by the additive noise matrix to generate the received signal, where the noise samples are generated by Matlab software. In this experiment, 100 independent trials are also conducted. In each trial, the mixing matrix is produced in the same way as in Experiment 1. Twenty-seven correlation matrices of the noisy mixed signals with different time lags have been diagonalized. Let

(4.4)

Under the condition that the number of samples is equal to 500, the curve of the convergent GRL versus SNR and the curve of SINR versus SNR are shown in Figs. 6 and 7, respectively. These results show that the proposed s-BIA gives more accurate estimates than the other algorithms.
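The lagged correlation matrices used as eigenmatrices can be estimated from the received samples in the usual second-order-statistics fashion; in the sketch below the lag set, the coloring filter, and the estimator normalization are illustrative assumptions:

```python
import numpy as np

def lagged_correlations(X, lags):
    """Sample correlation matrices R(tau) ~ (1/T) sum_t x(t) x(t+tau)^H
    of a multichannel signal X (channels x samples)."""
    T = X.shape[1]
    return [X[:, :T - tau] @ X[:, tau:].conj().T / (T - tau) for tau in lags]

rng = np.random.default_rng(5)
# Toy example: 3 colored sources mixed by a 3x3 matrix, 500 samples.
S = rng.standard_normal((3, 503))
S = S[:, :-3] + 0.9 * S[:, 1:-2] + 0.5 * S[:, 2:-1]    # simple coloring filter
A = rng.standard_normal((3, 3))
X = A @ S
Rs = lagged_correlations(X, lags=range(1, 28))          # 27 lagged matrices
```

Because R(tau) = A R_s(tau) A^H and R_s(tau) is (approximately) diagonal for independent sources, these matrices fit the eigenmatrix model (2.1).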
Fig. 4. Curves of convergent GRL versus number of eigenmatrices for NER = 5 dB.
Fig. 5. Curves of convergence time versus number of eigenmatrices for NER = 5 dB.
Fig. 6. Curves of convergent GRL versus SNR using 500 samples.
Fig. 7. Curves of SINR versus SNR using 500 samples.
Fig. 8. Signal waveforms, where (a) and (b) show three speech signals and five received signals, respectively.

Experiment 3: The third experiment is concerned with separating three speech sources, as shown in Fig. 8(a) and (b). With five sensors, the mixture matrix is given by the array steering matrix, which denotes the response of a five-element uniform linear array with half-wavelength sensor spacing. The 27 correlation matrices with different time lags are taken as the eigenmatrices. In Table II, the SINR, the convergent GRL, and the convergence time are listed to further compare the performances of these algorithms through 100 independent trials. In each trial, let the directions of arrival of the three sources
TABLE II
COMPARISON OF THE PERFORMANCES OF THESE ALGORITHMS, WHERE SNR = 10 DB
Fig. 9. Curves of SINR versus SNR, where three speech signals and five sensors are used.

be randomly generated, and the SNR is 10 dB. Since the mixture matrix is tall, the dimension-reduction process [35] is first performed before the joint diagonalization. Fig. 9 displays the SINR versus the SNR, where the experiment parameters are identical to those of Table II. These figures clearly show that the proposed algorithm can achieve good separation performance.
V. CONCLUSION
In this paper, by taking advantage of the special structure
of the NUJD problem, we have developed a new version of
BIA for minimizing the introduced symmetric tri-quadratic cost
function which is used as extracting independent components
from a set of eigenmatrices. The simulation results have con-
ﬁrmed the comparatively good performance of the proposed
s-BIA algorithm.
APPENDIX A
COMPUTATIONAL COMPLEXITY OF THREE
RELATIVE ALGORITHMS
When the mixture matrix is not unitary, nonorthogonal joint diagonalization [20]–[24] is needed in BSS. The least squares cost function used by Yeredor [21] is expressible as

(A.1)

To solve the above optimization problem, Yeredor [21] proposed the well-known ACDC algorithm, which is composed of the AC phase and the DC phase. The computational complexity of the ACDC algorithm is approximately analyzed in [21]. For the purpose of making a better comparison of the computational complexity of the related algorithms, we will make a careful analysis of the MDN required by the ACDC
algorithm. In the AC phase, computing the required intermediate quantities involves a certain MDN, and the AC phase is further divided into sub-steps. In all the substeps, computing the required matrices and calculating the largest eigenvectors involve additional MDN, since computing the largest eigenvector of each non-Hermitian matrix takes a number of MDN of its own [41]. In the DC phase, computing the system matrices takes further MDN, where ⊙ denotes Hadamard's (element-wise) product, and solving all the resulting linear equations takes additional MDN. Summing these contributions, and ignoring all the MDN of lower order, gives the per-iteration MDN of the ACDC algorithm.
In [23], the following well-known contrast cost function was used:

(A.2)

To solve the optimization problem (A.2), Vollgraf and Obermayer [23] developed the QDIAG algorithm, where each step is divided into
substeps. The computational complexity of the QDIAG algorithm is approximately analyzed in [23]. Similarly, for the purpose of making a better comparison of computational complexity, we will make a careful analysis of the MDN of the QDIAG algorithm. In the QDIAG algorithm, one needs to compute the following matrix:

(A.3)

which can be changed into

(A.4)

Computing the required intermediate matrices involves a certain MDN, and furthermore, according to matrix theory [41], computing the smallest eigenvectors involves additional MDN. Summing these contributions, and ignoring all the MDN of lower order, gives the per-iteration MDN of the QDIAG algorithm.
A version of the QDIAG algorithm with lower computational complexity was also proposed in [23]. However, if the number of eigenmatrices is large enough, then such a version may not be suitable and so will not be considered in the following.
More recently, a fast algorithm, the Fast Frobenius DIAGonalization (FFDIAG), for solving the contrast function (A.2) was developed in [22]. The important advantage of the FFDIAG algorithm is that it has the lowest computational complexity. The FFDIAG algorithm exploits a multiplicative update rule

(A.5)

where the invertibility of the nonorthogonal diagonalizer is ensured by a constraint condition. Each iteration step [22] adopts the following linearization approximation:

(A.6)

where the two parts denote the diagonal and off-diagonal parts of the update, respectively. Ignoring already diagonalized terms and inserting (A.5) into (A.2) yields

(A.7)

Most interestingly, when the data are real, two terms in (A.7) can be combined into a single term, which leads to the FFDIAG [22]. However, in the complex-valued case, the two terms in (A.7) cannot usually be combined into a single term, so the FFDIAG algorithm cannot generally be used for complex-valued BSS problems. For this reason, and because this paper focuses on complex-valued BSS problems, experimental results of the FFDIAG algorithm are not given in Section IV.
APPENDIX B
POSITIVE DEFINITENESS OF MATRIX
It is directly deduced from (3.6a) that

(B.1)

With suitable notation, the two positive definite Gram matrices involved can be expanded as

(B.2a)
(B.2b)

Inserting (B.2) into (B.1) gives rise to

(B.3)

It is seen from the above formula that the matrix in question is, at least, positive semi-definite.
By contradiction, we can further show that it is positive definite. Suppose that it is only positive semi-definite, i.e., there exists a nonzero vector such that

(B.4)
Substituting (B.3) into (B.4) yields

(B.5)

This shows that

(B.6)

Equations (B.6) can be written in vector form as

(B.7)

Since the factor matrices have full rank, there must be

(B.8)

With suitable changes of variables, (B.8) can be converted into

(B.9)

Since the factor matrix is of full rank, there must, at least, be a nonzero entry in the corresponding vector, and (B.9) then yields a contradiction. Hence, the matrix must be positive definite.
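The positive-definiteness argument can be checked numerically: for nonsingular factor matrices, the Hadamard product of one Gram matrix with the conjugate of the other has strictly positive eigenvalues (Schur product theorem plus nonsingularity). The generic form of the system matrix is assumed here, since the exact (3.6a) is not reproduced in this copy:

```python
import numpy as np

rng = np.random.default_rng(6)
N = 6
U = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
V = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))

# Gamma = (U^H U) ⊙ conj(V^H V): the Hadamard product of a positive
# definite matrix with the conjugate of a positive definite matrix is
# Hermitian and positive semi-definite by the Schur product theorem;
# nonsingularity of U and V makes it strictly positive definite.
Gamma = (U.conj().T @ U) * np.conj(V.conj().T @ V)
eigmin = np.linalg.eigvalsh(Gamma).min()
```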
ACKNOWLEDGMENT
The authors would like to sincerely thank the Associate Editor, Prof. S. Shahbazpanahi, and the anonymous reviewers for their valuable comments and suggestions, which have significantly improved the manuscript.
REFERENCES
[1] C. Jutten and J. Herault, “Blind separation of sources, part I: An adaptive algorithm based on neuromimetic architecture,” Signal Process., vol. 24, no. 1, pp. 1–10, Jul. 1991.
[2] P. Comon, “Independent component analysis: A new concept?,” Signal
Process., vol. 36, no. 3, pp. 287–314, Apr. 1994.
[3] A. Cichocki, R. Unbehauen, and E. Rummert, “Robust learning algo-
rithm for blind separation of signals,” Electron. Lett., vol. 30, no. 17,
pp. 1386–1387, Aug. 1994.
[4] A. Cichocki and R. Unbehauen, “Robust neural networks with on-line
learning for blind identiﬁcation and blind separation of sources,” IEEE
Trans. Circuits Syst. I, vol. 43, no. 11, pp. 894–906, Nov. 1996.
[5] S. Amari, A. Cichocki, and H. H. Yang, , D. Touretzky, M. Mozer, and
M. Hasselmo, Eds., “A new learning algorithm for blind sources sep-
aration,” in Advances in Neural Information Processing. Cambridge,
MA: MIT Press, 1996, vol. 8, pp. 757–763.
[6] S. Amari and A. Cichocki, “Adaptive blind signal processing—Neural
network approaches,” Proc. IEEE, vol. 86, no. 10, pp. 2026–2048, Oct.
1998.
[7] S. A. Cruces-Alvarez, A. Cichocki, and S. Amari, “From blind signal
extraction to blind instantaneous signal separation: Contrast function,
algorithms, and stability,” IEEE Trans. Neural Netw., vol. 15, no. 4, pp.
859–873, Jul. 2004.
[8] A. J. Bell and T. J. Sejnowski, “An information-maximization approach to blind separation and blind deconvolution,” Neural Comput., vol. 7, no. 6, pp. 1129–1159, Nov. 1995.
[9] A. Cichocki and S. Amari, Adaptive Blind Signal and Image Processing: Learning Algorithms and Applications. West Sussex, U.K.: Wiley, 2003.
[10] J. Karhunen, E. Oja, L. Wang, R. Vigario, and J. Joutsensalo, “A class
of neural networks for independent component analysis,” IEEE Trans.
Neural Netw., vol. 8, no. 3, pp. 486–504, May 1997.
[11] A. Hyvarinen and E. Oja, “Independent component analysis by general
nonlinear Hebbian like learning rules,” Signal Process., vol. 64, no. 3,
pp. 301–313, Feb. 1998.
[12] J. F. Cardoso and B. Laheld, “Equivariant adaptive source separation,”
IEEE Trans. Signal Process., vol. 44, no. 12, pp. 3017–3030, Dec. 1996.
[13] J. F. Cardoso, “Source separation using higher order moments,” in
Proc. IEEE Int. Conf. Acoust., Speech, Signal Process. (ICASSP),
Glasgow, U.K., May 1989, vol. 4, pp. 2109–2112.
[14] J. F. Cardoso and A. Souloumiac, “Blind signal separation for
non-Gaussian signals,” Proc. Inst. Electr. Eng.—F, vol. 140, no. 6, pp.
362–370, Dec. 1993.
[15] A. Belouchrani, K. Abed-Meraim, J. F. Cardoso, and E. Moulines, “A
blind source separation technique using second-order statistics,” IEEE
Trans. Signal Process., vol. 45, no. 2, pp. 434–444, Feb. 1997.
[16] E. Moreau, “A generalization of joint-diagonalization contrast function
for source separation,” IEEE Trans. Signal Process., vol. 49, no. 3, pp.
530–541, Mar. 2001.
[17] M. Wax and J. Sheinvald, “A least squares approach to joint diagonal-
ization,” IEEE Signal Process. Lett., vol. 4, no. 2, pp. 52–53, Feb. 1997.
[18] M. Wax and Y. Anu, “A least squares approach to blind beamforming,”
IEEE Trans. Signal Process., vol. 47, no. 1, pp. 231–234, Jan. 1999.
[19] D. T. Pham, “Joint approximate diagonalization of positive Hermitian
matrices,” SIAM J. Matrix Anal. Appl., vol. 22, no. 4, pp. 1136–1152,
2001.
[20] M. Joho and K. Rahbar, “Joint diagonalization of correlation matrices
by using Newton methods with application to blind signal separation,”
in Proc. Sensor Array Multichannel Signal Process. Workshop, Aug.
2002, pp. 403–407.
[21] A. Yeredor, “Non-orthogonal joint diagonalization in the least-squares
sense with application in blind signal separation,” IEEE Trans. Signal
Process., vol. 50, no. 7, pp. 1545–1553, Jul. 2002.
[22] A. Ziehe, P. Laskov, G. Nolte, and K.-R. Müller, “A fast algorithm
for joint diagonalization with non-orthogonal transformations and its
application to blind signal separation,” J Mach. Learn. Res., vol. 5, pp.
777–800, 2004.
[23] R. Vollgraf and K. Obermayer, “Quadratic optimization for simulta-
neous matrix diagonalization,” IEEE Trans. Signal Process., vol. 54,
no. 9, pp. 3270–3278, Sep. 2006.
[24] X.-L. Li and X.-D. Zhang, “Nonorthogonal joint diagonalization free
of degenerate solution,” IEEE Trans. Signal Process., vol. 55, no. 5, pp.
1803–1814, May 2007.
[25] L. Tong, R.-W. Liu, V. C. Soon, and Y.-F. Huang, “Indeterminacy and
identiﬁability of blind source separation,” IEEE Trans. Circuits Syst.,
vol. 38, no. 5, pp. 499–509, May 1991.
[26] C. Chang, Z. Ding, S. F. Yau, and F. H. Y. Chan, “A matrix-pencil
approach to blind separation of colored nonstationary signals,” IEEE
Trans. Signal Process., vol. 48, no. 3, pp. 900–907, Mar. 2000.
[27] L. Tong, V. C. Soon, Y.-F. Huang, and R. Liu, “AMUSE: A new blind source separation algorithm,” in Proc. IEEE ISCAS, New Orleans, LA, May 1990, vol. 3, pp. 1784–1787.
[28] D.-Z. Feng, X.-D. Zhang, and Z. Bao, “An efﬁcient multistage decom-
position approach for independent components,” Signal Process., vol.
83, no. 1, pp. 181–197, Jan. 2003.
[29] D.-Z. Feng, Z. Bao, H.-Q. Zhang, and X.-D. Zhang, “An efﬁcient al-
gorithm for extracting independent components one by one,” in Proc.
16th IFIP World Comput. Congr. (ICSP), Beijing, China, Aug. 2000,
vol. 1, pp. 421–424.
[30] L. Parra and C. Spence, “Convolutive blind separation of nonstationary sources,” IEEE Trans. Speech Audio Process., vol. 8, no. 3, pp. 320–327, May 2000.
[31] K. Rahbar and J. P. A. Reilly, “Frequency domain method for blind
source separation of convolutive audio mixtures,” IEEE Trans. Speech
Audio Process., vol. 13, no. 5, pp. 832–844, Sep. 2005.
[32] A. Belouchrani and A. Cichocki, “Robust whitening procedure in blind source separation context,” Electron. Lett., vol. 36, no. 24, pp. 2050–2051, Nov. 2000.
[33] M. Wax and T. Kailath, “Detection of signals by information theoretic criteria,” IEEE Trans. Acoust., Speech, Signal Process., vol. 33, no. 2, pp. 387–392, Apr. 1985.
[34] A. Belouchrani, M. G. Amin, and K. Abed-Meraim, “Direction ﬁnding
in correlated noise ﬁelds based on joint block-diagonalization of spatio-
temporal correlation matrices,” IEEE Signal Process. Lett., vol. 4, no.
9, pp. 266–268, Sep. 1997.
[35] D.-Z. Feng, W. X. Zheng, and A. Cichocki, “Matrix group algorithm
via improved whitening process for extracting statistically independent
sources from array signals,” IEEE Trans. Signal Process., vol. 55, no.
3, pp. 962–977, Mar. 2007.
[36] W. W. Wang, S. Sanei, and J. A. Chambers, “Penalty function-based
joint diagonalization approach for convolutive blind separation of non-
stationary sources,” IEEE Trans. Signal Process., vol. 53, no. 5, pp.
1654–1669, May 2005.
[37] J. B. Kruskal, “Three-way arrays: Rank and uniqueness of trilinear de-
composition, with application to arithmetic complexity and statistics,”
Linear Algebra Its Appl., vol. 18, pp. 95–138, 1977.
[38] N. D. Sidiropoulos, G. B. Giannakis, and R. Bro, “Deterministic wave-
form-preserving blind separation of DS-CDMA signals using an an-
tenna array,” in Proc. 9th IEEE Workshop Stat. Signal Array Process.,
Pocono Manor, PA, Aug. 1998, pp. 304–307.
[39] N. D. Sidiropoulos, G. B. Giannakis, and R. Bro, “Blind PARAFAC
receivers for DS-CDMA systems,” IEEE Trans. Signal Process., vol.
48, no. 3, pp. 810–823, Mar. 2000.
[40] A.-J. van der Veen, “Joint diagonalization via subspace ﬁtting
techniques,” in Proc. IEEE Int. Conf. Acoust., Speech, Signal
Process. (ICASSP), Salt Lake City, UT, May 2001, vol. 5, pp.
2773–2776.
[41] G. H. Golub and C. F. Van Loan, Matrix Computations, 2nd ed. Baltimore, MD: The Johns Hopkins Univ. Press, 1989.
[42] J. R. Magnus and H. Neudecker, Matrix Differential Calculus With Ap-
plications in Statistics and Econometrics, 2nd ed. New York: Wiley,
1991.
[43] J. P. LaSalle, The Stability of Dynamical Systems. Philadelphia, PA:
SIAM Press, 1976.
[44] R. Roy and T. Kailath, “ESPRIT-estimation of signal parameters via
rotational invariance techniques,” IEEE Trans. Acoust., Speech, Signal
Process., vol. 37, no. 7, pp. 984–995, Jul. 1989.
[45] P. Comon and B. Mourrain, “Decomposition of quantics in sums of
powers of linear forms,” Signal Process., vol. 53, no. 2–3, pp. 93–107,
1996.
[46] P. Comon, “Tensor decompositions,” in Mathematics in Signal
Process., V. J. G. McWhirter and I. K. Proudler, Eds. Oxford, U.K.:
Oxford Univ. Press, 2001.
[47] P. Comon, “Canonical tensor decompositions,” presented at the ARCC
Workshop Tensor Decomposit., Amer. Inst. Math., Palo Alto, CA, Jul.
2004.
[48] L. De Lathauwer, “A link between the canonical decomposition in mul-
tilinear algebra and simultaneous matrix diagonalization,” SIAM J. Ma-
trix Anal. Appl., vol. 28, no. 3, pp. 642–666, 2006.
[49] L. De Lathauwer, “Simultaneous matrix diagonalization: The over-
complete case,” in Proc. 4th Int. Symp. Independent Component Anal.
(ICA), Nara, Japan, Apr. 2003, pp. 821–825.
[50] L. De Lathauwer, B. De Moor, and J. Vandewalle, “A multilinear sin-
gular value decomposition,” SIAM J. Matrix Anal. Appl., vol. 21, no.
4, pp. 1253–1278, 2000.
[51] L. De Lathauwer, B. De Moor, and J. Vandewalle, “Independent
component analysis and (simultaneous) third-order tensor diagonal-
ization,” IEEE Trans. Signal Process., vol. 49, no. 10, pp. 2262–2271,
Oct. 2001.
[52] L. De Lathauwer and J. Castaing, “Blind identification of underdetermined mixtures by simultaneous matrix diagonalization,” IEEE Trans. Signal Process., vol. 56, no. 3, pp. 1096–1105, Mar. 2008.
[53] L. De Lathauwer, J. Castaing, and J.-F. Cardoso, “Fourth-order cumulant-based blind identification of underdetermined mixtures,” IEEE Trans. Signal Process., vol. 55, no. 6, pp. 2965–2973, Jun. 2007.
Da-Zheng Feng (M’02) was born in December 1959.
He graduated from Xi’an University of Technology,
Xi’an, China, in 1982. He received the M.S. degree
from Xi’an Jiaotong University, China, in 1986 and
the Ph.D. degree in electronic engineering from Xi-
dian University, Xi’an, China, in 1996.
From May 1996 to May 1998, he was a Postdoc-
toral Research Afﬁliate and an Associate Professor at
Xi’an Jiaotong University, China. From May 1998 to
June 2000, he was an Associate Professor at Xidian
University. Since July 2000, he has been a Professor
at Xidian University. He has published about 80 journal papers. His current re-
search interests include signal processing, brain information processing, image
processing, radar techniques, and blind equalization.
Hua Zhang was born in January 1982. She received
the B.S. degree in electronic engineering in 2003, the
M.S. degree in 2006 and the Ph.D. degree in 2010
both in signal and information processing, all from
Xidian University, Xi’an, China.
After graduation, she joined Huawei Technologies Company, where she is currently a Software Engineer for WCDMA system design with the Software Development Department at the Shanghai R&D Center, Shanghai, China. Her main research interests were in blind signal processing and array signal processing.
Wei Xing Zheng (M’93–SM’98) received the Ph.D. degree in electrical engineering from Southeast University, China, in 1989.
He has held various faculty/research/visiting
positions at Southeast University, China; Imperial
College of Science, Technology and Medicine, U.K.;
University of Western Australia, Curtin University
of Technology, Australia; Munich University of
Technology, Germany; University of Virginia; and
University of California-Davis. Currently, he holds the rank of Full Professor at the University of Western Sydney, Australia.
Dr. Zheng has served as an Associate Editor for ﬁve ﬂagship journals: the
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I: FUNDAMENTAL THEORY
AND APPLICATIONS (2002–2004), the IEEE TRANSACTIONS ON AUTOMATIC
CONTROL (2004–2007), the IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS
II: EXPRESS BRIEFS (2008–2009), IEEE SIGNAL PROCESSING LETTERS
(2007–2010), and Automatica (2011–present). He was a Guest Editor of the
Special Issue on Blind Signal Processing and Its Applications for the IEEE
TRANSACTIONS ON CIRCUITS AND SYSTEMS I: REGULAR PAPERS (2009–2010).
He has also served as the Chair of IEEE Circuits and Systems Society’s
Technical Committee on Neural Systems and Applications and as the Chair of
the IEEE Circuits and Systems Society’s Technical Committee on Blind Signal
Processing.