II. PROBLEM FORMULATION

In this section, we formulate the noise-corrupted and noise-free leader selection problems in consensus networks and make connections to the sensor selection problem in sensor networks. Furthermore, we establish an equivalence between the noise-corrupted and noise-free leader selection problems when all leaders use arbitrarily large feedback gains on their own states.

A. Leader selection problem in consensus networks

We consider n single-integrators

  ψ̇_i = u_i + w_i,  i = 1, …, n

where ψ_i is the scalar state, u_i is the control input, and w_i is a white stochastic disturbance with zero mean and unit variance. A node is a follower if it uses only relative information exchange with its neighbors to form its control action,

  u_i = − Σ_{j ∈ N_i} (ψ_i − ψ_j).

A node is a leader if, in addition to relative information exchange with its neighbors, it also has access to its own state,

  u_i = − Σ_{j ∈ N_i} (ψ_i − ψ_j) − κ_i ψ_i.

Here, κ_i is a positive number and N_i is the set of all nodes that node i communicates with. The communication network is modeled by a connected, undirected graph; thus, the graph Laplacian L is a symmetric positive semidefinite matrix with a single eigenvalue at zero and the corresponding eigenvector 1 of all ones. A state-space representation of the leader-follower consensus network is given by

  ψ̇ = − (L + D_κ D_x) ψ + w    (1)

where

  D_κ := diag(κ),  D_x := diag(x)

are diagonal matrices formed from the vectors κ = [κ_1 ⋯ κ_n]^T and x = [x_1 ⋯ x_n]^T. Here, x is a Boolean-valued vector with its ith entry x_i ∈ {0, 1}, indicating that node i is a leader if x_i = 1.

October 12, 2012 DRAFT
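The model (1) is easy to exercise numerically. The sketch below builds the Laplacian of a hypothetical 4-node path graph (the graph, gains, and leader choice are our own illustrative picks, not from the paper), forms L + D_κ D_x, and checks that the resulting closed-loop system is stable.

```python
import numpy as np

# Hypothetical 4-node path graph: edges (1,2), (2,3), (3,4) in 0-based indices.
n = 4
edges = [(0, 1), (1, 2), (2, 3)]
L = np.zeros((n, n))
for i, j in edges:
    L[i, i] += 1.0
    L[j, j] += 1.0
    L[i, j] -= 1.0
    L[j, i] -= 1.0

# L is the Laplacian of a connected undirected graph: L 1 = 0.
assert np.allclose(L @ np.ones(n), 0.0)

kappa = np.ones(n)                  # positive feedback gains kappa_i
x = np.array([1.0, 0.0, 0.0, 0.0])  # Boolean vector: first node is the leader
Lbar = L + np.diag(kappa * x)       # L + D_kappa D_x from (1)

# With at least one leader, L + D_kappa D_x is positive definite, so
# psi_dot = -(L + D_kappa D_x) psi + w is a stable stochastic system.
print(np.all(np.linalg.eigvalsh(Lbar) > 0))   # True
```

With no leaders (x = 0), Lbar reduces to L, which is singular; the leader's absolute feedback is what "grounds" the Laplacian and makes the steady-state variance finite.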
Thus, the problem of selecting leaders that minimize the steady-state variance of ψ_f amounts to

  minimize_x  J_f(x) = trace(L_f^{-1})
  subject to  x_i ∈ {0, 1},  i = 1, …, n
              1^T x = N_l.    (LS2)

As in (LS1), the Boolean constraints x_i ∈ {0, 1} are nonconvex. Furthermore, as we demonstrate in Section III, the objective function J_f in (LS2) is a nonconvex function of x.

In what follows, we establish equivalence between the noise-corrupted and noise-free leader selection problems (LS1) and (LS2) when all leaders use arbitrarily large feedback gains on their own states. Partitioning ψ into the state of the leader nodes ψ_l and the state of the follower nodes ψ_f brings system (1) to the following form¹

  [ψ̇_l; ψ̇_f] = − [L_l + D_{κ_l}, L_0; L_0^T, L_f] [ψ_l; ψ_f] + [w_l; w_f].    (3)

Here, D_{κ_l} := diag(κ_l) with κ_l ∈ R^{N_l} being the vector of feedback gains associated with the leaders. Taking the trace of the inverse of the 2 × 2 block matrix in (3) yields

  J = trace( L_f^{-1} + L_f^{-1} L_0^T S_{κ_l}^{-1} L_0 L_f^{-1} + S_{κ_l}^{-1} )

where

  S_{κ_l} = L_l + D_{κ_l} − L_0 L_f^{-1} L_0^T

is the Schur complement of L_f. Since S_{κ_l}^{-1} vanishes as each component of the vector κ_l goes to infinity, the variance of the network is solely determined by the variance of the followers, J_f = trace(L_f^{-1}), where L_f is the reduced Laplacian matrix obtained by removing from L all rows and columns corresponding to the leaders.

¹ Note that D_x does not appear in (3), since the partition is performed with respect to the indices of the 0 and 1 diagonal elements of D_x.
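The large-gain limit above can be checked numerically. The sketch below, on a hypothetical 4-node path graph of our own choosing, shows the gap between the total variance J = trace((L + κ D_x)^{-1}) and the follower-only variance J_f = trace(L_f^{-1}) shrinking toward zero as the leader gain κ grows.

```python
import numpy as np

# Illustrative 4-node path graph (our own example, not the paper's).
n = 4
L = np.zeros((n, n))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    L[i, i] += 1.0; L[j, j] += 1.0
    L[i, j] -= 1.0; L[j, i] -= 1.0

leaders, followers = [0], [1, 2, 3]
Lf = L[np.ix_(followers, followers)]   # reduced Laplacian L_f
Jf = np.trace(np.linalg.inv(Lf))

x = np.zeros(n)
x[leaders] = 1.0
gaps = []
for kappa in (1.0, 1e2, 1e6):
    J = np.trace(np.linalg.inv(L + kappa * np.diag(x)))
    gaps.append(J - Jf)

# The gap (the leaders' own variance plus the Schur-complement terms)
# stays positive and shrinks monotonically as kappa increases.
print(gaps[0] > gaps[1] > gaps[2] > 0)   # True
```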
B. Connections to the sensor selection problem

The problem of estimating a vector ψ ∈ R^n from m relative measurements corrupted by additive white noise,

  y_ij = ψ_i − ψ_j + w_ij,

arises in distributed localization in sensor networks. We consider the simplest scenario in which all ψ_i's are scalar-valued, with ψ_i denoting the position of sensor i; see [10], [11] for vector-valued localization problems. Let I_r denote the index set of the m pairs of distinct nodes between which the relative measurements are taken, and let e_ij ∈ R^n have 1 and −1 at its ith and jth elements, respectively, and zero everywhere else. Then,

  y_ij = e_ij^T ψ + w_ij,  (i, j) ∈ I_r

or, equivalently, in matrix form,

  y_r = E_r^T ψ + w_r    (4)

where y_r is the vector of relative measurements and E_r ∈ R^{n×m} is the matrix whose columns are determined by e_ij for (i, j) ∈ I_r. Since ψ + a1 for any scalar a results in the same y_r, with relative measurements the position vector ψ can be determined only up to an additive constant. This can also be verified by noting that E_r^T 1 = 0.

Suppose that N_l sensors can be equipped with GPS devices that allow them to measure their absolute positions,

  y_a = E_a^T ψ + E_a^T w_a

where E_a ∈ R^{n×N_l} is the matrix whose columns are determined by e_i, the ith unit vector in R^n, for i ∈ I_a, the index set of absolute measurements. Then the vector of all measurements is given by

  [y_r; y_a] = [E_r^T; E_a^T] ψ + [I, 0; 0, E_a^T] [w_r; w_a]    (5)

where w_r and w_a are zero-mean white stochastic disturbances with

  E(w_r w_r^T) = W_r,  E(w_a w_a^T) = W_a,  E(w_r w_a^T) = 0.
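The matrix E_r in (4) is just the incidence matrix of the measurement graph. A minimal sketch, with a hypothetical index set I_r of our own choosing, builds E_r column by column and verifies E_r^T 1 = 0:

```python
import numpy as np

# Hypothetical relative-measurement pairs I_r (our own example).
n = 4
Ir = [(0, 1), (1, 2), (2, 3), (0, 3)]
Er = np.zeros((n, len(Ir)))
for col, (i, j) in enumerate(Ir):
    Er[i, col] = 1.0    # +1 at the ith element of e_ij
    Er[j, col] = -1.0   # -1 at the jth element

# Relative measurements are blind to a common translation of all
# positions: E_r^T 1 = 0, so psi is recoverable only up to a constant.
print(np.allclose(Er.T @ np.ones(n), 0.0))   # True
```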
In Appendix A, we show that the problem of choosing N_l absolute position measurements among n sensors to minimize the variance of the estimation error is equivalent to the noise-corrupted leader selection problem (LS1). Furthermore, when the positions of N_l sensors are known a priori, we show that the problem of assigning N_l sensors to minimize the variance of the estimation error amounts to solving the noise-free leader selection problem (LS2).

III. LINEAR APPROXIMATION AND SOFT CONSTRAINT METHOD: NOISE-FREE LEADERS

In this section, we provide an alternative expression for the objective function J_f in the noise-free leader selection problem (LS2). We use this explicit expression to identify the source of nonconvexity and to suggest an LMI-based convex approximation. We then relax the hard constraint of having exactly N_l leaders in (LS2) by augmenting the objective function J_f with the ℓ1 norm of the optimization variable x. This soft constraint approach yields a parameterized family of optimization problems whose solution provides a tradeoff between the ℓ1 norm of x and the convex approximation of the variance amplification of the network.

A. Explicit expression for the objective function

Since the objective function J_f in (LS2) is not expressed explicitly in terms of the optimization variable x, it is difficult to examine its basic properties (including convexity). We next provide an alternative expression for J_f that allows us to establish lack of convexity and to suggest an LMI-based convex approximation of J_f.

Proposition 1: For networks with at least one leader, the objective function J_f in the noise-free leader selection problem (LS2) can be written as

  J_f = trace(L_f^{-1}) = trace( (I − D_x)(G + D_x ∘ L)^{-1}(I − D_x) )    (6)

where ∘ denotes the elementwise multiplication of matrices, and

  G = (I − D_x) L (I − D_x),  D_x = diag(x),  x_i ∈ {0, 1},  i = 1, …, n.

Furthermore, J_f is a nonconvex function of x over the smallest convex set x_i ∈ [0, 1] that contains the feasible points x_i ∈ {0, 1} for i = 1, …, n.

October 12, 2012 DRAFT
Proof: After an appropriate relabeling of the nodes (as done in (3)), L and D_x can be partitioned conformably into 2 × 2 block matrices,

  L = [L_l, L_0; L_0^T, L_f],  D_x = [I_{N_l×N_l}, 0_{N_l×p}; 0_{p×N_l}, 0_{p×p}],  p := n − N_l

which leads to

  G = [0_{N_l×N_l}, 0_{N_l×p}; 0_{p×N_l}, L_f],  D_x ∘ L = [I_{N_l×N_l} ∘ L_l, 0_{N_l×p}; 0_{p×N_l}, 0_{p×p}]

  G + D_x ∘ L = [I_{N_l×N_l} ∘ L_l, 0_{N_l×p}; 0_{p×N_l}, L_f].

Since I_{N_l×N_l} ∘ L_l is a diagonal matrix with positive diagonal elements, and since the principal submatrix L_f of the Laplacian L is positive definite for connected graphs [1, Lemma 10.36], we have

  G + D_x ∘ L ≻ 0.    (7)

Consequently,

  trace( (I − D_x)(G + D_x ∘ L)^{-1}(I − D_x) ) = trace(L_f^{-1})

which yields the desired result (6).

We next use a counterexample to illustrate the lack of convexity of J_f over x_i ∈ [0, 1]. Let

  L = [1, −1; −1, 1],  D_x = [x_1, 0; 0, x_2]

with x_1 ∈ [0, 1] and x_2 = 1. From

  G + L ∘ D_x = [(1 − x_1)² + x_1, 0; 0, 1] ≻ 0  and  J_f = (1 − x_1)² / ((1 − x_1)² + x_1)

it can be verified that, for x_1 ∈ [0, 1/3], the second derivative of J_f with respect to x_1 is negative (implying that J_f is not convex).

October 12, 2012 DRAFT
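Both claims are easy to spot-check numerically. The sketch below uses our own small examples: it verifies identity (6) on a 5-node ring with two leaders, and reproduces the counterexample by confirming negative curvature of J_f(x_1) on the interval [0, 1/3].

```python
import numpy as np

# Check identity (6) on an illustrative 5-node ring with leaders at the
# first and third nodes (indices 0 and 2).
n = 5
L = np.zeros((n, n))
for i in range(n):
    j = (i + 1) % n
    L[i, i] += 1.0; L[j, j] += 1.0
    L[i, j] -= 1.0; L[j, i] -= 1.0

x = np.array([1.0, 0.0, 1.0, 0.0, 0.0])
Dx, I = np.diag(x), np.eye(n)
G = (I - Dx) @ L @ (I - Dx)
M = G + Dx * L                       # Dx * L is the elementwise product
lhs = np.trace((I - Dx) @ np.linalg.inv(M) @ (I - Dx))

followers = np.where(x == 0)[0]
Lf = L[np.ix_(followers, followers)]
rhs = np.trace(np.linalg.inv(Lf))
print(abs(lhs - rhs) < 1e-10)        # True

# Reproduce the counterexample: the second derivative of
# Jf(x1) = (1 - x1)^2 / ((1 - x1)^2 + x1) is negative on [0, 1/3].
def Jf(x1):
    return (1 - x1) ** 2 / ((1 - x1) ** 2 + x1)

h = 1e-3
curvatures = [(Jf(t + h) - 2 * Jf(t) + Jf(t - h)) / h ** 2
              for t in (0.05, 0.15, 0.25)]
print(all(c < 0 for c in curvatures))   # True
```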
Explicit expression (6), in conjunction with the Schur complement, can be used to convert the minimization of J_f into the following problem

  minimize_{X, x}  trace(X)
  subject to  [X, I − D_x; I − D_x, G + D_x ∘ L] ⪰ 0    (8)

where X ∈ R^{n×n} is a symmetric positive definite matrix. To see this, note that since G + D_x ∘ L ≻ 0, we have

  [X, I − D_x; I − D_x, G + D_x ∘ L] ⪰ 0  ⇔  X ⪰ (I − D_x)(G + D_x ∘ L)^{-1}(I − D_x).

Thus, to minimize trace(X) subject to the inequality constraint, we take X = (I − D_x)(G + D_x ∘ L)^{-1}(I − D_x), which shows equivalence between the objective functions in (8) and in (6). Thus, the noise-free leader selection problem (LS2) can be formulated as

  minimize_{X, x}  trace(X)
  subject to  [X, I − D_x; I − D_x, G + D_x ∘ L] ⪰ 0
              G = (I − D_x) L (I − D_x)
              D_x = diag(x),  1^T x = N_l,  x_i ∈ {0, 1},  i = 1, …, n.    (9)

In addition to the Boolean constraints, the quadratic dependence of G on D_x provides another source of nonconvexity in (9). Thus, in contrast to (LS1), relaxation of the Boolean constraints to x_i ∈ [0, 1] for i = 1, …, n is not enough to guarantee convexity of the optimization problem (9).

B. Linear approximation and soft constraint method

As established in Section III-A, the alternative formulation (9) of the noise-free leader selection problem (LS2) identifies two sources of nonconvexity: the quadratic matrix inequality and the Boolean constraints. In view of this, we use a linearization of the matrix G to approximate the quadratic matrix inequality in (9) with an LMI. Furthermore, instead of imposing Boolean constraints, we augment the objective function with the ℓ1 norm of x. This choice is used as a proxy for obtaining a sparse solution x whose nonzero elements identify the leaders.

October 12, 2012 DRAFT
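The Schur-complement step behind (8) can be spot-checked numerically. In the sketch below (our own example, at a fractional point x from the relaxed set), we form M = G + D_x ∘ L, set X to its minimizing value, and confirm that the 2n × 2n block matrix is positive semidefinite.

```python
import numpy as np

# Illustrative 5-node ring (our own example).
n = 5
L = np.zeros((n, n))
for i in range(n):
    j = (i + 1) % n
    L[i, i] += 1.0; L[j, j] += 1.0
    L[i, j] -= 1.0; L[j, i] -= 1.0

x = np.array([0.7, 0.1, 0.9, 0.0, 0.3])   # a fractional (relaxed) point
Dx, I = np.diag(x), np.eye(n)
G = (I - Dx) @ L @ (I - Dx)
M = G + Dx * L                            # G + Dx o L, positive definite here
X = (I - Dx) @ np.linalg.inv(M) @ (I - Dx)

# With X at its minimizing value, [X, I - Dx; I - Dx, M] >= 0 holds with
# equality in the Schur complement, so the smallest eigenvalue is ~0.
block = np.block([[X, I - Dx], [I - Dx, M]])
print(np.min(np.linalg.eigvalsh(block)) > -1e-8)   # True
```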
The idea of using linearization comes from [31], where a linear approximation of the objective function trace(YZ) at the point (Y_0, Z_0) was considered,

  (1/2) trace(Y_0 Z + Y Z_0).

To design fixed-order output feedback controllers, the authors of [31] minimize trace(Y_0 Z + Y Z_0) with respect to Y and Z, set Y_0 ← Y, Z_0 ← Z, and repeat. Motivated by this iterative scheme, we consider the following linear approximation of G,

  G_0 := (1/2)(I − D_x) L (I − D_{x_0}) + (1/2)(I − D_{x_0}) L (I − D_x)    (10)

where D_{x_0} is our current best estimate of D_x. Replacing G with G_0 leads to an LMI approximation of the quadratic matrix inequality in (9).

In addition to the linearization, we relax the hard constraint 1^T x = N_l for Boolean-valued x with a soft one. This is achieved by augmenting the objective function with the ℓ1 norm of x,

  trace(X) + γ Σ_{i=1}^n |x_i|

where the positive number γ characterizes our emphasis on the sparsity of the vector x. We note that the ℓ1 norm ‖x‖_1 is a widely used proxy for promoting sparsity [32, Chapter 6]. Putting this soft constraint approach and linearization (10) together, we obtain a convex optimization problem

  minimize_{X, x}  trace(X) + γ Σ_{i=1}^n |x_i|
  subject to  [X, I − D_x; I − D_x, G_0 + D_x ∘ L] ⪰ 0
              G_0 = (1/2)(I − D_x) L (I − D_{x_0}) + (1/2)(I − D_{x_0}) L (I − D_x)
              D_x = diag(x)    (11)

which can be solved efficiently for small problems (e.g., n ≤ 30) using standard software such as CVX [33]. For large problems, we develop a customized algorithm in Appendix B.

For a fixed value of γ, we start with D_{x_0} = 0 and solve problem (11) as part of an iterative loop; the solution D_x = diag(x) at every iteration is treated as the current best estimate D_{x_0} for the next iteration.

October 12, 2012 DRAFT
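Two properties of the linearization (10) are worth checking before using it inside an LMI: G_0 is symmetric (so the block constraint in (11) is well posed), and the approximation is exact at the linearization point, i.e., G_0 = G when D_{x_0} = D_x. A minimal sketch on an illustrative 4-node path graph (our own example; we only form G_0 here, we do not solve (11)):

```python
import numpy as np

# Illustrative 4-node path graph (our own example).
n = 4
L = np.zeros((n, n))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    L[i, i] += 1.0; L[j, j] += 1.0
    L[i, j] -= 1.0; L[j, i] -= 1.0

def G_exact(L, x):
    # G = (I - Dx) L (I - Dx)
    M = np.eye(len(x)) - np.diag(x)
    return M @ L @ M

def G_lin(L, x, x0):
    # G0 = (1/2)(I - Dx) L (I - Dx0) + (1/2)(I - Dx0) L (I - Dx), as in (10)
    M = np.eye(len(x)) - np.diag(x)
    M0 = np.eye(len(x0)) - np.diag(x0)
    return 0.5 * (M @ L @ M0 + M0 @ L @ M)

x = np.array([1.0, 0.0, 0.0, 0.0])
x0 = np.array([0.0, 0.0, 0.0, 1.0])    # hypothetical current estimate Dx0

G0 = G_lin(L, x, x0)
print(np.allclose(G0, G0.T))                          # True: G0 is symmetric
print(np.allclose(G_lin(L, x, x), G_exact(L, x)))     # True: exact at Dx0 = Dx
```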
Fig. 1: A small network with 25 nodes [18].

Fig. 2: Performance of the soft constraint method for the network shown in Fig. 1: (a) the number of leaders N_l decreases with γ; (b) the variance of the followers J_f increases with γ; and (c) the tradeoff between N_l and J_f.

TABLE I: Performance comparison of the greedy algorithm and the soft constraint method with the global solution to the noise-free leader selection problem (LS2) for the network shown in Fig. 1.

         global solution           greedy algorithm           soft constraint
  N_l    J_f    leaders            J_f    leaders             J_f    leaders
  1      66.0   13                 66.0   13                  112.0  25
  2      38.4   8, 25              44.8   13, 25              64.0   16, 25
  3      30.0   8, 16, 25          33.3   7, 13, 25           32.1   7, 16, 25
  4      25.3   7, 9, 16, 25       27.4   7, 13, 16, 25       29.4   7, 16, 20, 25
  5      20.7   3, 7, 9, 16, 25    22.2   3, 7, 13, 16, 25    22.6   3, 7, 16, 20, 25

October 12, 2012 DRAFT
Fig. 4: A random network with 100 nodes: (a) the number of leaders N_l decreases with γ; (b) the variance of the followers J_f increases with γ; and (c) the tradeoff curve between N_l and J_f.

Fig. 5: The objective function J_f obtained using the soft constraint method (◦), the greedy algorithm (∗), and the degree heuristics (+) for the random network.

The soft constraint method chooses nodes with both large and small degrees as leaders; in particular, all nodes with degree less than 8 and all nodes with degree greater than 18 are selected.

IV. LOWER AND UPPER BOUNDS ON GLOBAL PERFORMANCE: NOISE-CORRUPTED LEADERS

In contrast to the noise-free leader selection problem (LS2), we next show that the objective function in the noise-corrupted leader selection problem (LS1) is convex. We take advantage of the convexity of J in (LS1) and develop efficient algorithms to compute lower and upper bounds on the globally optimal value J_opt of (LS1). A lower bound results from a convex relaxation of the Boolean constraints in (LS1). Furthermore, upper bounds are obtained using an efficient greedy algorithm and the alternating direction method of multipliers (ADMM). The greedy algorithm selects

October 12, 2012 DRAFT
one leader at a time, which introduces low-rank modifications to the Laplacian matrix. We exploit this feature in conjunction with the matrix inversion lemma to gain computational efficiency. On the other hand, the ADMM algorithm handles the Boolean constraints explicitly by a simple projection onto a discrete nonconvex set. Finally, we provide two examples to illustrate the performance of the developed approach.

Fig. 6: Selection of leaders (•) for the random network example using the soft constraint method in (a) and (c) and using degree heuristics in (b) and (d); (a) N_l = 5, (b) N_l = 5, (c) N_l = 41, (d) N_l = 40.

A. Convex relaxation to obtain a lower bound

Since the objective function J in (LS1) is the composition of a convex function trace(L̄^{-1}) of a positive definite matrix L̄ ≻ 0 with the affine function L̄ = L + D_κ D_x, it follows that J is a convex function of x. By enlarging the Boolean constraint set x_i ∈ {0, 1} to its convex hull

October 12, 2012 DRAFT
x_i ∈ [0, 1] (i.e., the smallest convex set that contains the Boolean constraint set), we obtain a convex relaxation of (LS1)

  minimize_x  J(x) = trace( (L + D_κ D_x)^{-1} )
  subject to  1^T x = N_l,  0 ≤ x_i ≤ 1,  i = 1, …, n.    (CR)

Since we have enlarged the constraint set, the solution x* of the relaxed problem (CR) provides a lower bound on J_opt. However, x* may not provide a selection of N_l leaders, as it may turn out not to be Boolean-valued. If x* is Boolean-valued, then it is the global solution of (LS1).

Fig. 7: The degree distribution of (a) the random network of Section III-C2 and of (b) the 41 leaders selected using the soft constraint method. Note that the soft constraint method chooses all nodes with degree less than 8 and all nodes with degree greater than 18.

Following a similar argument as in Section III-A, the Schur complement can be used to formulate the convex optimization problem (CR) as an SDP,

  minimize_{X, x}  trace(X)
  subject to  [X, I; I, L + D_κ D_x] ⪰ 0
              1^T x = N_l,  0 ≤ x_i ≤ 1,  i = 1, …, n.

For small networks (e.g., n ≤ 30), this problem can be solved efficiently using standard SDP solvers. For large networks, we develop a customized interior point method in Appendix C.
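The greedy upper-bound strategy described at the start of this section can be sketched in a few lines. The version below is our own minimal rendering, assuming unit gains κ_i = 1 and using the matrix inversion lemma (Sherman-Morrison) so that each candidate leader after the first is evaluated via a cheap rank-one update rather than a fresh inversion; the 5-node ring is an illustrative example, not the paper's.

```python
import numpy as np

def greedy_leaders(L, Nl, kappa=1.0):
    """Greedily pick Nl leaders to reduce J(x) = trace((L + kappa*Dx)^{-1})."""
    n = L.shape[0]
    leaders, Ainv = [], None
    for _ in range(Nl):
        best = None
        for i in range(n):
            if i in leaders:
                continue
            if Ainv is None:
                # First leader: L + kappa e_i e_i^T is invertible for
                # connected graphs; invert directly.
                M = L.copy()
                M[i, i] += kappa
                cand = np.linalg.inv(M)
                J = np.trace(cand)
            else:
                # Sherman-Morrison: adding leader i lowers the trace by
                # kappa * ||Ainv[:, i]||^2 / (1 + kappa * Ainv[i, i]).
                denom = 1.0 + kappa * Ainv[i, i]
                J = np.trace(Ainv) - kappa * (Ainv[:, i] @ Ainv[:, i]) / denom
                cand = Ainv - kappa * np.outer(Ainv[:, i], Ainv[i, :]) / denom
            if best is None or J < best[0]:
                best = (J, i, cand)
        J, i, Ainv = best
        leaders.append(i)
    return leaders, J

# Illustrative 5-node ring (our own example).
n = 5
L = np.zeros((n, n))
for i in range(n):
    j = (i + 1) % n
    L[i, i] += 1.0; L[j, j] += 1.0
    L[i, j] -= 1.0; L[j, i] -= 1.0

leaders, J = greedy_leaders(L, 2)
# Consistency check: the rank-one updates match a direct inversion.
x = np.zeros(n)
x[leaders] = 1.0
print(abs(J - np.trace(np.linalg.inv(L + np.diag(x)))) < 1e-8)   # True
```

The per-candidate cost after the first leader is O(n) for the trace decrease (plus O(n²) if the updated inverse is materialized), which is what makes the greedy pass cheap compared with re-inverting for every candidate.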