WIRELESS POSITIONING USING ELLIPSOIDAL CONSTRAINTS




      Giovanni Soldi
      Lund University

Acknowledgements
First, I would like to thank my supervisor at the Department of Mathematical Statistics
at Lund University, Professor Andreas Jakobsson, for his continuous support
and commitment throughout this master’s thesis. Furthermore, I am really grateful
to Qubulus AB for providing me with the data and for giving me the opportunity
of a three-month internship.
    I would also like to thank my supervisor in Milano, Professor Marco Trubian, and
my Erasmus coordinator, Professor Kevin Payne, for their help with all the bureau-
cratic issues. Special thanks go to all the international friends I have met
during these ten months, who made my stay in Sweden more fun.
    Last but not least, I would like to thank my mother and all my family for their
continuous support and encouragement during all these years of study.

Abstract
Since the advent of the global positioning system in 1999, positioning systems have
been used to deliver location-based services (LBs) in outdoor environments. In
recent years, LBs have become of equal interest in indoor environments, in a wide
range of personal and commercial applications. For this reason, recent research has
focused its attention on wireless positioning systems. This master’s thesis presents
a new approach to indoor positioning, based on the notion of separating ellipsoids.
To improve the position estimation, the technique is combined with the A* algorithm,
which is applied to binary maps of the examined buildings in order to take obstacles
such as walls into account.
   The combination of separating ellipsoids and A* seems to promise an improve-
ment over previous algorithms based on probabilistic approaches.
CONTENTS


1 INTRODUCTION
  1.1 Wireless Positioning Systems

2 PROBLEM EXPLANATION AND PREVIOUS WORK
  2.1 Notation
  2.2 Problem explanation
  2.3 Properties of the weight functions wᵢ
  2.4 The kernel-based method
  2.5 A projection-based method

3 A NEW APPROACH
  3.1 Theory
  3.2 Homogeneous Embedding
  3.3 SDP Formulations
  3.4 Classification Rule

4 DATA
  4.1 Plotting the locations on the maps
  4.2 Data
  4.3 Version with one slack per location
  4.4 Iterative version
  4.5 Variance ellipsoids

5 THE A* ALGORITHM
  5.1 The A* algorithm
      5.1.1 How A* works
  5.2 Applied A*

6 ANALYSES
  6.1 Computation of the distance matrix and of the ellipsoids
  6.2 Classification
  6.3 Interpolation Step
  6.4 Results for the first floor of the Hansa Mall
  6.5 Results for the ground floor of the Hansa Mall

7 CONCLUSIONS

BIBLIOGRAPHY
Chapter 1


                                   INTRODUCTION


                                           One must learn by doing the thing;
                                           for though you think you know it
                                           You have no certainty, until you try.
                                                                       Sophocles



1.1   Wireless Positioning Systems
Since the advent of the Global Positioning System (GPS) in 1999 [1], positioning
systems have been used to deliver Location-Based services (LBs) in outdoor
environments. In recent years, LBs have become of equal interest in indoor environ-
ments, in a wide range of personal and commercial applications. These
include location-based network management and security, medicine and health care,
personalized information delivery and context awareness.
    Unfortunately, the coverage of GPS is limited in indoor environments and dense
urban areas. For this reason, recent research has focused on existing wireless
communication infrastructures, such as wireless local area networks (WLANs), as a
complementary technique. A WLAN is characterized by a number of access points
(APs), devices that allow wireless communication based on the IEEE 802.11 standard
and are widespread indoors for Internet access. Since a power-sensing function is
available in every WLAN device, localization using the received signal strength
(RSS) is a relatively cost-effective solution.
    Commonly, the procedure to estimate the position using RSS measurements is
based on a two-step method known as location fingerprinting. Fingerprinting-based
positioning is divided into an offline and an online phase.
  1. In the offline phase (also known as the training phase), the RSS from multiple
     APs at different points in the building is collected by site-surveying and stored
     in a fingerprint database, called a radio map. More specifically, a number
     N of survey locations in the site of interest are chosen and, for each location,
     a number M of RSS measurements is collected and stored to constitute the
     fingerprint of that location. For each location, the sample mean and the
     sample variance of the M measurements are computed.





    Figure 1.1. Example of a building map with the plotted survey locations.


    2. In the online phase, the client’s position is inferred by comparing the
       measured RSS with the previously stored measurements, and the position esti-
       mate is a combination of the locations whose fingerprints most closely match
       the observation. This approach can be viewed as an application of pattern
       recognition.

    There are four key challenges in fingerprinting:

    1. generation of fingerprints,

    2. preprocessing for reducing computational complexity and enhancing accuracy,

    3. selection of APs for use in positioning, and

    4. estimation of the distance between a new RSS observation and the fingerprints.
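The offline phase described above reduces, per location, to computing a sample mean and a sample variance over the M collected measurements. A minimal sketch (hypothetical data layout, not the code used in this thesis):

```python
import numpy as np

def build_radio_map(survey_rss):
    """Offline phase: build a radio map of per-location fingerprints.

    survey_rss: dict mapping a location id to an (L x M) array of RSS
                readings, one row per AP and one column per time sample.
    Returns a dict mapping the location id to (sample mean, sample variance),
    each an L-dimensional vector.
    """
    radio_map = {}
    for loc, measurements in survey_rss.items():
        measurements = np.asarray(measurements, dtype=float)
        mean = measurements.mean(axis=1)        # sample mean over the M samples
        var = measurements.var(axis=1, ddof=1)  # unbiased sample variance
        radio_map[loc] = (mean, var)
    return radio_map

# Toy example: L = 3 APs, M = 4 samples at a single survey location.
radio_map = build_radio_map({0: [[-60, -62, -61, -59],
                                 [-70, -71, -69, -70],
                                 [-80, -82, -81, -79]]})
```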

   When the positioning algorithm is performed on handheld devices, extra care
should be taken due to their constrained resources and, clearly, using all available
APs increases the computational complexity of the algorithm. For this reason, an
AP selection method is needed. The recent work of Kushki et al. [2] offers a real-
time AP selection technique that minimizes the correlation between the selected
APs to reduce the complexity and ensure coverage.
   The contribution of this thesis is twofold:
    • the initial development of a new algorithm based on the separating ellipsoids
      method, in order to obtain a better estimate of the user position compared to
      previous methods;
    • the application of the A* algorithm in order to take into account the previous
      positions of the user and possible obstacles, such as walls, and to reduce the
      computational complexity of the positioning algorithm.

    The second chapter of this report explains the mathematical problem of wireless
positioning and gives a brief overview of the previous work by Kushki et
al. [2] and by Fang et al. [3]. The third and fourth chapters concern the theory
of the separating ellipsoids method and the application of this method to an initial
dataset from the Entré mall in Malmö. The fifth chapter explains the A* algorithm
and how it is used in the context of wireless positioning. Finally, the sixth chapter
presents the results obtained from the analyses carried out.
Chapter 2


         PROBLEM EXPLANATION
           AND PREVIOUS WORK


2.1    Notation
The following notation will be used throughout the report:

    • L : number of APs

    • N : number of survey locations in the radio map

    • p : point in 2-D Cartesian space

    • r : measured RSS vector

    • F(pᵢ) : fingerprint record at the ith location of the radio map

    • A : Multiple Discriminant Analysis (MDA) projection matrix

    • D : number of discriminative components (DCs) generated after the MDA
      projection

    • q : RSS vector projected by the projection matrix A

    • F̂(pᵢ) : projected fingerprint record at the ith location of the radio map

    • Φᵢ : computed ellipsoid for location i

    • Dist(i, k) : real distance between location i and location k


2.2    Problem explanation
One of the fundamental problems in location fingerprinting is to produce a position
estimate using training information on a discrete grid of training points when a new


RSS observation is received. That is, we seek a function g such that

             g : ℝᴸ × ℝ² × ⋯ × ℝ² → ℝ²   (with N copies of ℝ²),
             g : (r, p₁, . . . , p_N) ↦ p̂,                              (2.2.1)

where p₁, . . . , p_N are the Cartesian coordinates (x, y) of the N survey locations with
respect to a predefined reference point (0, 0). If the function g is restricted to the
class of linear functions, the problem is reduced to determining a set of weights such
that

             p̂ = Σ_{i=1}^{N} w(r, F(pᵢ)) pᵢ,                            (2.2.2)

where F(pᵢ) = [rᵢ(1), . . . , rᵢ(M)] is an L × M matrix defined as

                      ⎛ rᵢ¹(1)  . . .  rᵢ¹(M) ⎞
             F(pᵢ) =  ⎜   ⋮      ⋱       ⋮    ⎟                         (2.2.3)
                      ⎝ rᵢᴸ(1)  . . .  rᵢᴸ(M) ⎠

The columns of the fingerprint matrix are RSS vectors rᵢ(t) = [rᵢ¹(t), . . . , rᵢᴸ(t)]ᵀ
that contain the readings from the L APs at time t in the ith survey location pᵢ. For
convenience, the weights w(r, F(pᵢ)) can also be denoted by wᵢ.
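Once the weights are available, the estimator (2.2.2) is a simple weighted centroid. A minimal sketch, assuming the weights have already been computed and normalized:

```python
import numpy as np

def weighted_centroid(weights, locations):
    """Position estimate of eq. (2.2.2): p_hat = sum_i w_i * p_i.

    weights:   length-N array of non-negative weights summing to one.
    locations: (N x 2) array of survey-location coordinates p_i.
    """
    weights = np.asarray(weights, dtype=float)
    locations = np.asarray(locations, dtype=float)
    return weights @ locations

# With normalized weights, the estimate lies in the convex hull of the p_i.
p_hat = weighted_centroid([0.5, 0.25, 0.25],
                          [[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
```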

2.3     Properties of the weight functions wᵢ
The weights wᵢ in the estimation function (2.2.2) are required to be decreasing
functions of the distance between an observation vector and the training records.
That is, survey points whose training records closely match the observations should
receive a higher weight. In particular, the functions w(r, F(pᵢ)) should satisfy the
following properties:

   1. Σ_{i=1}^{N} w(r, F(pᵢ)) = 1, so that the estimated position belongs to the convex
      hull defined by the set of survey positions. This can be achieved by including
      a normalization term in (2.2.2).

   2. w(r, F(pᵢ)) is a monotonically decreasing function in both arguments with
      respect to a distance measure d(r, F(pⱼ)):

             d(r, F(pᵢ)) ≥ d(r′, F(pᵢ)) ⇒ w(r, F(pᵢ)) ≤ w(r′, F(pᵢ)),   (2.3.1)
             d(r, F(pᵢ)) ≥ d(r, F(pⱼ)) ⇒ w(r, F(pᵢ)) ≤ w(r, F(pⱼ)).     (2.3.2)

      The inequality (2.3.1) means that, if an RSS measurement r′ is closer than
      another RSS measurement r to the training records of a location i, then the
      survey point i should receive a higher weight for the measurement r′ than
      for the measurement r.


      The inequality (2.3.2) means that, if an RSS measurement r is further from
      the training measurements of a location i than from those of another location
      j, then the survey site i should receive a lower weight than location j.

2.4    The kernel-based method
The method developed by Kushki et al. [2] uses kernel functions to estimate the
weights wᵢ. In particular, a non-linear mapping φ : r ∈ ℝᵈ → φ(r) ∈ ℱ is used to
map the input to a higher (possibly infinite) dimensional space ℱ, where the weight
calculations take place. Once mapped, the weights are calculated in the following
way:

             w(r, F(pᵢ)) = (1/M) Σ_{t=1}^{M} ⟨φ(r), φ(rᵢ(t))⟩ / (‖φ(r)‖ ‖φ(rᵢ(t))‖),   (2.4.1)

where ⟨·, ·⟩ denotes the scalar product in ℱ. At first glance, the calculation of
weights in a possibly infinite dimensional space may seem computationally in-
tractable. Fortunately, the kernel trick can be used to calculate the inner products
in ℱ without the need for an explicit evaluation of the mapping φ: since the training
data enter the weight calculations only through inner products, each inner product
in ℱ can be replaced by a kernel evaluation on the input vectors. In the WLAN
context, the kernel is a function k : ℝᴸ × ℝᴸ → ℝ such that k(r, r′) = ⟨φ(r), φ(r′)⟩.
The weight function w(r, F(pᵢ)) then becomes

             w(r, F(pᵢ)) = (1/M) Σ_{t=1}^{M} k(r, rᵢ(t)) / √(k(r, r) k(rᵢ(t), rᵢ(t))).   (2.4.2)

Mercer’s theorem guarantees the correspondence between a kernel function and
an inner product in a feature space ℱ, given that the kernel is a positive definite
function. Moreover, the weights w(r, F(pᵢ)) satisfy the properties (2.3.1) and (2.3.2)
with the distance d defined as

             d(r, F(pᵢ)) = ‖ φ(r)/‖φ(r)‖ − (1/M) Σ_{t=1}^{M} φ(rᵢ(t))/‖φ(rᵢ(t))‖ ‖.   (2.4.3)

In fact, we have that

             w(r, F(pᵢ)) = −(1/2) (d(r, F(pᵢ))² − C),                   (2.4.4)

for a constant C. One of the kernels proposed in the article by Kushki et al. [2] is
the Gaussian kernel, for which the weights become

             w(r, F(pᵢ)) = (1/M) Σ_{t=1}^{M} exp(−‖r − rᵢ(t)‖² / (2σ²)).   (2.4.5)

One drawback is that the computational cost of evaluating the Gaussian-kernel
weights grows linearly with the number N of survey locations.
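As an illustrative sketch of the Gaussian-kernel weights (2.4.5) combined with the normalization of property 1 (not the thesis implementation; the kernel width σ below is an arbitrary choice):

```python
import numpy as np

def gaussian_kernel_weights(r, fingerprints, sigma=4.0):
    """Weights w_i from eq. (2.4.5), normalized to sum to one.

    r:            length-L RSS observation.
    fingerprints: list of (L x M_i) arrays, one per survey location.
    sigma:        kernel width (a tuning parameter; 4.0 is arbitrary here).
    """
    r = np.asarray(r, dtype=float)
    raw = []
    for F in fingerprints:
        F = np.asarray(F, dtype=float)
        d2 = ((F - r[:, None]) ** 2).sum(axis=0)      # ||r - r_i(t)||^2 per sample
        raw.append(np.exp(-d2 / (2 * sigma ** 2)).mean())
    raw = np.array(raw)
    return raw / raw.sum()                            # property 1: weights sum to 1

# Toy example: 2 APs, two survey locations with 3 samples each.
w = gaussian_kernel_weights([-60.0, -70.0],
                            [np.array([[-60, -61, -59], [-70, -71, -69]]),
                             np.array([[-80, -81, -79], [-50, -51, -49]])])
# The first location matches the observation closely and receives most of the weight.
```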

2.5     A projection-based method
The method proposed by Fang et al. [3] is based on the calculation of a projection
matrix A that maps the RSS measurements into a feature space of lower dimension,
where they are uncorrelated, so that only the most discriminative information is
taken into account.
    More specifically, when AP selection techniques are applied to reduce the com-
putational cost, an additional binary decision vector v = [0, 1, 0, . . . , 0]ᵀ ∈ ℝᴸ is
required to indicate which APs are retained. If the number of non-zero components
is D, the dimension of the RSS measurement r is reduced from L to D.
    Unlike the zero-one weighting (binary decision) in the selection of APs, the
projection-based system reduces the dimension from L to D by combining APs with
a discriminative projection A. The projection matrix A and the number D are
determined by Multiple Discriminant Analysis (MDA).
    The MDA criterion has been widely applied to the problem of multiple-class
classification. In our context, the classes can be viewed as the training measure-
ments at the different reference locations of a fingerprinting problem. The objective
of MDA is to find the components that are useful for discriminating between them.
After the MDA projection, the generated components, named discriminative
components (DCs), carry the most discriminative information, ranked by infor-
mation quantity. Assuming that r ∈ ℝᴸ is the RSS measurement, the projection-
based system extracts the required DCs as follows:

             q = Aᵀ r,                                                  (2.5.1)

where q ∈ ℝᴰ and A ∈ ℝᴸˣᴰ is the MDA-optimized projection matrix. The value of
D represents the number of required DCs for the location system and can be
determined by a system threshold according to the application.
    In classical MDA, two scatter matrices, the between-class matrix S_B and the
within-class matrix S_W, are defined to quantify the quality of the projection:

             S_W = Σ_{i=1}^{N} Σ_{t=1}^{M} (rᵢ(t) − ūᵢ)(rᵢ(t) − ūᵢ)ᵀ,   (2.5.2)

             S_B = Σ_{i=1}^{N} M (ūᵢ − ū)(ūᵢ − ū)ᵀ,                     (2.5.3)

where ūᵢ = (1/M) Σ_{t=1}^{M} rᵢ(t) is the mean of the training measurements at the
ith location and ū = (1/(N M)) Σ_{i=1}^{N} Σ_{t=1}^{M} rᵢ(t) is the global mean of
all the measurements.
    S_W measures the closeness of the samples within the locations, whereas S_B
measures the separation between locations. An optimal projection would maximize




Figure 2.1. Example of the projected RSS measurement space for three different
locations (three DCs).


the between-class measure and minimize the within-class measure. One way to
achieve this is to maximize the following ratio:

             Â_MDA = arg max_A |Aᵀ S_B A| / |Aᵀ S_W A|.                 (2.5.4)

The numerator of (2.5.4) measures the separation among different locations, that
is, the RSS spatial separation (between-class), whereas the denominator measures
the compactness of each location, that is, the RSS temporal variation (within-class).
As a result, the optimal weighting matrix A satisfying (2.5.4) leads to the best
separation between the reference locations, since the spatial separation is maximized
while the temporal variation is minimized. This way, the generated DCs in the
projected space have low temporal variation but high spatial separation, and thus
provide as much discriminating information as possible.
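A standard way to solve (2.5.4), not spelled out in the text, is via the generalized eigenvalue problem S_B a = λ S_W a, taking the eigenvectors with the D largest eigenvalues as the columns of A. The sketch below also builds S_W and S_B from eqs. (2.5.2)–(2.5.3), assuming the survey data are arranged as an N × M × L array:

```python
import numpy as np
from scipy.linalg import eigh

def scatter_matrices(samples):
    """S_W and S_B of eqs. (2.5.2)-(2.5.3).

    samples: (N x M x L) array with samples[i, t] = r_i(t).
    """
    N, M, L = samples.shape
    loc_means = samples.mean(axis=1)                    # u_i for each location
    global_mean = samples.reshape(-1, L).mean(axis=0)   # u over all N*M samples
    centered = samples - loc_means[:, None, :]
    S_W = np.einsum('itl,itm->lm', centered, centered)  # sum of outer products
    diff = loc_means - global_mean
    S_B = M * diff.T @ diff
    return S_W, S_B

def mda_projection(S_B, S_W, D):
    """Columns of A = generalized eigenvectors of S_B a = lam * S_W a
    with the D largest eigenvalues (S_W must be positive definite)."""
    lam, V = eigh(S_B, S_W)       # eigenvalues returned in ascending order
    return V[:, ::-1][:, :D]      # keep the D leading eigenvectors
```

In practice S_W can be close to singular when few samples are available; a small ridge term added to S_W is a common safeguard.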
    Once the projection matrix A and the number of desired DCs, D, have been
obtained, a probabilistic approach is used to derive the weighting functions wᵢ.
    First, the observed RSS signal r and the fingerprint matrix F(pᵢ) are projected
by the projection matrix A:

             q = Aᵀ r,                                                  (2.5.5)

             F̂(pᵢ) = [Aᵀ rᵢ(1), . . . , Aᵀ rᵢ(M)]                       (2.5.6)

                      ⎛ qᵢ¹(1)  . . .  qᵢ¹(M) ⎞
                    = ⎜   ⋮      ⋱       ⋮    ⎟                         (2.5.7)
                      ⎝ qᵢᴰ(1)  . . .  qᵢᴰ(M) ⎠

For i = 1, . . . , N , we have

             wᵢ = P(q | F̂(pᵢ)) / Σ_{j=1}^{N} P(q | F̂(pⱼ)),              (2.5.8)

where P(q | F̂(pᵢ)) indicates the likelihood value of the location pᵢ, given the ob-
servation q = [q¹, . . . , qᴰ]ᵀ. The likelihood produced by the Gaussian model is
formulated as

             P(q | F̂(pᵢ)) = Σ_{d=1}^{D} (1/√(2π s̃ᵢ(d))) exp(−(qᵈ − ũᵢ(d))² / (2 s̃ᵢ(d))),   (2.5.9)

    where ũᵢ(d) and s̃ᵢ(d) are calculated in the following way:

             ũᵢ(d) = (1/M) Σ_{t=1}^{M} qᵢᵈ(t),                          (2.5.10)

             s̃ᵢ(d) = (1/M) Σ_{t=1}^{M} (qᵢᵈ(t) − ũᵢ(d))².               (2.5.11)

The final position can then be estimated by

             p̂ = Σ_{i=1}^{N} wᵢ pᵢ
               = Σ_{i=1}^{N} [ P(q | F̂(pᵢ)) / Σ_{j=1}^{N} P(q | F̂(pⱼ)) ] pᵢ.   (2.5.12)
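Eqs. (2.5.8)–(2.5.12) can be sketched as follows, taking the likelihood (2.5.9) exactly as written (a sum over the D dimensions) and assuming the projected means and variances ũᵢ(d), s̃ᵢ(d) are precomputed:

```python
import numpy as np

def likelihood_weights(q, means, variances):
    """Weights w_i of eq. (2.5.8) using the likelihood of eq. (2.5.9).

    q:         length-D projected RSS observation.
    means:     (N x D) array with means[i, d] = u_i(d).
    variances: (N x D) array with variances[i, d] = s_i(d).
    """
    q = np.asarray(q, dtype=float)
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    dens = np.exp(-(q - means) ** 2 / (2 * variances)) / np.sqrt(2 * np.pi * variances)
    likelihood = dens.sum(axis=1)          # sum over the D DCs, as in (2.5.9)
    return likelihood / likelihood.sum()   # normalization of (2.5.8)

def projected_position_estimate(q, means, variances, locations):
    """Final estimate of eq. (2.5.12)."""
    return likelihood_weights(q, means, variances) @ np.asarray(locations, dtype=float)
```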
Chapter 3


                             A NEW APPROACH


The calculation of the weights wᵢ using the Gaussian likelihood presented in the
previous chapter can be seen as a statistical learning problem. In other words,
given a number N of classes (in our specific case, the N survey locations) and a new
point x (in our context, a new RSS measurement r), the likelihood that x belongs to
class i is calculated for each class i = 1, . . . , N. We then search for the maximum of
these likelihoods, and the class realizing the maximum is taken as the class
to which the new point x belongs. In this context, the method of separating
ellipsoids [4] might constitute a promising alternative.

3.1    Theory
Suppose that one has a number N of distinct classes and a number M of points
(xᵢ, cᵢ), where cᵢ denotes the class to which the ith point belongs. Fixing a class k,
with k = 1, . . . , N, one would like to enclose the points of this class in an ellipsoid,
in order to obtain the best separation ratio from all the points of the other classes.
A way to express this is to ask for two separating ellipsoids that share the same
center and axis directions, where the second one is larger by a factor ρₖ, which
is subsequently called the separation ratio. The inner ellipsoid encloses all labeled
points xᵢ with cᵢ = k, while all the remaining points lie outside the outer one.
The higher the value of ρₖ, the better the separation. Figure 3.1 shows 10 randomly
generated classes, each with 50 points, for which the separating ellipsoids have been
computed and the separation ratios ρₖ, k = 1, . . . , 10, have been maximized.
     An ellipsoid in ℝᵈ can be parametrized by its center μ and a symmetric, positive
semidefinite matrix P that determines its shape and size:

             ε_d(μ, P) = {x ∈ ℝᵈ | (x − μ)′ P (x − μ) ≤ 1},             (3.1.1)

where ′ denotes the transpose. The ellipsoid is degenerate if P is singular. A scaled
concentric ellipsoid with the same shape can be obtained by dividing the matrix P
by a scalar ρ > 0, and the scaled ellipsoid has the equivalent representation

             ε_d(μ, P/ρ) = {x ∈ ℝᵈ | (x − μ)′ P (x − μ) ≤ ρ},           (3.1.2)





Figure 3.1. Ten different classes randomly generated, each with 50 points. For these
classes, the separating ellipsoids have been computed and the separation ratios ρₖ,
k = 1, . . . , 10, have been maximized.

where the ratio √ρ is the ratio between the lengths of the corresponding semimajor
axes of the two concentric ellipsoids. Suppose we are given the labeled training set
{(xᵢ, cᵢ)}_{i=1}^{M}. For each class k ∈ {1, . . . , N}, the associated problem is to find
the ellipsoids ε_d(μₖ, Pₖ) and ε_d(μₖ, Pₖ/ρₖ) such that ρₖ is maximized while satisfying
the constraints

             xᵢ ∈ ε_d(μₖ, Pₖ),        ∀ i : cᵢ = k,
             xᵢ ∉ ε_d(μₖ, Pₖ/ρₖ),     ∀ i : cᵢ ≠ k.                     (3.1.3)

    In other words, the problem is

             maximize     ρₖ
             subject to   (xᵢ − μₖ)′ Pₖ (xᵢ − μₖ) ≤ 1,    ∀ i : cᵢ = k,
                          (xᵢ − μₖ)′ Pₖ (xᵢ − μₖ) ≥ ρₖ,   ∀ i : cᵢ ≠ k,
                          Pₖ ⪰ 0,
                          ρₖ ≥ 1,                                       (3.1.4)

where the optimization variables are μₖ, Pₖ and ρₖ, and Pₖ ⪰ 0 denotes the constraint
that Pₖ must be a positive semidefinite matrix. This problem is always feasible, and
the patterns of class k are separable from all the others using ellipsoids if and only
if the optimal ρₖ > 1.
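As a small numerical illustration of the ellipsoid parametrization (3.1.1)–(3.1.2) underlying these constraints (a sketch, not thesis code):

```python
import numpy as np

def in_ellipsoid(x, mu, P, rho=1.0):
    """True if x lies in the scaled ellipsoid {x : (x - mu)' P (x - mu) <= rho}."""
    d = np.asarray(x, dtype=float) - np.asarray(mu, dtype=float)
    return float(d @ P @ d) <= rho

# Axis-aligned ellipse with semi-axes 2 and 1: P = diag(1/4, 1), mu = 0.
P = np.diag([0.25, 1.0])
mu = np.zeros(2)
# (3, 0) lies outside the unit ellipsoid but inside the concentric copy
# scaled by rho = 4, whose semi-axes are sqrt(4) = 2 times longer.
```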


    To handle also the case when the patterns are not strictly separable using ellip-
soids, the idea of soft margins is used: slack variables ξᵢ are introduced for each of
the pattern inclusion or exclusion constraints, and a weighted penalty term is added
to the objective function. The new problem is then formulated in the following way:

             maximize     ρₖ − γ Σᵢ ξᵢ
             subject to   (xᵢ − μₖ)′ Pₖ (xᵢ − μₖ) ≤ 1 + ξᵢ,    ∀ i : cᵢ = k,
                          (xᵢ − μₖ)′ Pₖ (xᵢ − μₖ) ≥ ρₖ − ξᵢ,   ∀ i : cᵢ ≠ k,
                          Pₖ ⪰ 0,   ξᵢ ≥ 0.                             (3.1.5)

Here, γ is a positive weighting parameter. Both problems (3.1.4) and (3.1.5) are
nonconvex, but can be turned into convex optimization problems (specifically, SDPs)
using a homogeneous embedding technique.

3.2    Homogeneous Embedding
Any ellipsoid in ℝᵈ can be viewed as the intersection of a homogeneous ellipsoid
(one centered at the origin) in ℝᵈ⁺¹ with the hyperplane

             H = {z ∈ ℝᵈ⁺¹ | z = (x, 1), x ∈ ℝᵈ}.                       (3.2.1)

A homogeneous ellipsoid in ℝᵈ⁺¹ can be expressed as

             ε_{d+1}(0, Φ) = {z ∈ ℝᵈ⁺¹ | z′ Φ z ≤ 1},                   (3.2.2)

where Φ is a symmetric positive semidefinite matrix. To find the intersection of
ε_{d+1}(0, Φ) with the hyperplane H, the matrix Φ is partitioned as follows:

                   ⎛ P   q ⎞
             Φ  =  ⎜       ⎟ ,                                          (3.2.3)
                   ⎝ q′  r ⎠

where P ∈ ℝᵈˣᵈ, q ∈ ℝᵈ and r ∈ ℝ. If z = (x, 1), then

             z′ Φ z ≤ 1  ⇔  x′ P x + 2 q′ x + r ≤ 1.                    (3.2.4)

Now let

             μ = −P⁻¹ q,
             δ = r − q′ P⁻¹ q;                                          (3.2.5)

then

             z′ Φ z ≤ 1  ⇔  (x − μ)′ P (x − μ) + δ ≤ 1.                 (3.2.6)

    Since P is positive semidefinite, we always have 𝛿 ≀ 1. In addition, whenever 𝑃
is positive definite, we have 0 ≀ 𝛿 < 1, and
                         πœ€π‘‘+1 (0, Ξ¦) ∩ 𝐻 = πœ€π‘‘ (πœ‡, 𝑃/(1 βˆ’ 𝛿)).              (3.2.7)

In this case, πœ€π‘‘+1 (0, Ξ¦) is called a homogeneous embedding of πœ€π‘‘ (πœ‡, 𝑃/(1 βˆ’ 𝛿)).
    Given a nondegenerate ellipsoid πœ€π‘‘ (πœ‡, 𝑃 ), its homogeneous embedding in ℝ𝑑+1 is
nonunique and can be parametrized as πœ€π‘‘+1 (0, Φ𝛿 ), for 0 ≀ 𝛿 < 1, where

                    Φ𝛿 = (   (1 βˆ’ 𝛿)𝑃        βˆ’(1 βˆ’ 𝛿)𝑃 πœ‡    )
                         ( βˆ’(1 βˆ’ 𝛿)πœ‡β€² 𝑃   (1 βˆ’ 𝛿)πœ‡β€² 𝑃 πœ‡ + 𝛿 ).             (3.2.8)

The special case 𝛿 = 0 is called a canonical embedding; in this case πœ€π‘‘+1 (0, Ξ¦0 )
is a degenerate ellipsoid because the matrix Ξ¦0 is singular.
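As a small sanity check of the embedding family (3.2.8), the sketch below builds Φ𝛿 for a one-dimensional ellipsoid (𝑑 = 1, so 𝑃 , πœ‡ and 𝛿 are scalars) and verifies that boundary points of πœ€1 (πœ‡, 𝑃 ) satisfy 𝑧 β€² Φ𝛿 𝑧 = 1 for any 0 ≀ 𝛿 < 1; this is a hypothetical illustration, not code from the thesis.

```python
def embed(mu, P, delta):
    """Homogeneous embedding Phi_delta of the 1-D ellipsoid {x : P*(x-mu)^2 <= 1},
    following (3.2.8) with scalar P and mu."""
    return [[(1 - delta) * P, -(1 - delta) * P * mu],
            [-(1 - delta) * P * mu, (1 - delta) * P * mu * mu + delta]]

def quad_form(Phi, x):
    """z' Phi z for the embedded point z = (x, 1)."""
    z = (x, 1.0)
    return sum(z[i] * Phi[i][j] * z[j] for i in range(2) for j in range(2))

mu, P = 2.0, 0.25                      # ellipsoid 0.25*(x-2)^2 <= 1, i.e. radius 2
for delta in (0.0, 0.3, 0.9):          # every 0 <= delta < 1 embeds the same ellipsoid
    Phi = embed(mu, P, delta)
    boundary = mu + 1.0 / P ** 0.5     # a point with P*(x-mu)^2 = 1
    assert abs(quad_form(Phi, boundary) - 1.0) < 1e-12
    assert quad_form(Phi, mu) <= 1.0   # the centre stays inside
```

The loop confirms that the whole family Φ𝛿 cuts the hyperplane 𝐻 in the same ellipsoid, which is exactly the nonuniqueness stated above.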

3.3     SDP Formulations
In this section, the problem of separating patterns in ℝ𝑑 using homogeneous ellip-
soids in ℝ𝑑+1 is considered. At first, only two classes are considered: the inner
points and the outer ones. For this purpose, the 𝑀 training data are divided into
two sets {π‘₯𝑖 }, 𝑖 = 1, . . . , 𝑀1 , and {𝑦𝑗 }, 𝑗 = 1, . . . , 𝑀2 , where the π‘₯𝑖 are points
belonging to a specific class, the 𝑦𝑗 are all the other points, and 𝑀1 + 𝑀2 = 𝑀 .
    First, the training data must be embedded in the hyperplane 𝐻 by letting

                            π‘Žπ‘– = (π‘₯𝑖 , 1),   𝑖 = 1, . . . , 𝑀1 ,                (3.3.1)
                            𝑏𝑗 = (𝑦𝑗 , 1),   𝑗 = 1, . . . , 𝑀2 .                (3.3.2)
The problem (3.1.5) using homogeneous embedding can be stated as

                          maximize     𝜌
                          subject to   π‘Žβ€²π‘– Ξ¦ π‘Žπ‘– ≀ 1,   𝑖 = 1, . . . , 𝑀1 ,
                                       𝑏′𝑗 Ξ¦ 𝑏𝑗 β‰₯ 𝜌,   𝑗 = 1, . . . , 𝑀2 ,
                                       Ξ¦ ΰͺ° 0,  𝜌 β‰₯ 1,                           (3.3.3)
where the optimization variables are Ξ¦ and 𝜌. This is a convex optimization problem,
more specifically an SDP. Once the problem (3.3.3) is solved, the parametrization
in ℝ𝑑 can be recovered using the transformations in (3.2.4) and (3.2.5). The two
concentric separating ellipsoids are

                          πœ€π‘‘ (πœ‡, 𝑃/(1 βˆ’ 𝛿)),     πœ€π‘‘ (πœ‡, 𝑃/(𝜌(1 βˆ’ 𝛿))).          (3.3.4)
    The following important properties are satisfied:

     βˆ™ If the patterns are separable, then the optimal solution to (3.3.3) is always a
       canonical embedding, i.e., 𝛿 = 0.

     βˆ™ If the patterns are nonseparable, then the optimal solution to (3.3.3) always
       has 𝜌 = 1, and Ξ¦ is degenerate such that 𝛿 = 1.
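To make the recovery step concrete, the sketch below takes a solved pair (Ξ¦, 𝜌) and recovers the two concentric ellipsoids of (3.3.4) via the transformations (3.2.3) and (3.2.5), for 𝑑 = 1 so that the blocks 𝑃 , π‘ž, π‘Ÿ of Ξ¦ are scalars; the numerical values are hypothetical, not the output of an actual SDP solve.

```python
def recover(Phi, rho):
    """From a homogeneous solution Phi (2x2, d = 1) and ratio rho, recover the
    centre mu = -P^{-1} q, delta = r - q' P^{-1} q (3.2.5), and the matrices of
    the two concentric ellipsoids in (3.3.4)."""
    P, q, r = Phi[0][0], Phi[0][1], Phi[1][1]
    mu = -q / P
    delta = r - q * q / P
    inner = P / (1 - delta)            # eps_d(mu, P/(1 - delta))
    outer = P / (rho * (1 - delta))    # eps_d(mu, P/(rho (1 - delta)))
    return mu, inner, outer

# hypothetical solved values: a canonical embedding (delta = 0) with P = 0.25
Phi = [[0.25, -0.5], [-0.5, 1.0]]
mu, inner, outer = recover(Phi, rho=1.5)   # centre mu = 2.0
```

For 𝑑 > 1 the same formulas apply with matrix inverses in place of the scalar divisions.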

     Similarly, the version with slack variables can be formulated as an SDP problem:

                          maximize    𝜌 βˆ’ 𝛾 (βˆ‘π‘– πœ‰π‘– + βˆ‘π‘— πœ‚π‘— )
                          subject to  π‘Žβ€²π‘– Ξ¦ π‘Žπ‘– ≀ 1 + πœ‰π‘– ,   𝑖 = 1, . . . , 𝑀1 ,
                                      𝑏′𝑗 Ξ¦ 𝑏𝑗 β‰₯ 𝜌 βˆ’ πœ‚π‘— ,   𝑗 = 1, . . . , 𝑀2 ,
                                      Ξ¦ ΰͺ° 0,  πœ‰π‘– β‰₯ 0,  πœ‚π‘— β‰₯ 0,                   (3.3.5)

where the optimization variables are Ξ¦, 𝜌, and the slack variables πœ‰π‘– and πœ‚π‘— , while 𝛾 is
a weighting parameter. Once the SDP problem (3.3.5) is solved, the transformations
(3.2.4) and (3.2.5) can be also used to find the separating ellipsoids in ℝ𝑑 .
    Instead of using the weighting parameter 𝛾, the problem (3.3.5) can be parametrized
using the ratio 𝜌. Switching to notation that is explicit for multiclass problems,
given any fixed 𝜌 β‰₯ 1, for each class π‘˜ ∈ {1, . . . , 𝑁 }, the problem to solve is:

                          minimize     βˆ‘π‘– πœ‚π‘–π‘˜
                          subject to   𝑧′𝑖 Ξ¦π‘˜ 𝑧𝑖 ≀ 1 + πœ‚π‘–π‘˜ ,   βˆ€π‘– : 𝑐𝑖 = π‘˜,
                                       𝑧′𝑖 Ξ¦π‘˜ 𝑧𝑖 β‰₯ 𝜌 βˆ’ πœ‚π‘–π‘˜ ,   βˆ€π‘– : 𝑐𝑖 βˆ•= π‘˜,
                                       Ξ¦π‘˜ ΰͺ° 0,
                                       πœ‚π‘–π‘˜ β‰₯ 0,  𝑖 = 1, . . . , 𝑀,              (3.3.6)

where 𝑧𝑖 = (π‘₯𝑖 , 1), and the optimization variables are Ξ¦π‘˜ and πœ‚π‘–π‘˜ .
    In Figures 3.2 and 3.3 the problem (3.3.6) has been solved for ten classes ran-
domly generated, each consisting of 50 points, using respectively values of 𝜌 equal
to 1.5 and 2.

3.4     Classification Rule
After solving, in the training phase, the problem (3.3.6) for each class π‘˜ ∈ {1, . . . , 𝑁 },
let Ξ¦π‘˜βˆ— , π‘˜ = 1, . . . , 𝑁 , denote the obtained optimal solutions. Then, given a new
data point π‘₯ ∈ ℝ𝑑 , we first let 𝑧 = (π‘₯, 1) ∈ ℝ𝑑+1 and compute

                              πœŒπ‘˜ = 𝑧 β€² Ξ¦π‘˜βˆ— 𝑧,     π‘˜ = 1, . . . , 𝑁,              (3.4.1)




Figure 3.2. In this figure the problem (3.3.6) has been solved for ten classes
randomly generated, each consisting of 50 points, using a value of 𝜌 equal to 1.5.




Figure 3.3. In this figure the problem (3.3.6) has been solved for ten classes
randomly generated, each consisting of 50 points, using a value of 𝜌 equal to 2.


then label the data with the class

                                    π‘˜Λ† = arg min 𝑧 β€² Ξ¦π‘˜βˆ— 𝑧.                       (3.4.2)
                                            π‘˜

The quantity πœŒπ‘˜ = 𝑧 β€² Ξ¦π‘˜βˆ— 𝑧, for each π‘˜ = 1, . . . , 𝑁 , can be seen as a distance (ellipsoid
distance) of the point 𝑧 ∈ ℝ𝑑+1 from the class π‘˜. The class π‘˜Λ† that realizes the
minimum of this distance is the class that best represents the point 𝑧.
    The next chapters deal with the application of this concept to the problem of
wireless positioning, as an alternative to the Gaussian method.
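The classification rule amounts to a few lines of code. The sketch below, in pure Python, computes the ellipsoid distances of (3.4.1) and applies the arg-min rule of (3.4.2); the two 2Γ—2 matrices are hypothetical toy embeddings for one-dimensional classes, not matrices trained on the thesis data.

```python
def ellipsoid_distances(z, Phis):
    """rho_k = z' Phi_k z of (3.4.1) for each class matrix Phi_k."""
    n = len(z)
    return [sum(z[i] * Phi[i][j] * z[j] for i in range(n) for j in range(n))
            for Phi in Phis]

def classify(z, Phis):
    """Classification rule (3.4.2): the class minimising the ellipsoid distance."""
    rhos = ellipsoid_distances(z, Phis)
    return min(range(len(rhos)), key=rhos.__getitem__)

# two toy 1-D classes embedded as z = (x, 1): class 0 around x = 0, class 1 around x = 5
Phi0 = [[1.0, 0.0], [0.0, 0.0]]        # z' Phi0 z = x^2
Phi1 = [[1.0, -5.0], [-5.0, 25.0]]     # z' Phi1 z = (x - 5)^2
label = classify((0.5, 1.0), [Phi0, Phi1])   # the point x = 0.5 is closer to class 0
```

In the positioning setting, each class π‘˜ is a survey location and 𝑧 is an embedded RSS measurement.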
Chapter 4


                                                                     DATA


4.1    Plotting the locations on the maps
The first hurdle to overcome concerns the plot of the survey locations on the map.
Given the building map image, the GPS coordinates of both the top-left corner
(NW) of the map and the bottom-right corner of the map (SE) and the GPS coor-
dinates of the locations, we need to remap these coordinates to the ones to be used
on the display.
   For each pair of GPS coordinates (π‘™π‘Žπ‘‘π‘–π‘‘π‘’π‘‘π‘’(β‹…), π‘™π‘œπ‘›π‘”π‘–π‘‘π‘’π‘‘π‘’(β‹…)) we set:



                              π‘₯𝐺𝑃 𝑆 (β‹…) = π‘™π‘œπ‘›π‘”π‘–π‘‘π‘’π‘‘π‘’(β‹…)                         (4.1.1)
                              𝑦𝐺𝑃 𝑆 (β‹…) = π‘™π‘Žπ‘‘π‘–π‘‘π‘’π‘‘π‘’(β‹…)                          (4.1.2)



First, the GPS coordinates (π‘₯𝐺𝑃 𝑆 (β‹…), 𝑦𝐺𝑃 𝑆 (β‹…)) must be converted to a regular coor-
dinate system where the π‘₯ unit distance is no longer dependent on the 𝑦 scale value.
To perform this, it is necessary to fix a common latitude 𝑦𝐢𝑂𝑀 , which should be
the same for all the points that are to be used in the calculation. The needed
transformation is the following:

                       π‘₯π‘ˆ 𝐺𝑃 𝑆 = π‘₯𝐺𝑃 𝑆 β‹… cos(2πœ‹ β‹… 𝑦𝐢𝑂𝑀 /360),                  (4.1.3)
                       π‘¦π‘ˆ 𝐺𝑃 𝑆 = 𝑦𝐺𝑃 𝑆 .                                       (4.1.4)



Applying this transformation, the uniformed GPS coordinates of the 𝑁 π‘Š and 𝑆𝐸
corners are found.
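As a sketch, the transformation (4.1.3)-(4.1.4) can be written as follows; the function name and the sample coordinates (roughly around MalmΓΆ) are illustrative.

```python
import math

def uniform_gps(lat, lon, y_com):
    """Uniformed GPS coordinates (4.1.3)-(4.1.4): scale the longitude by the
    cosine of a common latitude y_COM (in degrees) so that x and y unit
    distances become comparable."""
    x_ugps = lon * math.cos(2 * math.pi * y_com / 360)
    y_ugps = lat
    return x_ugps, y_ugps

# around Malmo (lat ~ 55.6 N) a degree of longitude shrinks to ~57% of a degree of latitude
x, y = uniform_gps(55.605, 13.003, y_com=55.6)
```

The same 𝑦𝐢𝑂𝑀 must be used for every point (corners and survey locations) so that all distances are scaled consistently.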
   Let now 𝑃 be a survey location with uniformed GPS coordinates (π‘₯π‘ˆ 𝐺𝑃 𝑆 (𝑃 ), π‘¦π‘ˆ 𝐺𝑃 𝑆 (𝑃 )).
By a translation, one puts the origin in the 𝑆𝐸 corner and finds the new coordinates





Figure 4.1. A rotation of angle 𝛽 βˆ’ 𝛼 is needed so the rectangle becomes straight.

of the 𝑁 π‘Š corner and of the location 𝑃 as
                          π‘₯π‘ˆ 𝐺𝑃 𝑆𝑑 (𝑃 ) = π‘₯π‘ˆ 𝐺𝑃 𝑆 (𝑃 ) βˆ’ π‘₯π‘ˆ 𝐺𝑃 𝑆 (𝑆𝐸)        (4.1.5)
                          π‘¦π‘ˆ 𝐺𝑃 𝑆𝑑 (𝑃 ) = π‘¦π‘ˆ 𝐺𝑃 𝑆 (𝑃 ) βˆ’ π‘¦π‘ˆ 𝐺𝑃 𝑆 (𝑆𝐸)        (4.1.6)
                       π‘₯π‘ˆ 𝐺𝑃 𝑆𝑑 (𝑁 π‘Š ) = π‘₯π‘ˆ 𝐺𝑃 𝑆 (𝑁 π‘Š ) βˆ’ π‘₯π‘ˆ 𝐺𝑃 𝑆 (𝑆𝐸)       (4.1.7)
                       π‘¦π‘ˆ 𝐺𝑃 𝑆𝑑 (𝑁 π‘Š ) = π‘¦π‘ˆ 𝐺𝑃 𝑆 (𝑁 π‘Š ) βˆ’ π‘¦π‘ˆ 𝐺𝑃 𝑆 (𝑆𝐸)       (4.1.8)
                        π‘₯π‘ˆ 𝐺𝑃 𝑆𝑑 (𝑆𝐸) = π‘₯π‘ˆ 𝐺𝑃 𝑆 (𝑆𝐸) βˆ’ π‘₯π‘ˆ 𝐺𝑃 𝑆 (𝑆𝐸)          (4.1.9)
                        π‘¦π‘ˆ 𝐺𝑃 𝑆𝑑 (𝑆𝐸) = π‘¦π‘ˆ 𝐺𝑃 𝑆 (𝑆𝐸) βˆ’ π‘₯π‘ˆ 𝐺𝑃 𝑆 (𝑆𝐸)         (4.1.10)
                                                                            (4.1.11)
The next steps are suggested by the scheme in Figure 4.1. An anticlockwise rotation
of angle 𝛼 βˆ’ 𝛽 is applied to the points 𝑁 π‘Š and 𝑃 so that the 𝑁 𝐸 and π‘†π‘Š corners
lie on the 𝑦-axis and on the π‘₯-axis, respectively. It is then possible to use these
new coordinates to plot the locations on the image of the map. The angle 𝛽 is
computed using the image of the map. In particular, denoting with 𝑀 the width
and with β„Ž the height of the image of the map in pixels, the length of the diagonal
𝑑 of the image in pixels is given by:

                                   𝑑 = √(𝑀² + β„ŽΒ²).                             (4.1.12)




Figure 4.2. After the rotation we are able to compute the new coordinates for the
𝑁 𝐸 and π‘†π‘Š corners.

The angle 𝛽 is then given by:

                                   𝛽 = arcsin(𝑀/𝑑).                            (4.1.13)

The angle 𝛼 instead is given by:

         𝛼 = arctan((π‘¦π‘ˆ 𝐺𝑃 𝑆 (𝑁 π‘Š ) βˆ’ π‘¦π‘ˆ 𝐺𝑃 𝑆 (𝑆𝐸)) / (π‘₯π‘ˆ 𝐺𝑃 𝑆 (𝑆𝐸) βˆ’ π‘₯π‘ˆ 𝐺𝑃 𝑆 (𝑁 π‘Š ))).   (4.1.14)
An anticlockwise rotation is applied to the points 𝑃 and 𝑁 π‘Š using the following
matrix:

                      𝐴 = ( cos(𝛼 βˆ’ 𝛽)   βˆ’ sin(𝛼 βˆ’ 𝛽) )
                          ( sin(𝛼 βˆ’ 𝛽)     cos(𝛼 βˆ’ 𝛽) ).                       (4.1.15)
The new coordinates for the 𝑁 π‘Š corner and for the point 𝑃 are then given by:

          (π‘₯π‘ˆ 𝐺𝑃 π‘†π‘‘π‘Ÿ (𝑁 π‘Š ), π‘¦π‘ˆ 𝐺𝑃 π‘†π‘‘π‘Ÿ (𝑁 π‘Š ))β€² = 𝐴 β‹… (π‘₯π‘ˆ 𝐺𝑃 𝑆𝑑 (𝑁 π‘Š ), π‘¦π‘ˆ 𝐺𝑃 𝑆𝑑 (𝑁 π‘Š ))β€²,   (4.1.16)
          (π‘₯π‘ˆ 𝐺𝑃 π‘†π‘‘π‘Ÿ (𝑃 ), π‘¦π‘ˆ 𝐺𝑃 π‘†π‘‘π‘Ÿ (𝑃 ))β€² = 𝐴 β‹… (π‘₯π‘ˆ 𝐺𝑃 𝑆𝑑 (𝑃 ), π‘¦π‘ˆ 𝐺𝑃 𝑆𝑑 (𝑃 ))β€².             (4.1.17)




Figure 4.3. Change of coordinates of the location 𝑃 to the domain [0, 1] Γ— [0, 1].
The new coordinates 𝑒(𝑃 ) and 𝑣(𝑃 ) represent respectively the ratio of the width 𝑀
and of the height β„Ž of the map image, where the corresponding pixel is.

   In this new coordinate system it is now easy to find the coordinates of the 𝑁 𝐸
corner and of the π‘†π‘Š corner, as shown in Figure 4.2. In particular:

              (π‘₯π‘ˆ 𝐺𝑃 𝑆𝑑 (𝑁 𝐸), π‘¦π‘ˆ 𝐺𝑃 𝑆𝑑 (𝑁 𝐸))β€² = (0, π‘¦π‘ˆ 𝐺𝑃 𝑆𝑑 (𝑁 π‘Š ))β€²,        (4.1.18)
              (π‘₯π‘ˆ 𝐺𝑃 𝑆𝑑 (π‘†π‘Š ), π‘¦π‘ˆ 𝐺𝑃 𝑆𝑑 (π‘†π‘Š ))β€² = (π‘₯π‘ˆ 𝐺𝑃 𝑆𝑑 (𝑁 π‘Š ), 0)β€².        (4.1.19)
   A final translation of the points is necessary to put the origin in the π‘†π‘Š
corner and get new positive coordinates (π‘₯π‘ˆ 𝐺𝑃 π‘†π‘‘π‘Ÿπ‘‘ (β‹…), π‘¦π‘ˆ 𝐺𝑃 π‘†π‘‘π‘Ÿπ‘‘ (β‹…)) for the corners
𝑆𝐸, 𝑁 π‘Š , 𝑁 𝐸 and the point 𝑃 .
   At this point we calculate:

                        𝑒(𝑃 ) = π‘₯π‘ˆ 𝐺𝑃 π‘†π‘‘π‘Ÿπ‘‘ (𝑃 ) / π‘₯π‘ˆ 𝐺𝑃 π‘†π‘‘π‘Ÿπ‘‘ (𝑆𝐸),              (4.1.21)
                        𝑣(𝑃 ) = π‘¦π‘ˆ 𝐺𝑃 π‘†π‘‘π‘Ÿπ‘‘ (𝑃 ) / π‘¦π‘ˆ 𝐺𝑃 π‘†π‘‘π‘Ÿπ‘‘ (𝑁 π‘Š ).             (4.1.22)


Basically, as shown in Figure 4.3, the initial domain is mapped to [0, 1] Γ— [0, 1];
𝑒(𝑃 ) and 𝑣(𝑃 ) are the coordinates of the location 𝑃 in the new domain, and they
represent the fractions of the width 𝑀 and of the height β„Ž of the map image at
which the corresponding pixel lies. Then, in order to find the corresponding pixels
(𝑝π‘₯, 𝑝𝑦) in the map:

                               𝑝π‘₯ = π‘Ÿπ‘œπ‘’π‘›π‘‘(𝑀 β‹… 𝑒)                                 (4.1.23)
                               𝑝𝑦 = π‘Ÿπ‘œπ‘’π‘›π‘‘(β„Ž β‹… (1 βˆ’ 𝑣))                           (4.1.24)

where π‘Ÿπ‘œπ‘’π‘›π‘‘ means rounding to the nearest integer.




Figure 4.4. Map of the EntrΓ© shopping mall in MalmΓΆ with the 61 plotted locations.

4.2     Data
The data used to carry out the preliminary analyses have been collected in the EntrΓ©
shopping mall in MalmΓΆ. The dataset consists of:
    βˆ™ 70 access points (APs) for the Wireless signal.
    βˆ™ 13 access points (APs) for the GSM signal.
    βˆ™ 61 survey locations.
    βˆ™ 320 measurements of the wireless signal in each of the 61 locations.
    βˆ™ 240 measurements of the GSM signal in each of the 61 locations.
The GSM signal is treated in the same way as the Wifi signal. Since the final
results for the GSM measurements are not as good as those for the Wifi measurements,
this report treats only the Wifi signal.
    The first step is to project the Wifi measurements into a lower dimensional space
by a projection matrix A, as described in Section 2.5, to reduce the number of
APs from 70 to 13 DCs. Since we will perform this step on all the datasets that we
will have to deal with, from now on, to simplify the notation, we use the wording
RSS measurements to denote the projected RSS measurements.
    Furthermore, for each location 𝑖, with 𝑖 = 1, . . . , 61, the signature vectors, that
is, the mean vector and the standard deviation value of the RSS measurements,
have been calculated.


    We initially attempt to apply the method of the separating ellipsoids to the
training measurements for each survey location π‘˜. In other words, fixing a location
π‘˜, we want to construct the ellipsoid that contains the RSS measurements of the
location π‘˜ and keeps outside all the measurements of the sites different from
π‘˜. Given q𝑖 (1), . . . , q𝑖 (𝑀 ), the 𝑀 RSS measurements at the location 𝑖, and letting

                                       z𝑖 (𝑗) = (q𝑖 (𝑗), 1),                    (4.2.1)

the SDP problem to solve is the following:

                           maximize    𝜌
                           subject to  zπ‘˜ (𝑖)β€² Ξ¦π‘˜ zπ‘˜ (𝑖) ≀ 1,   βˆ€π‘– = 1, . . . , 𝑀,
                                       z𝑙 (𝑗)β€² Ξ¦π‘˜ z𝑙 (𝑗) β‰₯ 𝜌,
                                       βˆ€π‘™ = 1, . . . , 𝑁, 𝑙 βˆ•= π‘˜,  βˆ€π‘— = 1, . . . , 𝑀,
                                       Ξ¦π‘˜ ΰͺ° 0,
                                       𝜌 > 1,                                   (4.2.2)

where the optimization variables are Ξ¦π‘˜ and 𝜌. As expected the measurements are
not strictly separable and for this reason the version with all slack variables is used:
                                             βˆ‘
                         minimize               𝑖,𝑗 πœ‚π‘–π‘—
                        subject to zπ‘˜ (𝑗)β€² Ξ¦π‘˜ zπ‘˜ (𝑗) ≀ 1 + πœ‚π‘˜π‘— ,
                                            βˆ€π‘— = 1, . . . , 𝑀
                                       z𝑙 (𝑗)β€² Ξ¦π‘˜ z𝑙 (𝑗) β‰₯ 𝜌 βˆ’ πœ‚π‘™π‘— ,
                                        βˆ€π‘™ = 1, . . . , 𝑁   𝑙 βˆ•= π‘˜
                                            βˆ€π‘— = 1, . . . , 𝑀
                                                Ξ¦π‘˜ ΰͺ° 0,
                                                πœ‚π‘–π‘˜ β‰₯ 0.                         (4.2.3)

where the optimization variables are Ξ¦π‘˜ and all the slack variables πœ‚π‘–π‘— . Regrettably,
due to the large number of variables required to solve this optimization problem,
the solution requires a vast amount of computational power and memory allocation.
   For this reason, in view of larger environments, alternative versions have been
formulated in order to decrease the number of variables and constraints to optimize.

4.3    Version with one slack for location
One may simplify the original problem by restricting the formulation to only allow
for one slack variable for each location. This approach will differ from the version

with all slack variables, since the points of each location all share the same slack
variable. That is, let π‘˜ be a fixed survey location. The formulation of the problem
is:

                          minimize    βˆ‘π‘™ πœ‚π‘™
                          subject to  zπ‘˜ (𝑖)β€² Ξ¦π‘˜ zπ‘˜ (𝑖) ≀ 1 + πœ‚π‘˜ ,   βˆ€π‘– = 1, . . . , 𝑀,
                                      z𝑙 (𝑖)β€² Ξ¦π‘˜ z𝑙 (𝑖) β‰₯ 𝜌 βˆ’ πœ‚π‘™ ,
                                      βˆ€π‘™ = 1, . . . , 𝑁, 𝑙 βˆ•= π‘˜,  βˆ€π‘– = 1, . . . , 𝑀,
                                      Ξ¦π‘˜ ΰͺ° 0,
                                      πœ‚π‘™ β‰₯ 0,  βˆ€π‘™ = 1, . . . , 𝑁,               (4.3.1)

where the optimization variables are Ξ¦π‘˜ and the 𝑁 slack variables πœ‚π‘˜ . The efficiency
in computing the ellipsoid has improved substantially since the number of variables
to optimize is 𝑁 against 𝑁 β‹… 𝑀 of the version with all the slack variables. The
main drawback is when there is no separability in the classes. In fact, in this case
the ellipsoid, even if they contain the most part of the point of the respective class,
overlap themselves critically and this can lead to a misclassification of the data.


4.4     Iterative version
As an alternative, instead of considering all 𝑀 measurements for each location π‘˜,
one may initially solve the optimization problem (4.2.3) taking into account a
lower number of measurements for each location. Then, during subsequent iterations,
one may add the same number of new measurements and solve the optimization
problem again, using for the old points the values of the slack variables computed
at the previous steps.
    In particular, let 𝑛1 be an integer number such that 𝑛1 < 𝑀 and 𝑀 is a multiple
of 𝑛1 . At the first iteration, the version with all slack variables is applied but with
only 𝑛1 points. That is, let π‘˜ be a fixed location, then:
                          minimize    βˆ‘π‘—,𝑖 πœ‚π‘—π‘–^(1)
                          subject to  zπ‘˜ (𝑖)β€² Ξ¦π‘˜^(1) zπ‘˜ (𝑖) ≀ 1 + πœ‚π‘˜π‘–^(1) ,
                                      βˆ€π‘– = 1, . . . , 𝑛1 ,
                                      z𝑗 (𝑖)β€² Ξ¦π‘˜^(1) z𝑗 (𝑖) β‰₯ 𝜌 βˆ’ πœ‚π‘—π‘–^(1) ,
                                      βˆ€π‘— = 1, . . . , 𝑁, 𝑗 βˆ•= π‘˜,  βˆ€π‘– = 1, . . . , 𝑛1 ,
                                      Ξ¦π‘˜^(1) ΰͺ° 0,
                                      πœ‚π‘—π‘–^(1) β‰₯ 0,  βˆ€π‘—, βˆ€π‘–,                     (4.4.1)




Figure 4.5. In this Figure, the iterative version has been solved for ten classes
randomly generated, each consisting of 50 points, using a value of 𝜌 equal to 1.1
and adding 25 points at each iteration.

where the optimization variables are Ξ¦π‘˜^(1) and the 𝑁 β‹… 𝑛1 slack variables πœ‚π‘—π‘–^(1) . From
the second iteration, 𝑛1 points are added to each location, and for the old points the
values of the slack variables computed at the previous steps are used. That is, at
iteration 𝑙, with 𝑙 β‰₯ 2:

                     minimize    βˆ‘π‘—,𝑖 πœ‚π‘—π‘–^(𝑙)
                     subject to  zπ‘˜ (𝑖)β€² Ξ¦π‘˜^(𝑙) zπ‘˜ (𝑖) ≀ 1 + πœ‚π‘˜π‘–^(π‘™βˆ’1) ,
                                 βˆ€π‘– = 1, . . . , (𝑙 βˆ’ 1) β‹… 𝑛1 ,
                                 zπ‘˜ (𝑖)β€² Ξ¦π‘˜^(𝑙) zπ‘˜ (𝑖) ≀ 1 + πœ‚π‘˜π‘–^(𝑙) ,
                                 βˆ€π‘– = (𝑙 βˆ’ 1) β‹… 𝑛1 + 1, . . . , 𝑙 β‹… 𝑛1 ,
                                 z𝑗 (𝑖)β€² Ξ¦π‘˜^(𝑙) z𝑗 (𝑖) β‰₯ 𝜌 βˆ’ πœ‚π‘—π‘–^(π‘™βˆ’1) ,
                                 βˆ€π‘— = 1, . . . , 𝑁, 𝑗 βˆ•= π‘˜,  βˆ€π‘– = 1, . . . , (𝑙 βˆ’ 1) β‹… 𝑛1 ,
                                 z𝑗 (𝑖)β€² Ξ¦π‘˜^(𝑙) z𝑗 (𝑖) β‰₯ 𝜌 βˆ’ πœ‚π‘—π‘–^(𝑙) ,
                                 βˆ€π‘— = 1, . . . , 𝑁, 𝑗 βˆ•= π‘˜,  βˆ€π‘– = (𝑙 βˆ’ 1) β‹… 𝑛1 + 1, . . . , 𝑙 β‹… 𝑛1 ,
                                 Ξ¦π‘˜^(𝑙) ΰͺ° 0,
                                 πœ‚π‘—π‘–^(𝑙) β‰₯ 0,  βˆ€π‘—, βˆ€π‘–,                          (4.4.3)

where πœ‚π‘—π‘–^(π‘™βˆ’1) are the slack variables optimized at the previous step, while πœ‚π‘—π‘–^(𝑙) are
the ones that correspond to the newly added points and must be optimized. After
the last iteration, the last slack variables might be used to recompute the previous
ones.
    After testing this method on the simulated data in which each class consists of
50 points, it has been noticed that if only two points are added at each iteration,
the ellipsoids are inaccurate: they are shifted with respect to the points that they
should contain. Increasing the number of points added at each iteration, the
obtained ellipsoids become almost equal to the ones computed with the all-slack
version. In order to obtain accurate ellipsoids, the number of points 𝑛1 should be
greater than or equal to 𝑀/2. However, such a solution still suffers from memory
demands similar to those of the original problem.
    In Figure 4.5, the iterative version has been solved for ten classes randomly
generated, each consisting of 50 points, using a value of 𝜌 equal to 1.1 and adding
25 points at each iteration.

4.5     Variance ellipsoids
A third option consists of building, after fixing a location π‘˜, an axis-oriented el-
lipsoid (variance ellipsoid) that contains at least most of the measurements of the
survey site π‘˜. For all the remaining locations, it is then possible to check how
many measurements fall inside this ellipsoid and to take into account, when com-
puting the separating ellipsoid for the location π‘˜, only the locations 𝑖 whose mea-
surements mostly overlap the measurements of the site π‘˜.
    In particular, an axis-oriented ellipsoid in an 𝑛-dimensional space is defined by
the equation
                                      π‘₯β€² A π‘₯ ≀ 1,
where A is an 𝑛-dimensional diagonal matrix

                               A = diag(πœ†1 , . . . , πœ†π‘› ),

where πœ†π‘– , for 𝑖 = 1, . . . , 𝑛, are the eigenvalues of the matrix. The relation between
the equatorial radii π‘Ÿπ‘— , for 𝑗 = 1, . . . , 𝑛, of the ellipsoid and the eigenvalues πœ†π‘— is
given by

                                     π‘Ÿπ‘— = 1/βˆšπœ†π‘— .                               (4.5.1)
Since for each location 𝑖 it is possible to calculate the mean vector of the RSS
measurements and the vector of the standard deviations 𝜎(𝑖) = [𝜎1 (𝑖), . . . , 𝜎𝐷 (𝑖)]𝑇 ,
it is also possible to construct the ellipsoid that contains most of the points of the
site 𝑖. The matrix which defines the ellipsoid is given by:

                      B(𝑖) = 𝐢 β‹… diag(1/𝜎1Β²(𝑖), . . . , 1/𝜎𝐷²(𝑖)),               (4.5.2)


where 𝐢 is a chosen constant that depends on the number of DCs.
   Once the ellipsoids have been computed for each location, the following steps are performed:
     1. Fix a location π‘˜.
     2. For each location 𝑗, with 𝑗 = 1, . . . , 𝑁 , count how many measurements of this
        location are inside the variance ellipsoid of the site π‘˜.
     3. Repeat the two steps above for each location π‘˜ with π‘˜ = 1, . . . , 𝑁 .
By doing this, it is possible to know, for each site π‘˜, which locations 𝑗 have
measurements that mostly overlap the measurements of the site π‘˜. Finally, by taking
into consideration only these locations 𝑗 when computing the separating ellipsoid for
the location π‘˜, it is possible to reduce the number of constraints and variables to
optimize. Regrettably, it can happen that far-away locations have many overlapping
measurements, or that many locations overlap each other. For this reason, larger
buildings will still lead to prohibitive memory requirements.
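The overlap-counting procedure described above can be sketched as follows; the constant 𝐢 and the toy numbers are illustrative, and the membership test is the diagonal quadratic form of (4.5.2) centred at the location's mean RSS vector.

```python
def inside_variance_ellipsoid(q, mean, sigma, C):
    """Membership in the variance ellipsoid of (4.5.2), centred at the mean
    RSS vector: C * sum_d ((q_d - mean_d) / sigma_d)^2 <= 1."""
    return C * sum(((qd - md) / sd) ** 2
                   for qd, md, sd in zip(q, mean, sigma)) <= 1.0

def overlap_count(measurements, mean, sigma, C):
    """Step 2: count how many measurements of another location fall inside
    the variance ellipsoid of the fixed location k."""
    return sum(inside_variance_ellipsoid(q, mean, sigma, C)
               for q in measurements)
```

Repeating `overlap_count` over all location pairs (step 3) yields, for each site π‘˜, the short list of overlapping locations to keep in the separating-ellipsoid problem.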




Figure 4.6. A set of 3-D points with the corresponding variance ellipsoid, com-
puted as described in Section 4.5. Almost all the points lie inside the ellipsoid.
Chapter 5


                       THE A* ALGORITHM


5.1     The A* algorithm
In computer science, Aβˆ— is a widely used algorithm for path-finding and graph
traversal, the process of plotting an efficiently traversable path between points,
called nodes. Noted for its performance and accuracy, the algorithm was first de-
scribed by Peter Hart, Nils Nilsson and Bertram Raphael in 1968 [6]. It is an
extension of Edsger Dijkstra’s 1959 algorithm, and it achieves better performance
(with respect to time) by using heuristic distances.
    Aβˆ— uses a best-first search and finds the least-cost path from a given initial node
to a goal node. To achieve this, it uses a distance-plus-cost heuristic function
(usually denoted by 𝑓 (π‘₯)) to determine the order in which the search visits nodes
in the tree. The distance-plus-cost heuristic is the sum of two functions:
   βˆ™ the path-cost function, which is the cost from the starting node to the current
     node (usually denoted by 𝑔(π‘₯))
   βˆ™ and an admissible "heuristic estimate" of the distance from each node to the
     goal node (usually denoted by β„Ž(π‘₯)).
In mathematical terms 𝑓 (π‘₯) is then given by:
                                𝑓 (π‘₯) = 𝑔(π‘₯) + β„Ž(π‘₯).                           (5.1.1)
The function β„Ž(π‘₯) is a mathematical way of using previous knowledge in order to
expand the fewest possible nodes in searching for an optimal path and speed up the
algorithm. For example, inside a building, β„Ž(π‘₯) might represent the straight-line
distance to the goal (without taking into account the walls), since that is physically
the smallest possible distance between any two points or nodes. The adjective ”ad-
missible” means that β„Ž(π‘₯) must not overestimate the distance to the goal. Dijkstra’s
algorithm is a particular case of the Aβˆ— algorithm, then using β„Ž(π‘₯) = 0.

5.1.1       How A* works
The A* algorithm requires a starting node s and a goal node t. The steps of the
algorithm are as follows:



     1. Add the starting node s to the "open set". The open set contains the nodes
        that must still be explored and that might be inserted in the optimal path
        towards the goal node t.

     2. Repeat the following:

       a) Look for the node in the open set which has the lowest f value. We refer
          to this as the "current node".
       b) Move it to the "closed set".
       c) For each of the nodes that are reachable from the current node:
            • If it is in the closed set, ignore it. Otherwise do the following.
            • If it is not in the open set, add it to the open set. Make the current
              node the parent of this node, and record the f, g and h costs of the
              node.
            • If it is already in the open set, check whether this path to that node
              costs less, using the g cost as the measure. A lower g cost means that
              this is a better path. If so, change the parent of the node to the current
              node and recalculate the g and f scores of the node.
       d) Stop when:
            • the goal node is added to the closed set, in which case the path has
              been found, or
            • the open set is empty and the goal node has not been reached, in
              which case there is no path.

     3. Save the path. Working backwards from the goal node t, go from each node to
        its parent node until the starting node s is reached.
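The steps above can be sketched as follows. This is a minimal Python version (the implementation used later in this thesis is in MATLAB) for the binary-grid setting of Section 5.2, with straight moves costing d and diagonal moves d·√2:

```python
import heapq
import math

def astar(grid, start, goal, d=1.0):
    """A* over a binary occupancy grid (1 = walkable, 0 = blocked).

    start and goal are 0-based (row, col) tuples.  The heuristic is the
    straight-line distance, which never overestimates (admissible).
    Returns (path, cost), or (None, inf) when no path exists.
    """
    rows, cols = len(grid), len(grid[0])

    def h(node):
        return d * math.hypot(goal[0] - node[0], goal[1] - node[1])

    open_heap = [(h(start), 0.0, start)]          # entries are (f, g, node)
    g_score = {start: 0.0}
    parent = {start: None}
    closed = set()

    while open_heap:
        f, g, current = heapq.heappop(open_heap)  # lowest-f node first
        if current == goal:                       # reconstruct path backwards
            path = []
            while current is not None:
                path.append(current)
                current = parent[current]
            return path[::-1], g
        if current in closed:
            continue
        closed.add(current)
        for di in (-1, 0, 1):                     # the 8 reachable neighbours
            for dj in (-1, 0, 1):
                if di == dj == 0:
                    continue
                ni, nj = current[0] + di, current[1] + dj
                if not (0 <= ni < rows and 0 <= nj < cols) or grid[ni][nj] == 0:
                    continue
                step = d * math.sqrt(2) if di != 0 and dj != 0 else d
                tentative = g + step
                if tentative < g_score.get((ni, nj), float("inf")):
                    g_score[(ni, nj)] = tentative
                    parent[(ni, nj)] = current
                    heapq.heappush(open_heap,
                                   (tentative + h((ni, nj)), tentative, (ni, nj)))
    return None, float("inf")
```

Stale heap entries are simply skipped when popped, which is a common simplification of the "recalculate and reparent" step.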

5.2      Applied A*
Since the buildings we have to deal with might have many walls or non-walkable
zones, it is useful to compute the shortest paths from each survey location
to the remaining ones. In fact, during the online phase, an RSS measurement is
taken by the mobile client (MC) at least every two seconds. During this interval,
the user is unlikely to move to a location that is far from his or her current position.
Hence, by calculating the real distances (the lengths of the shortest paths, taking
possible obstacles into account) from one survey site to the others, we are able to
compute the ellipsoids considering only the locations that are most likely to be
reached.
    More specifically, fixing a survey point k with k = 1, . . . , N, we compute the
real distance Dist(k, i) from the location k to the location i for every i = 1, . . . , N.
Then, fixing a parameter DMAX, the locations j with j = 1, . . . , N such that

                                 Dist(k, j) ≤ DMAX                              (5.2.1)
are selected. The ellipsoid for the location k is then computed, taking into account
only the locations j that satisfy the inequality (5.2.1). This approach solves the
memory problem in a simple and intuitive way, since the number of variables and
constraints to optimize remains reasonable even for large environments.
     The binary images (black and white) are stored in the computer as binary ma-
trices, where 0 represents black and 1 represents white. When the image represents
a map, black is used for the non-walkable zones and white for the walkable zones.
     In order to compute the real distances in meters from one point of the
building to another, the binary map needs to be resized. To perform this, a small
distance d is fixed, depending on the degree of accuracy that we want to keep after
the resizing. Usually, d is chosen to be half of the minimum distance between
any two locations. Given the GPS coordinates of the north-west and south-east
corners of the map, it is possible to compute the GPS coordinates of the other two
corners (north-east and south-west) and the distances in meters between these four
corners. Using these data, we build a grid over the image, in which each edge of a
square represents a distance equal to d. The resized binary map is again a binary
matrix, in which each element, 1 or 0, represents a square of the grid.
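The corner-to-corner distances can be approximated directly from the GPS coordinates. The sketch below uses an equirectangular approximation, which is adequate at building scale; the helper names are illustrative, not those of the thesis code:

```python
import math

EARTH_RADIUS_M = 6371000.0

def corner_distance_m(lat1, lon1, lat2, lon2):
    """Approximate ground distance in meters between two GPS points
    (equirectangular approximation, fine over a few hundred meters)."""
    lat_mid = math.radians((lat1 + lat2) / 2.0)
    x = math.radians(lon2 - lon1) * math.cos(lat_mid)  # east-west component
    y = math.radians(lat2 - lat1)                       # north-south component
    return EARTH_RADIUS_M * math.hypot(x, y)

def grid_shape(width_m, height_m, d):
    """Number of grid squares along each axis when each square edge
    represents d meters."""
    return math.ceil(height_m / d), math.ceil(width_m / d)
```

With the four corner distances in hand, `grid_shape` gives the size of the resized binary matrix.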
     The A* algorithm, implemented in MATLAB, takes as input the resized binary
image, a starting node (i_s, j_s) in the image matrix and a destination node (i_d, j_d)
in the image matrix. The resulting path is a sequence of nodes in the image matrix,
(i_s, j_s), (i_1, j_1), . . . , (i_d, j_d). An example can be observed in Figures 5.1 and 5.2.
For the heuristic distance h, it is reasonable to take the distance from the current
element (i, j) to the destination element (i_d, j_d) multiplied by d, that is:

                     h(i, j) = d · √((i_d − i)² + (j_d − j)²)                    (5.2.2)

The transition costs are simply computed in the following way:
    • cost of a horizontal movement = d,
    • cost of a vertical movement = d,
    • cost of a diagonal movement = d · √2.

The length of the path is then computed by summing the costs of the transitions
made from the starting point (i_s, j_s) to the final point (i_d, j_d). In this way, it
is possible to build the distance matrix Dist.
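Once the distance matrix is available, the selection in (5.2.1) reduces to thresholding each row of Dist. A small sketch, with an illustrative four-location distance matrix:

```python
import numpy as np

def reachable_sets(dist, dmax):
    """PT(k): the indices j with Dist(k, j) <= DMAX, for every location k."""
    return [np.flatnonzero(dist[k] <= dmax).tolist()
            for k in range(dist.shape[0])]

# Toy symmetric distance matrix (meters) for four survey locations.
dist = np.array([[ 0.0,  5.0, 20.0, 11.0],
                 [ 5.0,  0.0, 14.0,  9.0],
                 [20.0, 14.0,  0.0,  6.0],
                 [11.0,  9.0,  6.0,  0.0]])

pt = reachable_sets(dist, dmax=12.0)
```

Note that PT(k) trivially contains k itself, since Dist(k, k) = 0; the ellipsoid for location k is then fitted using only the sites in PT(k).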




         0   0   0   0   0   0   0
         0   1   1   0   1   1   0
         0   1   0   1   0   1   0
         0   1   0   1   0   1   0
         0   1   0   1   0   1   0
         0   1   1   1   1   1   0
         0   1   1   0   1   1   0
         0   0   0   0   0   0   0

     Figure 5.1. An example of a binary matrix and the corresponding image




         0   0   0   0   0   0   0
         0  (1) (1)  0   1   1   0
         0  (1)  0   1   0   1   0
         0  (1)  0   1   0  (1)  0
         0  (1)  0   1   0  (1)  0
         0  (1) (1) (1) (1) (1)  0
         0   1   1   0   1   1   0
         0   0   0   0   0   0   0

(the nodes on the shortest path are marked with parentheses)


Figure 5.2.        Example of the shortest path from the starting node (2, 3)
to the final node (4, 6). The resulting path is (2, 3), (2, 2), (3, 2), (4, 2),
(5, 2), (6, 2), (6, 3), (6, 4), (6, 5), (6, 6), (5, 6), (4, 6), corresponding to the
marked nodes in the matrix.
Chapter 6


                                                     ANALYSES


6.1    Computation of the distance matrix and of the ellipsoids
To evaluate the performance of the A* algorithm and of the ellipsoid method taking
into account only the reachable locations, two different datasets have been consid-
ered. The first comes from the first floor of the Hansa Mall in Malmö and consists
of 38 locations, with 160 Wifi measurements for the locations i with i = 1, . . . , 35
and 80 Wifi measurements for the locations i with i = 36, . . . , 38. The second
comes from the ground floor of the Hansa Mall in Malmö and consists of 30
locations, each with 160 Wifi measurements.
    In Figure 6.1, it is possible to observe the two maps, of the ground floor and of
the first floor, with the plotted locations.




                       (a)                              (b)

                   Figure 6.1. (a) First floor (b) Ground floor

   After resizing the map, A* has been applied in order to obtain the distance
matrix Dist, where Dist(i, j) is the real distance between the location i and the
location j, taking the walls into account. In Figures 6.2-6.4, it is possible to



observe a comparison between the distances computed from some locations to
other survey sites without taking the walls into account and the distances computed
with the walls taken into account.
    In Figure 6.5, it is clearly visible that the distance from the red location to the
yellow one is much larger when walls are taken into account (44 m) than when
they are not (16 m). Next, for each location k, only the reachable sites are selected.
Here, we set the parameter DMAX equal to 12 m, and form a vector PT(k) that
contains only the locations j that have a distance of less than DMAX meters from
the location k. The ellipsoids are then computed for each location k, taking into
account only the sites j stored in the vector PT(k).
More specifically, the problem to solve is the following:

                        minimize     Σ_{i,j} η_ij
                        subject to   z_k(j)′ Φ_k z_k(j) ≤ 1 + η_jk,
                                         ∀j = 1, . . . , M(k)
                                     z_l(j)′ Φ_k z_l(j) ≥ ρ − η_jl,
                                         ∀l ∈ PT(k), l ≠ k,
                                         ∀j = 1, . . . , M(l)
                                     Φ_k ⪰ 0,
                                     η_ij ≥ 0,                                  (6.1.1)

where the optimization variables are Φ_k and the slack variables η_ij, and where
M(l), with l = 1, . . . , N, denotes the number of measurements at the location l,
typically different for every location. Here, the values of ρ used are 1.075, 1.05
and 1.025.
   The main difference between the problem in (6.1.1) and the one in (4.2.2) is
that the number of points that have to lie outside the ellipsoid for the location
k is considerably smaller, since we now consider only the locations that are reachable.
On the other hand, as can be observed, there are no constraints for the
points of locations further away than 12 meters. It can then happen that some RSS
measurements of some location j would be classified into ellipsoids of far-away
locations. However, this does not constitute a problem, since the far-away locations
are already cut out by selecting only the reachable ones.




                       (a)                                                (b)

                      Figure 6.2. (a) With no walls (b) With walls




                       (a)                                                (b)

                      Figure 6.3. (a) With no walls (b) With walls




                  (a)                                         (b)

                  Figure 6.4. (a) With no walls (b) With walls




                  (a)                                         (b)

Figure 6.5. (a) With no walls (b) With walls. In this Figure, it is clearly visible
how the distance from the red location to the yellow one, if walls are taken into
account, is much higher (44π‘š) than the case in which walls are not taken into
account (16π‘š).

6.2     Classification
The next step in the analysis is the classification. For each location π‘˜, we compare
the percentage of its measurements that are correctly mapped to site π‘˜ when using:
    βˆ™ The Gaussian likelihood taking into account all the locations.
    βˆ™ The Gaussian likelihood taking into account only the reachable locations in
      𝑃 𝑇 (π‘˜).
    βˆ™ The ellipsoid distance taking into account only the reachable locations.
More specifically, setting the initial counters Right1(k) = 0, Right2(k) = 0 and
Right3(k) = 0, for all the measurements q = q_k(i) at the location k, the following
quantities are computed:

    • k̂1 = arg max_{i=1,...,N} P(q | F̂(p_i)),
    • k̂2 = arg max_{i∈PT(k)} P(q | F̂(p_i)),
    • k̂3 = arg min_{i∈PT(k)} (q 1) · Φ_i · (q 1)′.

When k̂1 = k, k̂2 = k or k̂3 = k, we increment Right1(k), Right2(k) or Right3(k),
respectively.
We perform this for all the locations k with k = 1, . . . , N. It is then possible
to obtain, for each method, the percentage of points that are mapped to the right
location. The final results are available in the following tables, both for the ground
floor and for the first floor.
    • First column: number of the location.
    • Second column: percentage of points mapped to the right location using the
      Gaussian likelihood, taking into account all the locations.
    • Third column: percentage of points mapped to the right location using the
      ellipsoid distance, taking into account only the reachable locations.
    • Fourth column: percentage of points mapped to the right location using the
      Gaussian likelihood, taking into account only the reachable locations.
    The ellipsoid method taking into account only the reachable locations performs
very well. In fact, as can be observed in all the tables, the percentages of points
that are mapped to the right survey site using this method are much larger than
when using either the Gaussian method with all the locations or the Gaussian
method with only the reachable locations.
    It is also worth noticing that the percentages of points mapped correctly with
the Gaussian method with only the reachable locations are higher than with the
Gaussian method with all the locations, since far-away locations are not considered.
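The counting loop for the ellipsoid rule (the quantity k̂3 above) can be sketched as follows; the Φ_i matrices and measurements below are synthetic stand-ins for the fitted ellipsoids and recorded RSS vectors:

```python
import numpy as np

def classify_ellipsoid(q, Phi_list, candidates):
    """k_hat3: among the reachable candidates, pick the location whose
    ellipsoid gives the smallest value of (q 1) Phi_i (q 1)'."""
    z = np.append(q, 1.0)
    vals = [z @ Phi_list[i] @ z for i in candidates]
    return candidates[int(np.argmin(vals))]

def centred_phi(c):
    """Augmented-form Phi for the unit ball ||q - c||^2 <= 1, so that
    (q 1) Phi (q 1)' = ||q - c||^2 (a toy ellipsoid for the demo)."""
    Phi = np.eye(3)
    Phi[:2, 2] = -c
    Phi[2, :2] = -c
    Phi[2, 2] = c @ c
    return Phi

# Two toy 2-D locations centred at (0, 0) and (5, 5), each reachable
# from the other, with two measurements apiece.
Phi_list = [centred_phi(np.array([0.0, 0.0])), centred_phi(np.array([5.0, 5.0]))]
PT = {0: [0, 1], 1: [0, 1]}
measurements = {0: [np.array([0.2, -0.1]), np.array([0.4, 0.3])],
                1: [np.array([5.1, 4.8]), np.array([4.7, 5.2])]}

right3 = [0, 0]                              # the Right3(k) counters
for k, qs in measurements.items():
    for q in qs:
        if classify_ellipsoid(q, Phi_list, PT[k]) == k:
            right3[k] += 1
percent = [100.0 * right3[k] / len(measurements[k]) for k in (0, 1)]
```

The per-location percentages in `percent` correspond to the entries of the tables below.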


                First floor with DMAX = 12 m and rho = 1.075

      Gaussian Method (All)   Ell Method (Reach)   Gaussian Method (Reach)

 1    68.75                   93.75                85
 2    36.875                  64.375               36.875
 3    8.75                    73.75                8.75
 4    31.875                  71.25                34.375
 5    16.25                   53.75                16.25
 6    26.25                   38.75                27.5
 7    46.25                   67.5                 46.25
 8    21.25                   82.5                 25
 9    85                      93.75                85
 10   36.25                   78.125               36.25
 11   42.5                    35.625               42.5
 12   30                      61.25                41.25
 13   51.25                   55                   51.25
 14   18.75                   63.75                28.75
 15   28.75                   51.25                30
 16   17.5                    56.25                27.5
 17   23.75                   73.125               46.25
 18   54.375                  78.125               60.625
 19   62.5                    87.5                 68.75
 20   43.75                   66.25                50
 21   21.25                   25                   21.25
 22   37.5                    71.25                40
 23   17.5                    73.75                18.75
 24   52.5                    70                   57.5
 25   12.5                    40                   12.5
 26   21.25                   66.25                21.25
 27   68.75                   65                   71.25
 28   37.5                    68.75                40
 29   38.75                   83.75                45
 30   60                      76.25                60
 31   60                      67.5                 60
 32   59.375                  95                   60.625
 33   32.5                    77.5                 45
 34   63.75                   86.25                65
 35   76.875                  89.375               76.875
 36   40                      50                   40
 37   45                      97.5                 53.75
 38   70                      85                   70

                          First floor with DMAX = 12 m and rho = 1.05

        Gaussian Method (All)      Ell Method (Reach)   Gaussian Method (Reach)

 1      68.75                      95                   85
 2      36.875                     60.625               36.875
 3      8.75                       73.75                8.75
 4      31.875                     71.25                34.375
 5      16.25                      55                   16.25
 6      26.25                      33.75                27.5
 7      46.25                      67.5                 46.25
 8      21.25                      82.5                 25
 9      85                         93.75                85
 10     36.25                      78.125               36.25
 11     42.5                       32.5                 42.5
 12     30                         61.25                41.25
 13     51.25                      55                   51.25
 14     18.75                      65                   28.75
 15     28.75                      51.25                30
 16     17.5                       56.25                27.5
 17     23.75                      73.125               46.25
 18     54.375                     79.375               60.625
 19     62.5                       87.5                 68.75
 20     43.75                      66.25                50
 21     21.25                      26.25                21.25
 22     37.5                       71.25                40
 23     17.5                       73.75                18.75
 24     52.5                       70                   57.5
 25     12.5                       40                   12.5
 26     21.25                      66.25                21.25
 27     68.75                      65                   71.25
 28     37.5                       68.75                40
 29     38.75                      83.75                45
 30     60                         76.25                60
 31     60                         67.5                 60
 32     59.375                     93.75                60.625
 33     32.5                       77.5                 45
 34     63.75                      85                   65
 35     76.875                     89.375               76.875
 36     40                         50                   40
 37     45                         97.5                 53.75
 38     70                         90                   70


                  First floor with DMAX = 12 m and rho = 1.025

      Gaussian Method (All)   Ell Method (Reach)   Gaussian Method (Reach)

 1    68.75                   95                   85
 2    36.875                  61.875               36.875
 3    8.75                    73.75                8.75
 4    31.875                  70                   34.375
 5    16.25                   55                   16.25
 6    26.25                   33.75                27.5
 7    46.25                   67.5                 46.25
 8    21.25                   82.5                 25
 9    85                      93.75                85
 10   36.25                   78.125               36.25
 11   42.5                    31.25                42.5
 12   30                      61.25                41.25
 13   51.25                   55                   51.25
 14   18.75                   65                   28.75
 15   28.75                   51.25                30
 16   17.5                    56.25                27.5
 17   23.75                   73.125               46.25
 18   54.375                  79.375               60.625
 19   62.5                    88.75                68.75
 20   43.75                   67.5                 50
 21   21.25                   30                   21.25
 22   37.5                    71.25                40
 23   17.5                    73.75                18.75
 24   52.5                    68.75                57.5
 25   12.5                    38.75                12.5
 26   21.25                   66.25                21.25
 27   68.75                   65                   71.25
 28   37.5                    68.75                40
 29   38.75                   83.75                45
 30   60                      76.25                60
 31   60                      67.5                 60
 32   59.375                  93.75                60.625
 33   32.5                    78.75                45
 34   63.75                   85                   65
 35   76.875                  89.375               76.875
 36   40                      50                   40
 37   45                      97.5                 53.75
 38   70                      87.5                 70


                     Ground floor with DMAX = 12 m and rho = 1.075

       Gaussian Method (All)      Ell Method (Reach)    Gaussian Method (Reach)

 1     56                         79.2                  58.3
 2     42                         80                    52.5
 3     No Wifi measurements        No Wifi measurements   No Wifi measurements
 4     104                        88.75                 65
 5     64                         90                    80
 6     56                         92.5                  70
 7     48                         100                   60
 8     58                         80                    72.5
 9     46                         85                    60
 10    12                         100                   15
 11    66                         83.75                 46.25
 12    44                         92.5                  62.5
 13    46                         95                    57.5
 14    38                         95                    47.5
 15    48                         95                    60
 16    50                         87.5                  62.5
 17    28                         87.5                  35
 18    34                         95                    42.5
 19    70                         100                   87.5
 20    24                         72.5                  30
 21    46                         90                    57.5
 22    58                         80                    72.5
 23    22                         97.5                  40
 24    52                         97.5                  65
 25    No Wifi measurements        No Wifi measurements   No Wifi measurements
 26    60                         97.5                  82.5
 27    44                         87.5                  62.5
 28    22                         80                    30
 29    40                         87.5                  52.5
 30    38                         87.5                  55



                Ground floor with DMAX = 12 m and rho = 1.05

      Gaussian Method (All)   Ell Method (Reach)    Gaussian Method (Reach)

 1    56                      79.2                  58.3
 2    42                      82.5                  52.5
 3    No Wifi measurements     No Wifi measurements   No Wifi measurements
 4    104                     87.5                  65
 5    64                      90                    80
 6    56                      92.5                  70
 7    48                      100                   60
 8    58                      82.5                  72.5
 9    46                      87.5                  60
 10   12                      100                   15
 11   66                      83.75                 46.25
 12   44                      92.5                  62.5
 13   46                      97.5                  57.5
 14   38                      97.5                  47.5
 15   48                      95                    60
 16   50                      90                    62.5
 17   28                      87.5                  35
 18   34                      95                    42.5
 19   70                      100                   87.5
 20   24                      72.5                  30
 21   46                      90                    57.5
 22   58                      80                    72.5
 23   22                      97.5                  40
 24   52                      97.5                  65
 25   No Wifi measurements     No Wifi measurements   No Wifi measurements
 26   60                      97.5                  82.5
 27   44                      90                    62.5
 28   22                      77.5                  30
 29   40                      87.5                  52.5
 30   38                      87.5                  55


6.3     Interpolation Step
The last step is the interpolation step. Let q = q_k(j) be the jth training measure-
ment at the location k and let p_k be the vector of the GPS coordinates (latitude
and longitude) of the kth location. Using the measurement q and setting

                     C(i) = 1,   if 0 < Dist(i, k) < 5 m,
                            0.8, if 5 ≤ Dist(i, k) < 8 m,                        (6.3.1)
                            0.6, if 8 ≤ Dist(i, k) ≤ 12 m,


the position

                                 p̂ = Σ_{i=1}^{N} w_i · p_i                       (6.3.2)

is estimated by using the following weights:

     • Gaussian Method taking into account all the locations:

                     w_i = P(q | F̂(p_i)) / Σ_{i=1}^{N} P(q | F̂(p_i)),
                                  for i = 1, . . . , N                           (6.3.3)


     • Gaussian Method taking into account only the reachable locations:

                     w_i = C(i) · P(q | F̂(p_i)) / Σ_{i∈PT(k)} C(i) · P(q | F̂(p_i)),
                                  ∀i ∈ PT(k),
                     w_i = 0,     otherwise                                      (6.3.4)



     • Ellipsoid Method taking into account only the reachable locations:

                     w_i = C(i) · q′ · Φ_i · q / Σ_{i∈PT(k)} C(i) · q′ · Φ_i · q,
                                  ∀i ∈ PT(k),
                     w_i = 0,     otherwise                                      (6.3.5)
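The interpolation step can be sketched as follows, with C(i) taken from (6.3.1) and a generic per-location score in place of the likelihood or the quadratic form (all numbers below are illustrative):

```python
import numpy as np

def distance_weight(dist_ik):
    """C(i) from (6.3.1); 0 outside the 12 m reachable range."""
    if 0 < dist_ik < 5:
        return 1.0
    if 5 <= dist_ik < 8:
        return 0.8
    if 8 <= dist_ik <= 12:
        return 0.6
    return 0.0

def interpolate(positions, scores, dists):
    """Weighted estimate p_hat = sum_i w_i p_i, with weights proportional
    to C(i) * score_i over the reachable locations; score_i stands for
    P(q | F(p_i)) in the Gaussian variants or q' Phi_i q in the ellipsoid
    variant.  Returns None when no location is reachable."""
    w = np.array([distance_weight(d) * s for d, s in zip(dists, scores)])
    if w.sum() == 0:
        return None
    w = w / w.sum()                 # normalize so the weights sum to 1
    return w @ positions

# Three candidate locations with toy coordinates, scores and distances.
positions = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
scores = [0.6, 0.3, 0.1]
dists = [2.0, 6.0, 11.0]            # Dist(i, k) in meters
p_hat = interpolate(positions, scores, dists)
```

With these numbers the raw weights are (0.6, 0.24, 0.06), so the estimate is pulled strongly towards the first location.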

  • 4. CONTENTS
    1 INTRODUCTION 1
      1.1 Wireless Positioning Systems 1
    2 PROBLEM EXPLANATION AND PREVIOUS WORK 3
      2.1 Notation 3
      2.2 Problem explanation 3
      2.3 Properties of the weight functions w_i 4
      2.4 The kernel-based method 5
      2.5 A projection-based method 6
    3 A NEW APPROACH 9
      3.1 Theory 9
      3.2 Homogeneous Embedding 11
      3.3 SDP Formulations 12
      3.4 Classification Rule 13
    4 DATA 15
      4.1 Plotting the locations on the maps 15
      4.2 Data 18
      4.3 Version with one slack for location 19
      4.4 Iterative version 20
      4.5 Variance ellipsoids 22
    5 THE A* ALGORITHM 24
      5.1 The A* algorithm 24
        5.1.1 How A* works 24
      5.2 Applied A* star 25
  • 5.
    6 ANALYSES 28
      6.1 Computation of the distance matrix and of the ellipsoids 28
      6.2 Classification 31
      6.3 Interpolation Step 38
      6.4 Results for the first floor of the Hansa Mall 40
      6.5 Results for the ground floor of the Hansa Mall 42
    7 CONCLUSIONS 45
    BIBLIOGRAPHY 46
  • 6. Chapter 1 INTRODUCTION
    One must learn by doing the thing; for though you think you know it, you have no certainty until you try. — Sophocles
    1.1 Wireless Positioning Systems
    Since the advent of the Global Positioning System (GPS) in 1999 [1], positioning systems have been used to deliver location-based services (LBs) in outdoor environments. In recent years, LBs have become of equal interest in indoor environments, in a wide range of personal and commercial applications. These include location-based network management and security, medicine and health care, personalized information delivery, and context awareness.
    Unfortunately, coverage of the GPS system is limited in indoor environments and dense urban areas. For this reason, recent research has focused on existing wireless communication infrastructures, such as wireless local area networks (WLANs), as a complementary technique. A WLAN is characterized by a number of access points (APs), devices that allow wireless communication based on the IEEE 802.11 standard and are widespread indoors for Internet access. Since the power-sensing function is available in every WLAN device, localization using the received signal strength (RSS) is a relatively cost-effective solution.
    Commonly, the procedure to estimate the position using RSS measurements is based on a two-step method known as location fingerprinting, which is divided into an offline and an online phase.
    1. In the offline phase (also known as the training phase), the RSS from multiple APs at different points in the building is collected by site-surveying and stored in a fingerprint database, called a radio map. More specifically, a number N of survey locations in the site of interest are chosen, and for each location a number M of RSS measurements is collected and stored to constitute the fingerprint of that location.
For each location, the sample mean and the sample variance of the M measurements are computed.
  • 7. Introduction, Chapter 1
    Figure 1.1. Example of a building map with the plotted survey locations.
    2. In the online phase, the client's position is inferred by comparing the measured RSS with the previously stored measurements; the position estimate is a combination of the locations whose fingerprints most closely match the observation. This approach can be viewed as an application of pattern recognition.
    There are four key challenges in fingerprinting:
    1. generation of fingerprints,
    2. preprocessing for reducing computational complexity and enhancing accuracy,
    3. selection of APs for use in positioning, and
    4. estimation of the distance between a new RSS observation and the fingerprints.
    When the positioning algorithm is performed on handheld devices, extra care should be taken due to their constrained resources and, clearly, using all available APs increases the computational complexity of the algorithm. For this reason, an
  • 8. Section 1.1. Wireless Positioning Systems
    AP selection method is needed. The recent work of Kushki et al. [2] offers a real-time AP selection technique that minimizes the correlation between the selected APs, to reduce the complexity and ensure the coverage.
    The contribution of this thesis is twofold:
    ∙ the initial development of a new algorithm based on the separating ellipsoids method, in order to obtain a better estimate of the user position compared to previous methods;
    ∙ the application of the A* algorithm, in order to take into account the previous positions of the user and possible obstacles, such as walls, and to reduce the computational complexity of the positioning algorithm.
    The second chapter of this report explains the mathematical problem of wireless positioning and gives a brief overview of the previous work by Kushki et al. [2] and by Fang et al. [3]. The third and fourth chapters concern the theory of the separating ellipsoids method and the application of this method to an initial dataset from the Entré mall in Malmö. The fifth chapter explains the A* algorithm and how it is used in the context of wireless positioning. Finally, the sixth chapter presents the results obtained from the analyses carried out.
  • 9. Chapter 2 PROBLEM EXPLANATION AND PREVIOUS WORK
    2.1 Notation
    The following notation will be used throughout the report:
    ∙ L: number of APs
    ∙ N: number of survey locations in the radio map
    ∙ p: point in 2-D Cartesian space
    ∙ r: measured RSS vector
    ∙ F(p_i): fingerprint record at the i-th location of the radio map
    ∙ A: Multiple Discriminant Analysis (MDA) projection matrix
    ∙ D: number of discriminative components (DCs) generated after the MDA projection
    ∙ q: RSS vector projected by the projection matrix A
    ∙ F̂(p_i): projected fingerprint record at the i-th location of the radio map
    ∙ Φ_i: computed ellipsoid for location i
    ∙ Dist(i, k): real distance between location i and location k
    2.2 Problem explanation
    One of the fundamental problems in location fingerprinting is to produce a position estimate, using training information on a discrete grid of training points, when a new
  • 10. Section 2.3. Properties of the weight functions w_i
    RSS observation is received. That is, we seek a function g such that
    \[ g : \mathbb{R}^L \times \underbrace{\mathbb{R}^2 \times \cdots \times \mathbb{R}^2}_{N} \to \mathbb{R}^2, \qquad g(\mathbf{r}, \mathbf{p}_1, \ldots, \mathbf{p}_N) = \hat{\mathbf{p}}, \tag{2.2.1} \]
    where p_1, ..., p_N are the Cartesian coordinates (x, y) of the N survey locations with respect to a predefined reference point (0, 0). If the function g is restricted to the class of linear functions, the problem is reduced to determining a set of weights such that
    \[ \hat{\mathbf{p}} = \sum_{i=1}^{N} w(\mathbf{r}, \mathbf{F}(\mathbf{p}_i))\, \mathbf{p}_i, \tag{2.2.2} \]
    where F(p_i) = [r_i(1), ..., r_i(M)] is an L × M matrix defined as
    \[ \mathbf{F}(\mathbf{p}_i) = \begin{pmatrix} r_i^1(1) & \cdots & r_i^1(M) \\ \vdots & \ddots & \vdots \\ r_i^L(1) & \cdots & r_i^L(M) \end{pmatrix}. \tag{2.2.3} \]
    The columns of the fingerprint matrix are RSS vectors r_i(t) = [r_i^1(t), ..., r_i^L(t)]^T that contain the readings from the L APs at time t in the i-th survey location p_i. For convenience, the weights w(r, F(p_i)) are also denoted by w_i.
    2.3 Properties of the weight functions w_i
    The weights w_i in the estimation function (2.2.2) are required to be decreasing functions of the distance between an observation vector and the training records. That is, survey points whose training records closely match the observation should receive a higher weight. In particular, the functions w(r, F(p_i)) should satisfy the following properties:
    1. \( \sum_{i=1}^{N} w(\mathbf{r}, \mathbf{F}(\mathbf{p}_i)) = 1 \), so that the estimated position belongs to the convex hull defined by the set of survey positions. This can be achieved by including a normalization term in (2.2.2).
    2. w(r, F(p_i)) is a monotonically decreasing function in both arguments with respect to a distance measure d(r, F(p_j)):
    \[ d(\mathbf{r}, \mathbf{F}(\mathbf{p}_i)) \ge d(\mathbf{r}', \mathbf{F}(\mathbf{p}_i)) \Rightarrow w(\mathbf{r}, \mathbf{F}(\mathbf{p}_i)) \le w(\mathbf{r}', \mathbf{F}(\mathbf{p}_i)), \tag{2.3.1} \]
    \[ d(\mathbf{r}, \mathbf{F}(\mathbf{p}_i)) \ge d(\mathbf{r}, \mathbf{F}(\mathbf{p}_j)) \Rightarrow w(\mathbf{r}, \mathbf{F}(\mathbf{p}_i)) \le w(\mathbf{r}, \mathbf{F}(\mathbf{p}_j)). \]
(2.3.2)
    Inequality (2.3.1) means that, if an RSS measurement r′ is closer than another RSS measurement r to the training records of a location i, then the survey point i should receive a higher weight for r′ than for r.
  • 11. Problem Explanation and Previous Work, Chapter 2
    Inequality (2.3.2) means that, if an RSS measurement r is further from the training measurements of a location i than from those of another location j, then the survey site i should receive a lower weight than location j.
    2.4 The kernel-based method
    The method developed by Kushki et al. [2] involves the use of kernel functions to estimate the weights w_i. In particular, a non-linear mapping φ : r ∈ ℝ^d → φ(r) ∈ ℱ is used to map the input to a higher (possibly infinite) dimensional space ℱ, where the weight calculations take place. Once mapped, the weights are calculated as
    \[ w(\mathbf{r}, \mathbf{F}(\mathbf{p}_i)) = \frac{1}{M} \sum_{t=1}^{M} \frac{\langle \phi(\mathbf{r}), \phi(\mathbf{r}_i(t)) \rangle}{\lVert \phi(\mathbf{r}) \rVert\, \lVert \phi(\mathbf{r}_i(t)) \rVert}, \tag{2.4.1} \]
    where ⟨·, ·⟩ denotes the scalar product in ℱ. At first glance, the calculation of weights in a possibly infinite dimensional space may seem computationally intractable. Fortunately, since the training data only enter the weight calculations through inner products, the kernel trick can be used to calculate the inner products in ℱ without the need for explicit evaluation of the mapping φ: inner products in ℱ are replaced by a kernel evaluation on the input vectors. In the WLAN context, the kernel is a function k : ℝ^L × ℝ^L → ℝ such that k(r, r′) = ⟨φ(r), φ(r′)⟩. The weight function w(r, F(p_i)) then becomes
    \[ w(\mathbf{r}, \mathbf{F}(\mathbf{p}_i)) = \frac{1}{M} \sum_{t=1}^{M} \frac{k(\mathbf{r}, \mathbf{r}_i(t))}{\sqrt{k(\mathbf{r}, \mathbf{r})\, k(\mathbf{r}_i(t), \mathbf{r}_i(t))}}. \tag{2.4.2} \]
    Mercer's theorem guarantees the correspondence between a kernel function and an inner product in a feature space ℱ, given that the kernel is a positive definite function.
Moreover, the weights w(r, F(p_i)) satisfy properties (2.3.1) and (2.3.2) with the distance d defined as
    \[ d(\mathbf{r}, \mathbf{F}(\mathbf{p}_i)) = \left\lVert \frac{\phi(\mathbf{r})}{\lVert \phi(\mathbf{r}) \rVert} - \frac{1}{M} \sum_{t=1}^{M} \frac{\phi(\mathbf{r}_i(t))}{\lVert \phi(\mathbf{r}_i(t)) \rVert} \right\rVert. \tag{2.4.3} \]
    In fact, we have
    \[ w(\mathbf{r}, \mathbf{F}(\mathbf{p}_i)) = -\frac{1}{2}\left( d(\mathbf{r}, \mathbf{F}(\mathbf{p}_i))^2 - C \right). \tag{2.4.4} \]
    One of the kernels proposed in the article by Kushki et al. [2] is the Gaussian kernel, which yields
    \[ w(\mathbf{r}, \mathbf{F}(\mathbf{p}_i)) = \frac{1}{M} \sum_{t=1}^{M} \exp\!\left( \frac{-\lVert \mathbf{r} - \mathbf{r}_i(t) \rVert^2}{2\sigma^2} \right). \tag{2.4.5} \]
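As an illustration, the Gaussian-kernel weighting of (2.4.5) can be sketched in a few lines; the resulting weights are then normalised to sum to one, as required by property 1 of Section 2.3. The fingerprints, the observation, and the kernel width σ below are all hypothetical.

```python
import numpy as np

def gaussian_kernel_weight(r, fingerprint, sigma):
    """Unnormalised weight w(r, F(p_i)): average Gaussian kernel between
    the observation r and the M training vectors of one location."""
    r = np.asarray(r, dtype=float)
    F = np.asarray(fingerprint, dtype=float)   # shape (M, L): one RSS vector per row
    d2 = np.sum((F - r) ** 2, axis=1)          # squared distances ||r - r_i(t)||^2
    return np.mean(np.exp(-d2 / (2.0 * sigma ** 2)))

# Hypothetical fingerprints for two locations (L = 2 APs, M = 2 scans each)
F1 = [[-40.0, -60.0], [-42.0, -58.0]]
F2 = [[-80.0, -30.0], [-78.0, -32.0]]
r = [-41.0, -59.0]
w = np.array([gaussian_kernel_weight(r, F, sigma=5.0) for F in (F1, F2)])
w /= w.sum()   # normalisation, so the estimate stays in the convex hull
```

Here the observation lies close to the first fingerprint, so almost all of the weight falls on the first location.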
  • 12. Section 2.5. A projection-based method
    One drawback is that the cost of evaluating the Gaussian kernel weights grows linearly with the number N of survey locations.
    2.5 A projection-based method
    The method proposed by Fang et al. [3] is based on the calculation of a projection matrix A that maps the RSS measurements into a feature space of lower dimension, where they are uncorrelated, so that only the most discriminative information is taken into account.
    More specifically, when AP selection techniques are applied to reduce the computational cost, an additional binary-decision vector v = [0, 1, 0, ..., 0]^T ∈ ℝ^L is required to indicate which APs are retained. If the number of non-zero components is D, the dimension of the RSS measurement r is reduced from L to D. Unlike the zero-one weighting (binary decision) in the selection of APs, the projection-based system reduces the dimension from L to D by combining APs with a discriminative projection A. The projection matrix A and the number D are determined by Multiple Discriminant Analysis (MDA).
    The MDA criterion has been widely applied to the problem of multiple-class classification. In our context, the classes can be viewed as the training measurements at the different reference locations of a fingerprinting problem. The objective of MDA is to find the components that are useful for discriminating between them. After the MDA projection, the generated components, named discriminative components (DCs), carry the most discriminative information, ranked by information quantity. Assuming that r ∈ ℝ^L is the RSS measurement, the projection-based system extracts the required DCs as
    \[ \mathbf{q} = \mathbf{A}^T \mathbf{r}, \tag{2.5.1} \]
    where q ∈ ℝ^D and A is the L × D MDA-optimized projection matrix. The value of D represents the number of DCs required by the location system, and it can be determined by a system threshold according to the application.
In classical MDA, two scatter matrices, the between-class matrix S_B and the within-class matrix S_W, are defined to quantify the quality of the projection:
    \[ \mathbf{S}_W = \sum_{i=1}^{N} \sum_{t=1}^{M} (\mathbf{r}_i(t) - \bar{\mathbf{u}}_i)(\mathbf{r}_i(t) - \bar{\mathbf{u}}_i)^T, \tag{2.5.2} \]
    \[ \mathbf{S}_B = \sum_{i=1}^{N} M\, (\bar{\mathbf{u}}_i - \bar{\mathbf{u}})(\bar{\mathbf{u}}_i - \bar{\mathbf{u}})^T, \tag{2.5.3} \]
    where \( \bar{\mathbf{u}}_i = \frac{1}{M} \sum_{t=1}^{M} \mathbf{r}_i(t) \) is the mean of the training measurements at the i-th location and \( \bar{\mathbf{u}} = \frac{1}{N \cdot M} \sum_{i=1}^{N} \sum_{t=1}^{M} \mathbf{r}_i(t) \) is the global mean of all the measurements.
    S_W measures the closeness of the samples within the locations, whereas S_B measures the separation between locations. An optimal projection would maximize
  • 13. Problem Explanation and Previous Work, Chapter 2
    Figure 2.1. An example of the projected RSS measurement space for three different locations (three DCs).
    the between-class measure and minimize the within-class measure. One way to achieve this is to maximize the ratio
    \[ \mathbf{A}_{MDA} = \arg\max_{\mathbf{A}} \frac{\lvert \mathbf{A}^T \mathbf{S}_B \mathbf{A} \rvert}{\lvert \mathbf{A}^T \mathbf{S}_W \mathbf{A} \rvert}. \tag{2.5.4} \]
    The numerator of (2.5.4) measures the separation among different locations, that is, the RSS spatial separation (between-class), whereas the denominator measures the compactness of each location, that is, the RSS temporal variation (within-class). As a result, the optimal weighting matrix A satisfying (2.5.4) leads to the best separation between different reference locations, because the discrimination and the compactness are maximized at the same time. This way, the generated DCs in the projected space have low temporal variation but high spatial separation, providing as much discriminating information as possible. The dimension of the projected space (the number of DCs) can be determined by a system threshold according to the application.
    Once the projection matrix A and the number of desired DCs, D, have been obtained, a probabilistic approach is used to derive the weighting functions w_i. First, the observed RSS signal r and the fingerprint matrix F(p_i) are projected by the projection matrix A:
    \[ \mathbf{q} = \mathbf{A}^T \mathbf{r}, \tag{2.5.5} \]
    \[ \hat{\mathbf{F}}(\mathbf{p}_i) = [\mathbf{A}^T \mathbf{r}_i(1), \ldots, \mathbf{A}^T \mathbf{r}_i(M)] = \begin{pmatrix} q_i^1(1) & \cdots & q_i^1(M) \\ \vdots & \ddots & \vdots \\ q_i^D(1) & \cdots & q_i^D(M) \end{pmatrix}. \tag{2.5.6–2.5.7} \]
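Numerically, one common way to maximise the ratio (2.5.4) is to take the leading eigenvectors of S_W⁻¹ S_B. The sketch below builds the scatter matrices (2.5.2)–(2.5.3) from a hypothetical radio map and uses a pseudoinverse, since S_W can be singular when M is small; the data are invented for illustration.

```python
import numpy as np

def mda_projection(X, D):
    """Sketch of the MDA criterion (2.5.4): build S_W and S_B from training
    data X of shape (N, M, L) (N locations, M scans, L APs) and take the
    top-D eigenvectors of pinv(S_W) @ S_B as the projection matrix A."""
    N, M, L = X.shape
    means = X.mean(axis=1)                        # per-location means u_i
    grand = X.reshape(-1, L).mean(axis=0)         # global mean u
    Sw = sum((X[i] - means[i]).T @ (X[i] - means[i]) for i in range(N))
    Sb = M * (means - grand).T @ (means - grand)  # between-class scatter (2.5.3)
    evals, evecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(evals.real)[::-1]          # rank DCs by eigenvalue
    return evecs.real[:, order[:D]]

# Hypothetical radio map: N = 2 locations separated along AP 1 (L = 2, M = 3)
X = np.array([[[0.0, 0.0], [1.0, 0.1], [-1.0, -0.1]],
              [[10.0, 0.0], [11.0, 0.1], [9.0, -0.1]]])
A = mda_projection(X, D=1)
```

With this data the locations differ mainly along the first AP, so the single DC weights that AP most heavily.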
  • 14. Section 2.5. A projection-based method
    For i = 1, ..., N, we have
    \[ w_i = \frac{P(\mathbf{q} \mid \hat{\mathbf{F}}(\mathbf{p}_i))}{\sum_{i=1}^{N} P(\mathbf{q} \mid \hat{\mathbf{F}}(\mathbf{p}_i))}, \tag{2.5.8} \]
    where \( P(\mathbf{q} \mid \hat{\mathbf{F}}(\mathbf{p}_i)) \) indicates the likelihood value of the location p_i, given the observation q = [q^1, ..., q^D]^T. The likelihood produced by the Gaussian model is formulated as
    \[ P(\mathbf{q} \mid \hat{\mathbf{F}}(\mathbf{p}_i)) = \prod_{d=1}^{D} \frac{1}{\sqrt{2\pi \tilde{s}_i(d)}} \cdot \exp\!\left( -\frac{(q^d - \tilde{u}_i(d))^2}{2 \tilde{s}_i(d)} \right), \tag{2.5.9} \]
    where \( \tilde{u}_i(d) \) and \( \tilde{s}_i(d) \) are calculated as
    \[ \tilde{u}_i(d) = \frac{1}{M} \sum_{t=1}^{M} q_i^d(t), \tag{2.5.10} \]
    \[ \tilde{s}_i(d) = \frac{1}{M} \sum_{t=1}^{M} \left( q_i^d(t) - \tilde{u}_i(d) \right)^2. \tag{2.5.11} \]
    The final position can then be estimated by
    \[ \hat{\mathbf{p}} = \sum_{i=1}^{N} w_i \cdot \mathbf{p}_i = \sum_{i=1}^{N} \frac{P(\mathbf{q} \mid \hat{\mathbf{F}}(\mathbf{p}_i))}{\sum_{i=1}^{N} P(\mathbf{q} \mid \hat{\mathbf{F}}(\mathbf{p}_i))} \cdot \mathbf{p}_i. \tag{2.5.12} \]
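Putting the weighting and estimation steps (2.5.8)–(2.5.12) together gives the following minimal sketch. It assumes the per-DC Gaussian factors are combined as an independent-component (product) likelihood; all fingerprint statistics and survey coordinates below are hypothetical.

```python
import numpy as np

def likelihood_position(q, means, variances, points):
    """Diagonal-Gaussian likelihood of the projected observation q for
    each location, normalised into weights w_i (2.5.8), and the position
    estimate p_hat = sum_i w_i p_i (2.5.12)."""
    q = np.asarray(q, dtype=float)
    means = np.asarray(means, dtype=float)          # (N, D) per-location DC means
    variances = np.asarray(variances, dtype=float)  # (N, D) per-location DC variances
    lik = np.prod(np.exp(-(q - means) ** 2 / (2.0 * variances))
                  / np.sqrt(2.0 * np.pi * variances), axis=1)
    w = lik / lik.sum()                             # weights (2.5.8)
    return w, w @ np.asarray(points, dtype=float)   # estimate (2.5.12)

# Hypothetical fingerprints: N = 2 locations, D = 2 DCs
means = [[0.0, 0.0], [5.0, 5.0]]
variances = [[1.0, 1.0], [1.0, 1.0]]
points = [(0.0, 0.0), (10.0, 10.0)]
w, p_hat = likelihood_position([0.2, -0.1], means, variances, points)
```

An observation near the first fingerprint pulls the estimate towards the first survey location.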
  • 15. Chapter 3 A NEW APPROACH
    The way of calculating the weights w_i using the Gaussian likelihood, presented in the previous chapter, can be seen as a problem of statistical learning. In other words, given a number N of classes (in our specific case the N survey locations) and a new point x (in our context a new RSS measurement r), for each class i = 1, ..., N the likelihood that the new point x belongs to class i is calculated. We then search for the maximum of these likelihoods, and the class that realizes the maximum should be the class to which the new point x belongs. In this context, the method of separating ellipsoids [4] might constitute an improved alternative.
    3.1 Theory
    Suppose that one has a number N of distinct classes and a number M of points (x_i, c_i), where c_i denotes the class to which the i-th point belongs. Fixing a class k, with k = 1, ..., N, one would like to enclose the points of this class in an ellipsoid, in order to obtain the best separation ratio from all the points of the other classes. A way to express this is to ask for two different separating ellipsoids that share the same center and axis directions, but where the second one is larger by a factor ρ_k, subsequently called the separation ratio. The inner ellipsoid encloses all labeled points x_i with c_i = k, while all the remaining points lie outside the external one. The higher the value of ρ_k, the better the separation. In Figure 3.1 it is possible to observe 10 randomly generated classes, each with 50 points, for which the separating ellipsoids have been computed and the separation ratios ρ_k, k = 1, ..., 10, have been maximized.
    An ellipsoid in ℝ^d can be parametrized by its center μ and a symmetric, positive semidefinite matrix P that determines its shape and size:
    \[ \varepsilon_d(\mu, P) = \{ x \in \mathbb{R}^d \mid (x - \mu)' P (x - \mu) \le 1 \}, \tag{3.1.1} \]
    where ′ denotes the transpose.
The ellipsoid is degenerate if P is singular. A scaled concentric ellipsoid with the same shape can be obtained by dividing the matrix P by a scalar ρ > 0; the scaled ellipsoid has the equivalent representation
    \[ \varepsilon_d(\mu, P/\rho) = \{ x \in \mathbb{R}^d \mid (x - \mu)' P (x - \mu) \le \rho \}, \tag{3.1.2} \]
  • 16. Section 3.1. Theory
    Figure 3.1. Ten randomly generated classes, each with 50 points. For these classes, the separating ellipsoids have been computed and the separation ratios ρ_k, k = 1, ..., 10, have been maximized.
    where the ratio √ρ is the ratio between the lengths of the corresponding semimajor axes of the two concentric ellipsoids. Suppose we are given the labeled training set \( \{(x_i, c_i)\}_{i=1}^{M} \). For each class k ∈ {1, ..., N}, the associated problem is to find the ellipsoids ε_d(μ_k, P_k) and ε_d(μ_k, P_k/ρ_k) such that ρ_k is maximized while satisfying the constraints
    \[ x_i \in \varepsilon_d(\mu_k, P_k), \quad \forall i : c_i = k, \qquad x_i \notin \varepsilon_d\!\left( \mu_k, \frac{P_k}{\rho_k} \right), \quad \forall i : c_i \ne k. \tag{3.1.3} \]
    In other words, the problem is
    \[ \begin{array}{ll} \text{maximize} & \rho_k \\ \text{subject to} & (x_i - \mu_k)' P_k (x_i - \mu_k) \le 1, \quad \forall i : c_i = k \\ & (x_i - \mu_k)' P_k (x_i - \mu_k) \ge \rho_k, \quad \forall i : c_i \ne k \\ & P_k \succeq 0, \quad \rho_k \ge 1, \end{array} \tag{3.1.4} \]
    where the optimization variables are μ_k, P_k, ρ_k, and P_k ⪰ 0 denotes the constraint that P_k must be a positive semidefinite matrix. This problem is always feasible, and the patterns of class k are separable from all the others using ellipsoids if and only if the optimal ρ_k > 1.
  • 17. A New Approach, Chapter 3
    To handle also the case when the patterns are not strictly separable using ellipsoids, the idea of soft margins is used: slack variables ξ_i are introduced for each of the pattern inclusion or exclusion constraints, and a weighted penalty term is added to the objective function. The new problem is then formulated as
    \[ \begin{array}{ll} \text{maximize} & \rho_k - \gamma \sum_i \xi_i \\ \text{subject to} & (x_i - \mu_k)' P_k (x_i - \mu_k) \le 1 + \xi_i, \quad \forall i : c_i = k \\ & (x_i - \mu_k)' P_k (x_i - \mu_k) \ge \rho_k - \xi_i, \quad \forall i : c_i \ne k \\ & P_k \succeq 0, \quad \xi_i \ge 0. \end{array} \tag{3.1.5} \]
    Here, γ is a positive weighting parameter. Both problems (3.1.4) and (3.1.5) are nonconvex, but can be turned into convex optimization problems (SDPs) using a homogeneous embedding technique.
    3.2 Homogeneous Embedding
    Any ellipsoid in ℝ^d can be viewed as the intersection of a homogeneous ellipsoid (one centered at the origin) in ℝ^(d+1) and the hyperplane
    \[ H = \{ z \in \mathbb{R}^{d+1} \mid z = (x, 1),\; x \in \mathbb{R}^d \}. \tag{3.2.1} \]
    A homogeneous ellipsoid in ℝ^(d+1) can be expressed as
    \[ \varepsilon_{d+1}(0, \Phi) = \{ z \in \mathbb{R}^{d+1} \mid z' \Phi z \le 1 \}, \tag{3.2.2} \]
    where Φ is a symmetric positive semidefinite matrix. To find the intersection of ε_(d+1)(0, Φ) with the hyperplane H, the matrix Φ is partitioned as
    \[ \Phi = \begin{pmatrix} P & q \\ q' & r \end{pmatrix}, \tag{3.2.3} \]
    where P ∈ ℝ^(d×d), q ∈ ℝ^d, and r ∈ ℝ. If z = (x, 1), then
    \[ z' \Phi z \le 1 \iff x' P x + 2 q' x + r \le 1. \tag{3.2.4} \]
    Now let
    \[ \mu = -P^{-1} q, \qquad \delta = r - q' P^{-1} q; \tag{3.2.5} \]
    then
    \[ z' \Phi z \le 1 \iff (x - \mu)' P (x - \mu) + \delta \le 1. \tag{3.2.6} \]
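The partition (3.2.3) and the transformations (3.2.5) can be checked with a small round trip: build a homogeneous embedding Φ of a known ellipsoid and recover its centre μ = −P⁻¹q and δ = r − q′P⁻¹q. The example ellipsoid below (a unit circle centred at (1, 0), embedded with δ = 0.5) is hypothetical.

```python
import numpy as np

def recover_ellipsoid(Phi):
    """Recover (mu, shape, delta) from a homogeneous matrix Phi partitioned
    as in (3.2.3); the intersection with H has shape matrix P / (1 - delta)."""
    P, q, r = Phi[:-1, :-1], Phi[:-1, -1], Phi[-1, -1]
    Pinv_q = np.linalg.solve(P, q)
    mu = -Pinv_q                       # mu = -P^{-1} q  (3.2.5)
    delta = r - q @ Pinv_q             # delta = r - q' P^{-1} q  (3.2.5)
    return mu, P / (1.0 - delta), delta

# Hypothetical embedding of the unit circle centred at mu0 = (1, 0), delta = 0.5
mu0, P0, delta0 = np.array([1.0, 0.0]), np.eye(2), 0.5
q0 = -(1.0 - delta0) * P0 @ mu0
Phi = np.block([
    [(1.0 - delta0) * P0, q0[:, None]],
    [q0[None, :], np.array([[(1.0 - delta0) * mu0 @ P0 @ mu0 + delta0]])],
])
mu, P_shape, delta = recover_ellipsoid(Phi)
```

The recovered centre, shape matrix, and δ match the values used to build the embedding.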
  • 18. Section 3.3. SDP Formulations
    Since P is positive semidefinite, we always have δ ≤ 1. In addition, whenever P is positive definite, we have 0 ≤ δ < 1 and
    \[ \varepsilon_{d+1}(0, \Phi) \cap H = \varepsilon_d\!\left( \mu, \frac{P}{1 - \delta} \right). \tag{3.2.7} \]
    In this case, ε_(d+1)(0, Φ) is called a homogeneous embedding of ε_d(μ, P/(1−δ)). Given a nondegenerate ellipsoid ε_d(μ, P), its homogeneous embedding in ℝ^(d+1) is nonunique and can be parametrized as ε_(d+1)(0, Φ_δ), for 0 ≤ δ < 1, where
    \[ \Phi_\delta = \begin{pmatrix} (1-\delta) P & -(1-\delta) P \mu \\ -(1-\delta) \mu' P & (1-\delta) \mu' P \mu + \delta \end{pmatrix}. \tag{3.2.8} \]
    The special case δ = 0 is called a canonical embedding; in this case ε_(d+1)(0, Φ_0) is a degenerate ellipsoid, because the matrix Φ_0 is singular.
    3.3 SDP Formulations
    In this section, the problem of separating patterns in ℝ^d using homogeneous ellipsoids in ℝ^(d+1) is considered. At first, only two classes are considered: the inner points and the outer ones. For this purpose, the M training data are divided into two sets \( \{x_i\}_{i=1}^{M_1} \) and \( \{y_j\}_{j=1}^{M_2} \), where the x_i's are points belonging to a specific class, the y_j's are all the other points, and M_1 + M_2 = M. First, the training data must be embedded in the hyperplane H by letting
    \[ a_i = \begin{pmatrix} x_i \\ 1 \end{pmatrix}, \quad i = 1, \ldots, M_1, \qquad b_j = \begin{pmatrix} y_j \\ 1 \end{pmatrix}, \quad j = 1, \ldots, M_2. \tag{3.3.1–3.3.2} \]
    Problem (3.1.4) using the homogeneous embedding can be stated as
    \[ \begin{array}{ll} \text{maximize} & \rho \\ \text{subject to} & a_i' \Phi a_i \le 1, \quad i = 1, \ldots, M_1 \\ & b_j' \Phi b_j \ge \rho, \quad j = 1, \ldots, M_2 \\ & \Phi \succeq 0, \quad \rho \ge 1, \end{array} \tag{3.3.3} \]
    where the optimization variables are Φ and ρ. This is a convex optimization problem, more specifically an SDP. Once problem (3.3.3) is solved, the parametrization in ℝ^d can be recovered using the transformations in (3.2.4) and (3.2.5). The two concentric separating ellipsoids are
    \[ \varepsilon_d(\mu, P/(1-\delta)), \qquad \varepsilon_d(\mu, P/(\rho(1-\delta))). \tag{3.3.4} \]
    The next important properties are satisfied:
  • 19. A New Approach, Chapter 3
    ∙ If the patterns are separable, then the optimal solution to (3.3.3) is always a canonical embedding, i.e., δ = 0.
    ∙ If the patterns are nonseparable, then the optimal solution to (3.3.3) always has ρ = 1, and Φ is degenerate such that δ = 1.
    Similarly, the version with slack variables can be formulated as an SDP problem:
    \[ \begin{array}{ll} \text{maximize} & \rho - \gamma \left( \sum_i \xi_i + \sum_j \eta_j \right) \\ \text{subject to} & a_i' \Phi a_i \le 1 + \xi_i, \quad i = 1, \ldots, M_1 \\ & b_j' \Phi b_j \ge \rho - \eta_j, \quad j = 1, \ldots, M_2, \end{array} \tag{3.3.5} \]
    where the optimization variables are Φ, ρ, and the slack variables ξ_i and η_j, while γ is a weighting parameter. Once the SDP problem (3.3.5) is solved, the transformations (3.2.4) and (3.2.5) can again be used to find the separating ellipsoids in ℝ^d.
    Instead of using the weighting parameter γ, problem (3.3.5) can be parametrized using the ratio ρ. Switching to notation that is explicit for multiclass problems, given any fixed ρ ≥ 1, for each class k ∈ {1, ..., N} the problem to solve is
    \[ \begin{array}{ll} \text{minimize} & \sum_{i=1}^{M} \eta_i^k \\ \text{subject to} & z_i' \Phi_k z_i \le 1 + \eta_i^k, \quad \forall i : c_i = k \\ & z_i' \Phi_k z_i \ge \rho - \eta_i^k, \quad \forall i : c_i \ne k \\ & \Phi_k \succeq 0, \quad \eta_i^k \ge 0, \quad i = 1, \ldots, M, \end{array} \tag{3.3.6} \]
    where z_i = (x_i, 1) and the optimization variables are Φ_k and η_i^k.
    In Figures 3.2 and 3.3, problem (3.3.6) has been solved for ten randomly generated classes, each consisting of 50 points, using values of ρ equal to 1.5 and 2, respectively.
    3.4 Classification Rule
    After solving, in the training phase, problem (3.3.6) for each class k ∈ {1, ..., N}, let Φ_k*, k = 1, ..., N, be the optimal solutions obtained. Then, given a new data point x ∈ ℝ^d, we first let z = (x, 1) ∈ ℝ^(d+1) and compute
    \[ \rho_k = z' \Phi_k^* z, \quad k = 1, \ldots, N, \tag{3.4.1} \]
  • 20. Section 3.4. Classification Rule
    Figure 3.2. Problem (3.3.6) solved for ten randomly generated classes, each consisting of 50 points, using a value of ρ equal to 1.5.
    Figure 3.3. Problem (3.3.6) solved for ten randomly generated classes, each consisting of 50 points, using a value of ρ equal to 2.
    then label the data with the class
    \[ \hat{k} = \arg\min_k z' \Phi_k^* z. \tag{3.4.2} \]
    The quantity ρ_k = z′Φ_k*z, for each k = 1, ..., N, can be seen as a distance (ellipsoid distance) of the point z ∈ ℝ^(d+1) from class k. The class k̂ that realizes the minimum of this distance is the class that best represents the point z. The next chapters will deal with the application of this concept to the problem of wireless positioning, as an alternative to the Gaussian method.
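Once the matrices Φ_k* are available, the classification rule (3.4.1)–(3.4.2) takes only a few lines. In the sketch below, two canonical embeddings of hypothetical unit balls stand in for trained ellipsoids.

```python
import numpy as np

def classify(x, Phis):
    """Ellipsoid-distance classifier: embed z = (x, 1) and return the class
    minimising rho_k = z' Phi_k z, together with all the distances."""
    z = np.append(np.asarray(x, dtype=float), 1.0)
    rhos = np.array([z @ Phi @ z for Phi in Phis])
    return int(np.argmin(rhos)), rhos

def canonical_embedding(mu, P):
    """Homogeneous matrix of eps_d(mu, P) with delta = 0 (cf. (3.2.8))."""
    mu = np.asarray(mu, dtype=float)
    q = -P @ mu
    return np.block([[P, q[:, None]],
                     [q[None, :], np.array([[mu @ P @ mu]])]])

# Two hypothetical classes: unit balls centred at (0, 0) and (4, 0)
Phis = [canonical_embedding([0.0, 0.0], np.eye(2)),
        canonical_embedding([4.0, 0.0], np.eye(2))]
k_hat, rhos = classify([3.5, 0.0], Phis)
```

For a canonical embedding, z′Φz equals (x − μ)′P(x − μ), so the point (3.5, 0) is assigned to the second class.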
  • 21. Chapter 4 DATA
    4.1 Plotting the locations on the maps
    The first hurdle to overcome concerns plotting the survey locations on the map. Given the building map image, the GPS coordinates of both the top-left (NW) corner and the bottom-right (SE) corner of the map, and the GPS coordinates of the locations, we need to remap these coordinates to the ones to be used on the display. For each pair of GPS coordinates (latitude(·), longitude(·)) we set
    \[ x_{GPS}(\cdot) = longitude(\cdot), \qquad y_{GPS}(\cdot) = latitude(\cdot). \tag{4.1.1–4.1.2} \]
    First, the GPS coordinates (x_GPS(·), y_GPS(·)) must be converted to a regular coordinate system, where the x unit distance is no longer dependent on the y scale value. To perform this, it is necessary to fix a common latitude y_COM, which should be the same for all the points used in the calculation. The needed transformation is
    \[ x_{UGPS} = x_{GPS} \cdot \cos\!\left( \frac{2\pi}{360} \cdot y_{COM} \right), \qquad y_{UGPS} = y_{GPS}. \tag{4.1.3–4.1.4} \]
    Applying this transformation, the uniformed GPS coordinates of the NW and SE corners are found. Let now P be a survey location with uniformed GPS coordinates (x_UGPS(P), y_UGPS(P)). By a translation, one puts the origin in the SE corner and finds the new coordinates
  • 22. Section 4.1. Plotting the locations on the maps
    Figure 4.1. A rotation of angle β − α is needed so that the rectangle becomes straight.
    of the NW corner and of the location P as
    \[ x_{UGPSt}(P) = x_{UGPS}(P) - x_{UGPS}(SE), \tag{4.1.5} \]
    \[ y_{UGPSt}(P) = y_{UGPS}(P) - y_{UGPS}(SE), \tag{4.1.6} \]
    \[ x_{UGPSt}(NW) = x_{UGPS}(NW) - x_{UGPS}(SE), \tag{4.1.7} \]
    \[ y_{UGPSt}(NW) = y_{UGPS}(NW) - y_{UGPS}(SE), \tag{4.1.8} \]
    \[ x_{UGPSt}(SE) = x_{UGPS}(SE) - x_{UGPS}(SE) = 0, \tag{4.1.9} \]
    \[ y_{UGPSt}(SE) = y_{UGPS}(SE) - y_{UGPS}(SE) = 0. \tag{4.1.10} \]
    The next steps are suggested by the scheme in Figure 4.1. An anticlockwise rotation of angle α − β is applied to the points NW and P, so that the NE and SW corners lie on the y-axis and on the x-axis, respectively. It is then possible to use these new coordinates to plot the locations on the image of the map.
    The angle β is computed using the image of the map. In particular, denoting by w the width and by h the height of the map image in pixels, the length of the diagonal d of the image in pixels is given by
    \[ d = \sqrt{w^2 + h^2}. \tag{4.1.12} \]
  • 23. Data, Chapter 4
    Figure 4.2. After the rotation, we are able to compute the new coordinates of the NE and SW corners.
    The angle β is then given by
    \[ \beta = \arcsin\!\left( \frac{w}{d} \right). \tag{4.1.13} \]
    The angle α, instead, is given by
    \[ \alpha = \arctan\!\left( \frac{y_{UGPS}(NW) - y_{UGPS}(SE)}{x_{UGPS}(SE) - x_{UGPS}(NW)} \right). \tag{4.1.14} \]
    An anticlockwise rotation is applied to the points P and NW using the matrix
    \[ A = \begin{pmatrix} \cos(\alpha - \beta) & -\sin(\alpha - \beta) \\ \sin(\alpha - \beta) & \cos(\alpha - \beta) \end{pmatrix}. \tag{4.1.15} \]
    The new coordinates of the NW corner and of the location P are then given by
    \[ \begin{pmatrix} x_{UGPStr}(NW) \\ y_{UGPStr}(NW) \end{pmatrix} = A \cdot \begin{pmatrix} x_{UGPSt}(NW) \\ y_{UGPSt}(NW) \end{pmatrix}, \tag{4.1.16} \]
    \[ \begin{pmatrix} x_{UGPStr}(P) \\ y_{UGPStr}(P) \end{pmatrix} = A \cdot \begin{pmatrix} x_{UGPSt}(P) \\ y_{UGPSt}(P) \end{pmatrix}. \tag{4.1.17} \]
  • 24. Section 4.1. Plotting the locations on the maps
    Figure 4.3. Change of coordinates of the location P to the domain [0, 1] × [0, 1]. The new coordinates u(P) and v(P) represent the fractions of the width w and of the height h of the map image at which the corresponding pixel lies.
    In this new coordinate system it is now easy to find the coordinates of the NE and SW corners, as shown in Figure 4.2. In particular:
    \[ \begin{pmatrix} x_{UGPStr}(NE) \\ y_{UGPStr}(NE) \end{pmatrix} = \begin{pmatrix} 0 \\ y_{UGPStr}(NW) \end{pmatrix}, \tag{4.1.18} \]
    \[ \begin{pmatrix} x_{UGPStr}(SW) \\ y_{UGPStr}(SW) \end{pmatrix} = \begin{pmatrix} x_{UGPStr}(NW) \\ 0 \end{pmatrix}. \tag{4.1.19} \]
    A final translation of the points is necessary to put the origin in the SW corner and obtain new positive coordinates (x_UGPStrt(·), y_UGPStrt(·)) for the corners SE, NW, NE and the point P. At this point we calculate:
    \[ u(P) = \frac{x_{UGPStrt}(P)}{x_{UGPStrt}(SE)}, \tag{4.1.21} \]
    \[ v(P) = \frac{y_{UGPStrt}(P)}{y_{UGPStrt}(NW)}. \tag{4.1.22} \]
  • 25. Data, Chapter 4
    Basically, as shown in Figure 4.3, the initial domain is restricted to [0, 1] × [0, 1], and u(P), v(P) are the coordinates of the location P in the new domain; they represent the fractions of the width w and of the height h of the map image at which the corresponding pixel lies. Then, in order to find the corresponding pixel (px, py) in the map:
    \[ px = round(w \cdot u), \tag{4.1.23} \]
    \[ py = round(h \cdot (1 - v)), \tag{4.1.24} \]
    where round means rounding to the nearest integer.
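The whole chain of Section 4.1 — uniformisation (4.1.3)–(4.1.4), translation to the SE corner, rotation by α − β, translation to the SW corner, and the final rescaling (4.1.21)–(4.1.24) — can be sketched end to end. The corner coordinates, image size, and the choice y_COM = 0 below are hypothetical.

```python
import math

def gps_to_pixel(lat, lon, nw, se, w, h, y_com=0.0):
    """Map a GPS coordinate to image pixels following Section 4.1.
    nw and se are the (latitude, longitude) pairs of the NW and SE corners."""
    def uniform(lat_, lon_):
        # (4.1.3)-(4.1.4): make the x unit independent of the latitude
        return lon_ * math.cos(2.0 * math.pi / 360.0 * y_com), lat_

    x_nw, y_nw = uniform(*nw)
    x_se, y_se = uniform(*se)
    x_p, y_p = uniform(lat, lon)

    # translation: put the origin in the SE corner ((4.1.5)-(4.1.10))
    x_nw, y_nw = x_nw - x_se, y_nw - y_se
    x_p, y_p = x_p - x_se, y_p - y_se

    # rotation by alpha - beta ((4.1.13)-(4.1.15))
    beta = math.asin(w / math.hypot(w, h))
    alpha = math.atan2(y_nw, -x_nw)
    t = alpha - beta
    def rot(x, y):
        return (math.cos(t) * x - math.sin(t) * y,
                math.sin(t) * x + math.cos(t) * y)
    x_nw, y_nw = rot(x_nw, y_nw)
    x_p, y_p = rot(x_p, y_p)

    # translation to the SW corner, then the ratios u, v ((4.1.21)-(4.1.22))
    u = (x_p - x_nw) / (-x_nw)
    v = y_p / y_nw
    # pixel coordinates ((4.1.23)-(4.1.24))
    return round(w * u), round(h * (1.0 - v))
```

For a hypothetical map with NW = (1, 0), SE = (0, 2) and a 200 × 100 pixel image, the NW corner lands on pixel (0, 0) and the centre of the map on the centre of the image.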
  • 26. Section 4.2. Data
    Figure 4.4. Map of the Entré shopping mall in Malmö with the 61 plotted locations.
    4.2 Data
    The data used to carry out the preliminary analyses were collected in the Entré shopping mall in Malmö. The dataset consists of:
    ∙ 70 access points (APs) for the wireless signal;
    ∙ 13 access points (APs) for the GSM signal;
    ∙ 61 survey locations;
    ∙ 320 measurements of the wireless signal in each of the 61 locations;
    ∙ 240 measurements of the GSM signal in each of the 61 locations.
    The GSM signal is treated in the same way as the WiFi signal. Since the final results for the GSM measurements are not as good as those for the WiFi measurements, this report will treat only the WiFi signal.
    The first step is to project the WiFi measurements into a lower dimensional space by a projection matrix A, as described in Section 2.5, to reduce the number of APs from 70 to 13 DCs. Since we will perform this step on all the datasets that we have to deal with, from now on, to simplify the notation, we use the wording RSS measurements to denote the projected RSS measurements. Furthermore, for each location i, with i = 1, ..., 61, the signature vectors, that is, the mean vector and the standard deviation value of the RSS measurements, have been calculated.
  • 27. Data, Chapter 4
    We initially attempt to apply the method of the separating ellipsoids to the training measurements of each survey location k. In other words, fixing a location k, we want to construct the ellipsoid that contains the RSS measurements of location k and keeps outside all the measurements of the sites different from k. Given q_i(1), ..., q_i(M), the M RSS measurements at location i, and letting
    \[ z_i(j) = \begin{pmatrix} q_i(j) \\ 1 \end{pmatrix}, \tag{4.2.1} \]
    the SDP problem to solve is the following:
    \[ \begin{array}{ll} \text{maximize} & \rho \\ \text{subject to} & z_k(i)' \Phi_k z_k(i) \le 1, \quad \forall i = 1, \ldots, M \\ & z_l(j)' \Phi_k z_l(j) \ge \rho, \quad \forall l = 1, \ldots, N,\; l \ne k,\; \forall j = 1, \ldots, M \\ & \Phi_k \succeq 0, \quad \rho > 1, \end{array} \tag{4.2.2} \]
    where the optimization variables are Φ_k and ρ. As expected, the measurements are not strictly separable, and for this reason the version with all slack variables is used:
    \[ \begin{array}{ll} \text{minimize} & \sum_{i,j} \eta_{ij} \\ \text{subject to} & z_k(j)' \Phi_k z_k(j) \le 1 + \eta_{kj}, \quad \forall j = 1, \ldots, M \\ & z_l(j)' \Phi_k z_l(j) \ge \rho - \eta_{lj}, \quad \forall l = 1, \ldots, N,\; l \ne k,\; \forall j = 1, \ldots, M \\ & \Phi_k \succeq 0, \quad \eta_{lj} \ge 0, \end{array} \tag{4.2.3} \]
    where the optimization variables are Φ_k and all the slack variables η_ij. Regrettably, due to the large number of variables in this optimization problem, the solution requires a vast amount of computational power and memory. For this reason, in view of larger environments, alternative versions have been formulated in order to decrease the number of variables and constraints to optimize.
    4.3 Version with one slack for location
    One may simplify the original problem by restricting the formulation to allow for only one slack variable for each location. This approach differs from the version
with all slack variables, since the points of each location all share the same slack variable. That is, let k be a fixed survey location. The formulation of the problem is:

    minimize    Σ_{l=1}^{N} η_l
    subject to  z_k(i)' Φ_k z_k(i) ≤ 1 + η_k,   ∀i = 1, . . . , M           (4.3.1)
                z_l(i)' Φ_k z_l(i) ≥ ρ − η_l,   ∀l = 1, . . . , N, l ≠ k, ∀i = 1, . . . , M
                Φ_k ⪰ 0,                                                    (4.3.2)
                η_l ≥ 0,   ∀l = 1, . . . , N,                               (4.3.3)

where the optimization variables are Φ_k and the N slack variables η_l. The efficiency in computing the ellipsoid improves substantially, since the number of slack variables to optimize is N, against N · M in the version with all the slack variables. The main drawback arises when the classes are not separable: in this case the ellipsoids, even if each contains most of the points of its respective class, overlap critically, and this can lead to a misclassification of the data.

4.4 Iterative version

As an alternative, instead of considering all M measurements for each location k, one may initially solve the optimization problem (4.2.3) taking into account a lower number of measurements for each location. During the following iterations, one then adds the same number of new measurements and solves the optimization problem again, using for the old points the values of the slack variables computed at the previous steps. In particular, let n1 be an integer such that n1 < M and M is a multiple of n1. At the first iteration, the version with all slack variables is applied, but with only n1 points. That is, let k be a fixed location; then:

    minimize    Σ_{j,i} η_ji^(1)
    subject to  z_k(i)' Φ_k^(1) z_k(i) ≤ 1 + η_ki^(1),   ∀i = 1, . . . , n1
                z_j(i)' Φ_k^(1) z_j(i) ≥ ρ − η_ji^(1),   ∀j = 1, . . . , N, j ≠ k, ∀i = 1, . . . , n1
                Φ_k^(1) ⪰ 0,                                                (4.4.1)
                η_ji^(1) ≥ 0,   ∀j, ∀i,                                     (4.4.2)
Figure 4.5. The iterative version solved for ten randomly generated classes, each consisting of 50 points, using a value of ρ equal to 1.1 and adding 25 points at each iteration.

where the optimization variables are Φ_k^(1) and the N · n1 slack variables η_ji^(1). From the second iteration on, n1 points are added to each location. For the old points, the values of the slack variables computed at the previous steps are used. That is, at iteration l, with l ≥ 2:

    minimize    Σ_{j,i} η_ji^(l)
    subject to  z_k(i)' Φ_k^(l) z_k(i) ≤ 1 + η_ki^(l−1),   ∀i = 1, . . . , (l − 1) · n1
                z_k(i)' Φ_k^(l) z_k(i) ≤ 1 + η_ki^(l),     ∀i = (l − 1) · n1 + 1, . . . , l · n1
                z_j(i)' Φ_k^(l) z_j(i) ≥ ρ − η_ji^(l−1),   ∀j = 1, . . . , N, j ≠ k, ∀i = 1, . . . , (l − 1) · n1
                z_j(i)' Φ_k^(l) z_j(i) ≥ ρ − η_ji^(l),     ∀j = 1, . . . , N, j ≠ k, ∀i = (l − 1) · n1 + 1, . . . , l · n1
                Φ_k^(l) ⪰ 0,
                η_ji^(l) ≥ 0,   ∀j, ∀i,                                     (4.4.3)

where η_ji^(l−1) are the slack variables optimized at the previous step, while η_ji^(l) are the ones that correspond to the newly added points and must be optimized. After the last iteration, the last slack variables might be used to recompute the previous
ones. After testing this method on simulated data, in which each class consists of 50 points, it has been noticed that if only two points are added at each iteration, the ellipsoids are inaccurate: they are shifted with respect to the points that they should contain. When the number of points added at each iteration is increased, the obtained ellipsoids are almost equal to the ones computed with the all-slack version. In order to obtain accurate ellipsoids, the number of points n1 should be greater than or equal to M/2. However, such a solution still suffers from demanding memory allocation, similar to the original problem. In Figure 4.5, the iterative version has been solved for ten randomly generated classes, each consisting of 50 points, using a value of ρ equal to 1.1 and adding 25 points at each iteration.

4.5 Variance ellipsoids

A third option consists of building, after fixing a location k, a normally oriented (i.e., axis-aligned) ellipsoid (variance ellipsoid) that contains at least most of the measurements of the survey site k. For all the remaining locations, it is then possible to check how many measurements lie inside this ellipsoid and to take into account, when computing the separating ellipsoid for the location k, only the locations i whose measurements mostly overlap the measurements of the site k. In particular, a normally oriented ellipsoid in an n-dimensional space is defined by the equation

    x' · A · x ≤ 1

where A is an n-dimensional diagonal matrix

    A = diag(λ_1, . . . , λ_n)

and λ_i, for i = 1, . . . , n, are the eigenvalues of the matrix. The relation between the equatorial radii r_j, for j = 1, . . . , n, of the ellipsoid and the eigenvalues λ_j is given by

    r_j = 1 / √λ_j                                                           (4.5.1)

Since for each location i it is possible to calculate the mean vector of the RSS measurements and the vector of the standard deviations σ(i) = [σ_1(i), . . . , σ_D(i)]', it is also possible to construct the ellipsoid that contains most of the points of the site i. The matrix which defines the ellipsoid is given by:

    B(i) = C · diag( 1/σ_1²(i), . . . , 1/σ_D²(i) )                          (4.5.2)
where C is a chosen constant that depends on the number of DCs. Once the ellipsoids have been computed for each location, we perform the following steps:

1. Fix a location k.
2. For each location j, with j = 1, . . . , N, count how many measurements of this location lie inside the variance ellipsoid of the site k.
3. Repeat the two steps above for each location k, with k = 1, . . . , N.

By doing this, it is possible to know, for each site k, which locations j have measurements that mostly overlap the measurements of the site k. Finally, taking into consideration only these locations j when computing the separating ellipsoid for the location k, it is possible to reduce the number of constraints and variables to optimize. Regrettably, it can happen that far-away locations have many overlapping measurements, or that many locations overlap each other. For this reason, larger buildings will still necessitate prohibitive memory requirements.

Figure 4.6. A set of 3-D points with the corresponding variance ellipsoid, computed as described in Section 4.5. As can be observed, almost all the points lie inside the ellipsoid.
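The counting procedure above can be sketched as follows (hypothetical helper names; it is assumed, as the use of the mean vector suggests, that the variance ellipsoid is centered at the location's mean, and the constant C is left as a free parameter):

```python
def inside_variance_ellipsoid(x, mu, sigma, C=1.0):
    """True if (x - mu)' B (x - mu) <= 1, with the diagonal matrix
    B = C * diag(1/sigma_1^2, ..., 1/sigma_D^2) of equation (4.5.2)."""
    q = sum(C * ((xi - mi) / si) ** 2 for xi, mi, si in zip(x, mu, sigma))
    return q <= 1.0

def overlap_counts(measurements, signatures, C=1.0):
    """For every pair (k, j), count how many measurements of location j
    fall inside the variance ellipsoid of location k.

    measurements: dict location -> list of measurement vectors
    signatures:   dict location -> (mean vector, std-deviation vector)
    """
    counts = {}
    for k, (mu, sigma) in signatures.items():
        for j, points in measurements.items():
            counts[(k, j)] = sum(
                inside_variance_ellipsoid(x, mu, sigma, C) for x in points)
    return counts

counts = overlap_counts({1: [[0, 0], [4, 0]], 2: [[1, 1]]},
                        {1: ([0, 0], [1, 2])}, C=0.25)
```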
Chapter 5

THE A* ALGORITHM

5.1 The A* algorithm

In computer science, A* is a widely used algorithm for path-finding and graph traversal, the process of plotting an efficiently traversable path between points, called nodes. Noted for its performance and accuracy, the algorithm was first described by Peter Hart, Nils Nilsson and Bertram Raphael in 1968 [6]. It is an extension of Edsger Dijkstra's 1959 algorithm and achieves better performance (with respect to time) by using heuristic distances. A* uses a best-first search and finds the least-cost path from a given initial node to a goal node. To achieve this, it uses a distance-plus-cost heuristic function (usually denoted by f(x)) to determine the order in which the search visits nodes in the tree. The distance-plus-cost heuristic is the sum of two functions:

∙ the path-cost function, which is the cost from the starting node to the current node (usually denoted by g(x)),
∙ and an admissible "heuristic estimate" of the distance from the node to the goal node (usually denoted by h(x)).

In mathematical terms, f(x) is then given by:

    f(x) = g(x) + h(x).                                                      (5.1.1)

The function h(x) is a mathematical way of using prior knowledge in order to expand the fewest possible nodes when searching for an optimal path, thus speeding up the algorithm. For example, inside a building, h(x) might represent the straight-line distance to the goal (without taking the walls into account), since that is physically the smallest possible distance between any two points or nodes. The adjective "admissible" means that h(x) must not overestimate the distance to the goal. Dijkstra's algorithm is the particular case of the A* algorithm obtained by using h(x) = 0.

5.1.1 How A* works

The A* algorithm requires a starting node s and a goal node t. The steps of the algorithm are as follows:
1. Add the starting node s to the "open set". The open set contains the nodes that must still be explored and that might be inserted in the optimal path towards the goal node t.

2. Repeat the following:

   a) Look for the node in the open set which has the lowest f value. We refer to this as the "current node".
   b) Move it to the "closed set".
   c) For each of the nodes that are reachable from the current node:
      ∙ If it is in the closed set, ignore it. Otherwise, do the following.
      ∙ If it is not in the open set, add it to the open set. Make the current node the parent of this node, and record the f, g and h costs of the node.
      ∙ If it is in the open set already, check whether this path to that node costs less, using the g cost as the measure. A lower g cost means that this is a better path. If so, change the parent of the node to the current node and recalculate the g and f scores of the node.
   d) Stop when:
      ∙ the goal node is added to the closed set, in which case the path has been found; or
      ∙ the goal node has not been found and the open set is empty, in which case there is no path.

3. Save the path. Working backwards from the goal node t, go from each node to its parent node until the starting node s is reached.

5.2 Applied A*

Since the buildings we have to deal with might have many walls or non-walkable zones, it is useful to compute the shortest paths from each survey location to the remaining ones. In fact, during the online phase, an RSS measurement is taken by the mobile client (MC) at least every two seconds. During this interval, the user is unlikely to move to a location far away from the point at which he or she is. Then, by calculating the real distances (the lengths of the shortest paths, taking into account possible obstacles) from one survey site to the others, we will be able to compute the ellipsoids considering only the locations that are most likely to be reached.
More specifically, fixing a survey point k, with k = 1, . . . , N, we compute the real distance Dist(k, i) from the location k to the location i for every i = 1, . . . , N. Then, fixing a parameter DMAX, the locations j, with j = 1, . . . , N, such that

    Dist(k, j) ≤ DMAX                                                        (5.2.1)
are selected. The ellipsoid for the location k is then computed, taking into account only the locations j that satisfy the inequality (5.2.1). This approach solves the memory problem in a simple and intuitive way, since the number of variables and constraints to optimize remains reasonable also for large environments.

Binary (black and white) images are stored in the computer as binary matrices, where 0 represents the black color and 1 represents the white color. When the image represents a map, the black color is used for the non-walkable zones and the white color for the walkable zones. In order to be able to compute the real distances in meters from one point of the building to another, the binary map needs to be resized. To perform this, a small distance d is fixed, depending on the degree of accuracy that we want to keep after the resizing. Usually, d is chosen to be half of the minimum distance between all the locations. Given the GPS coordinates of the North-West corner and the South-East corner of the map, it is possible to compute the GPS coordinates of the other two corners (North-East and South-West) and the distances in meters between these four corners. Using these data, we build a grid over the image, in which each edge of a square represents a distance equal to d. The resized binary map is again a binary matrix, in which each element, 1 or 0, represents a square of the grid. The A* algorithm, implemented in MATLAB, takes as input the resized binary image, a starting node (i_s, j_s) in the image matrix and a destination node (i_d, j_d) in the image matrix. The resulting path is a sequence of nodes in the image matrix: (i_s, j_s), (i_1, j_1), . . . , (i_d, j_d). An example can be observed in Figures 5.1 and 5.2.
For the heuristic distance h, it is reasonable to take the straight-line distance from the current element (i, j) to the destination element (i_d, j_d), multiplied by d, that is:

    h(i, j) = d · √( (i_d − i)² + (j_d − j)² )                               (5.2.2)

The transition costs are instead simply computed in the following way:

∙ cost of a horizontal movement = d,
∙ cost of a vertical movement = d,
∙ cost of a diagonal movement = d · √2.

The distance of the path is then computed by summing the costs of the transitions made from the starting point (i_s, j_s) to the final point (i_d, j_d). It is then possible to build the distance matrix Dist.
    0 0 0 0 0 0 0
    0 1 1 0 1 1 0
    0 1 0 1 0 1 0
    0 1 0 1 0 1 0
    0 1 0 1 0 1 0
    0 1 1 1 1 1 0
    0 1 1 0 1 1 0
    0 0 0 0 0 0 0

Figure 5.1. An example of a binary matrix and the corresponding image.

Figure 5.2. Example of the shortest path from the starting node (2, 3) to the final node (4, 6) in the matrix of Figure 5.1. The resulting path is: (2, 3), (2, 2), (3, 2), (4, 2), (5, 2), (6, 2), (6, 3), (6, 4), (6, 5), (6, 6), (5, 6), (4, 6), which corresponds to the red line in the image.
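The procedure of Section 5.1.1, with the grid costs and heuristic of Section 5.2, can be sketched in Python (the thesis' implementation is in MATLAB; this is an illustrative re-implementation). One detail the text leaves open is whether a diagonal step may squeeze between two blocked cells; here such corner-cutting is forbidden, an assumption consistent with the purely orthogonal red-line path of Figure 5.2.

```python
import heapq
from itertools import count
from math import sqrt

def astar(grid, start, goal, d=1.0):
    """A* shortest path on a binary grid (1 = walkable, 0 = wall).
    Horizontal/vertical moves cost d, diagonal moves cost d*sqrt(2);
    the heuristic is d times the straight-line distance (admissible).
    Returns (path, cost), or (None, inf) if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda n: d * sqrt((n[0] - goal[0]) ** 2 + (n[1] - goal[1]) ** 2)
    tie = count()                       # tie-breaker so the heap never compares nodes
    open_heap = [(h(start), next(tie), 0.0, start, None)]
    best_g = {start: 0.0}
    closed, parent = set(), {}
    while open_heap:
        _, _, g, node, par = heapq.heappop(open_heap)
        if node in closed:
            continue                    # stale entry: node already expanded
        closed.add(node)
        parent[node] = par
        if node == goal:                # walk back through the parents
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1], g
        r, c = node
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr, dc) == (0, 0) or not (0 <= nr < rows and 0 <= nc < cols):
                    continue
                if grid[nr][nc] == 0 or (nr, nc) in closed:
                    continue
                if dr and dc and (grid[r][nc] == 0 or grid[nr][c] == 0):
                    continue            # forbid diagonal corner-cutting
                ng = g + (d * sqrt(2) if dr and dc else d)
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(
                        open_heap,
                        (ng + h((nr, nc)), next(tie), ng, (nr, nc), node))
    return None, float("inf")

# The matrix of Figure 5.1, with 0-indexed start (1, 2) and goal (3, 5),
# i.e. the nodes (2, 3) and (4, 6) of Figure 5.2 in 1-indexed notation.
grid = [[0, 0, 0, 0, 0, 0, 0],
        [0, 1, 1, 0, 1, 1, 0],
        [0, 1, 0, 1, 0, 1, 0],
        [0, 1, 0, 1, 0, 1, 0],
        [0, 1, 0, 1, 0, 1, 0],
        [0, 1, 1, 1, 1, 1, 0],
        [0, 1, 1, 0, 1, 1, 0],
        [0, 0, 0, 0, 0, 0, 0]]
path, cost = astar(grid, (1, 2), (3, 5))
```

With d set to the grid spacing in meters, repeated calls over all pairs of survey-location cells yield the distance matrix Dist of Section 5.2.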
Chapter 6

ANALYSES

6.1 Computation of the distance matrix and of the ellipsoids

To evaluate the performance of the A* algorithm and of the ellipsoid method taking into account only the reachable locations, two different datasets have been considered. The first comes from the first floor of the Hansa Mall in Malmö and consists of 38 locations, with 160 Wifi measurements for the locations i with i = 1, . . . , 35 and 80 Wifi measurements for the locations i with i = 36, . . . , 38. The second comes from the ground floor of the Hansa Mall in Malmö and consists of 30 locations, each with 160 Wifi measurements. In Figure 6.1 it is possible to observe the two maps, of the first floor and of the ground floor, with the plotted locations.

Figure 6.1. (a) First floor (b) Ground floor

After resizing the maps, A* has been applied in order to obtain the distance matrix Dist, where Dist(i, j) is the real distance between the location i and the location j, taking the walls into account. In Figures 6.2-6.4, it is possible to
observe a comparison between the distances computed from some locations j to the other survey sites without taking the walls into account and the distances computed taking the walls into account. In Figure 6.5, it is clearly visible how the distance from the red location to the yellow one, when walls are taken into account, is much larger (44 m) than when walls are not taken into account (16 m).

Afterwards, for each location k only the sites that are reachable are selected. Here, we set the parameter DMAX equal to 12 m, and form a vector PT(k) that contains only the locations j that have a distance of less than DMAX meters from the location k. The ellipsoids are then computed for each location k, taking into account only the sites j stored in the vector PT(k). More specifically, the problem to solve is the following:

    minimize    Σ_{l,j} η_lj
    subject to  z_k(j)' Φ_k z_k(j) ≤ 1 + η_kj,   ∀j = 1, . . . , M(k)
                z_l(j)' Φ_k z_l(j) ≥ ρ − η_lj,   ∀l ∈ PT(k), l ≠ k, ∀j = 1, . . . , M(l)
                Φ_k ⪰ 0,  η_lj ≥ 0,                                          (6.1.1)

where the optimization variables are the slack variables η_lj and Φ_k, and where M(l), with l = 1, . . . , N, denotes the number of measurements at the location l, typically different for every location. Here, the values of ρ used are 1.075, 1.05 and 1.025. The main difference between the problem in (6.1.1) and the one in (4.2.2) is that the number of points that have to lie outside the ellipsoid for the location k is notably smaller, since we now consider only the locations that are reachable. On the other hand, as can be observed, there are no constraints for the points of locations further away than 12 meters. It can happen, then, that some RSS measurements of a location j would be classified in ellipsoids of far locations.
However, this does not constitute a problem, since the far locations are already cut out by selecting only the reachable ones.
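The selection of the reachable locations can be sketched as follows (hypothetical helper; Dist is the matrix produced by the A* runs of Chapter 5; note that PT(k) here also contains k itself, which the constraints of (6.1.1) exclude via l ≠ k):

```python
def reachable_sets(dist, dmax=12.0):
    """Build PT(k) for every location k: the indices j whose real
    (wall-aware) distance Dist(k, j) is at most dmax meters.

    dist: N x N matrix (nested lists) of shortest-path distances."""
    n = len(dist)
    return {k: [j for j in range(n) if dist[k][j] <= dmax] for k in range(n)}

pt = reachable_sets([[0, 5, 20],
                     [5, 0, 11],
                     [20, 11, 0]], dmax=12.0)
```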
Figure 6.2. (a) With no walls (b) With walls

Figure 6.3. (a) With no walls (b) With walls
Figure 6.4. (a) With no walls (b) With walls

Figure 6.5. (a) With no walls (b) With walls. In this Figure, it is clearly visible how the distance from the red location to the yellow one, when walls are taken into account, is much larger (44 m) than when walls are not taken into account (16 m).
6.2 Classification

The next step in the analysis is the classification. For each location k, we compare the percentage of its measurements that are correctly mapped to the site k when using:

∙ the Gaussian likelihood, taking into account all the locations;
∙ the Gaussian likelihood, taking into account only the reachable locations in PT(k);
∙ the ellipsoid distance, taking into account only the reachable locations.

More specifically, setting the initial counters Right1(k) = 0, Right2(k) = 0 and Right3(k) = 0, for all the measurements q = q_k(i) at the location k, the following quantities are computed:

∙ k̂1 = arg max_{i=1,...,N} P(q | F̂(p_i)),
∙ k̂2 = arg max_{i∈PT(k)} P(q | F̂(p_i)),
∙ k̂3 = arg min_{i∈PT(k)} [q' 1] · Φ_i · [q ; 1].

When k̂1 = k, k̂2 = k or k̂3 = k, we increment Right1(k), Right2(k) or Right3(k), respectively. We perform this for all the locations k, with k = 1, . . . , N. It is then possible to obtain the percentage of points, for each method, that are mapped to the right location. The final results are available in the following tables, both for the first floor and for the ground floor:

∙ first column: number of the location;
∙ second column: percentage of points mapped to the right location using the Gaussian likelihood, taking into account all the locations;
∙ third column: percentage of points mapped to the right location using the ellipsoid distance, taking into account only the reachable locations;
∙ fourth column: percentage of points mapped to the right location using the Gaussian likelihood, taking into account only the reachable locations.

The results for the ellipsoid method taking into account only the reachable locations are very promising.
In fact, as can be observed in all the tables, the percentages of points that are mapped to the right survey site using this method are much larger than those obtained using either the Gaussian method with all the locations or the Gaussian method with only the reachable locations. It is also worth noticing that the percentages of points mapped with the Gaussian method with only the reachable locations are higher than the percentages obtained with the Gaussian method with all the locations, since far locations are not considered.
First floor with DMAX = 12 m and rho = 1.075

Location   Gaussian Method (All)   Ell Method (Reach)   Gaussian Method (Reach)
 1         68.75                   93.75                85
 2         36.875                  64.375               36.875
 3          8.75                   73.75                 8.75
 4         31.875                  71.25                34.375
 5         16.25                   53.75                16.25
 6         26.25                   38.75                27.5
 7         46.25                   67.5                 46.25
 8         21.25                   82.5                 25
 9         85                      93.75                85
10         36.25                   78.125               36.25
11         42.5                    35.625               42.5
12         30                      61.25                41.25
13         51.25                   55                   51.25
14         18.75                   63.75                28.75
15         28.75                   51.25                30
16         17.5                    56.25                27.5
17         23.75                   73.125               46.25
18         54.375                  78.125               60.625
19         62.5                    87.5                 68.75
20         43.75                   66.25                50
21         21.25                   25                   21.25
22         37.5                    71.25                40
23         17.5                    73.75                18.75
24         52.5                    70                   57.5
25         12.5                    40                   12.5
26         21.25                   66.25                21.25
27         68.75                   65                   71.25
28         37.5                    68.75                40
29         38.75                   83.75                45
30         60                      76.25                60
31         60                      67.5                 60
32         59.375                  95                   60.625
33         32.5                    77.5                 45
34         63.75                   86.25                65
35         76.875                  89.375               76.875
36         40                      50                   40
37         45                      97.5                 53.75
38         70                      85                   70
First floor with DMAX = 12 m and rho = 1.05

Location   Gaussian Method (All)   Ell Method (Reach)   Gaussian Method (Reach)
 1         68.75                   95                   85
 2         36.875                  60.625               36.875
 3          8.75                   73.75                 8.75
 4         31.875                  71.25                34.375
 5         16.25                   55                   16.25
 6         26.25                   33.75                27.5
 7         46.25                   67.5                 46.25
 8         21.25                   82.5                 25
 9         85                      93.75                85
10         36.25                   78.125               36.25
11         42.5                    32.5                 42.5
12         30                      61.25                41.25
13         51.25                   55                   51.25
14         18.75                   65                   28.75
15         28.75                   51.25                30
16         17.5                    56.25                27.5
17         23.75                   73.125               46.25
18         54.375                  79.375               60.625
19         62.5                    87.5                 68.75
20         43.75                   66.25                50
21         21.25                   26.25                21.25
22         37.5                    71.25                40
23         17.5                    73.75                18.75
24         52.5                    70                   57.5
25         12.5                    40                   12.5
26         21.25                   66.25                21.25
27         68.75                   65                   71.25
28         37.5                    68.75                40
29         38.75                   83.75                45
30         60                      76.25                60
31         60                      67.5                 60
32         59.375                  93.75                60.625
33         32.5                    77.5                 45
34         63.75                   85                   65
35         76.875                  89.375               76.875
36         40                      50                   40
37         45                      97.5                 53.75
38         70                      90                   70
First floor with DMAX = 12 m and rho = 1.025

Location   Gaussian Method (All)   Ell Method (Reach)   Gaussian Method (Reach)
 1         68.75                   95                   85
 2         36.875                  61.875               36.875
 3          8.75                   73.75                 8.75
 4         31.875                  70                   34.375
 5         16.25                   55                   16.25
 6         26.25                   33.75                27.5
 7         46.25                   67.5                 46.25
 8         21.25                   82.5                 25
 9         85                      93.75                85
10         36.25                   78.125               36.25
11         42.5                    31.25                42.5
12         30                      61.25                41.25
13         51.25                   55                   51.25
14         18.75                   65                   28.75
15         28.75                   51.25                30
16         17.5                    56.25                27.5
17         23.75                   73.125               46.25
18         54.375                  79.375               60.625
19         62.5                    88.75                68.75
20         43.75                   67.5                 50
21         21.25                   30                   21.25
22         37.5                    71.25                40
23         17.5                    73.75                18.75
24         52.5                    68.75                57.5
25         12.5                    38.75                12.5
26         21.25                   66.25                21.25
27         68.75                   65                   71.25
28         37.5                    68.75                40
29         38.75                   83.75                45
30         60                      76.25                60
31         60                      67.5                 60
32         59.375                  93.75                60.625
33         32.5                    78.75                45
34         63.75                   85                   65
35         76.875                  89.375               76.875
36         40                      50                   40
37         45                      97.5                 53.75
38         70                      87.5                 70
Ground floor with DMAX = 12 m and rho = 1.075

Location   Gaussian Method (All)   Ell Method (Reach)   Gaussian Method (Reach)
 1         56                      79.2                 58.3
 2         42                      80                   52.5
 3         No Wifi measurements
 4         104                     88.75                65
 5         64                      90                   80
 6         56                      92.5                 70
 7         48                      100                  60
 8         58                      80                   72.5
 9         46                      85                   60
10         12                      100                  15
11         66                      83.75                46.25
12         44                      92.5                 62.5
13         46                      95                   57.5
14         38                      95                   47.5
15         48                      95                   60
16         50                      87.5                 62.5
17         28                      87.5                 35
18         34                      95                   42.5
19         70                      100                  87.5
20         24                      72.5                 30
21         46                      90                   57.5
22         58                      80                   72.5
23         22                      97.5                 40
24         52                      97.5                 65
25         No Wifi measurements
26         60                      97.5                 82.5
27         44                      87.5                 62.5
28         22                      80                   30
29         40                      87.5                 52.5
30         38                      87.5                 55
Ground floor with DMAX = 12 m and rho = 1.05

Location   Gaussian Method (All)   Ell Method (Reach)   Gaussian Method (Reach)
 1         56                      79.2                 58.3
 2         42                      82.5                 52.5
 3         No Wifi measurements
 4         104                     87.5                 65
 5         64                      90                   80
 6         56                      92.5                 70
 7         48                      100                  60
 8         58                      82.5                 72.5
 9         46                      87.5                 60
10         12                      100                  15
11         66                      83.75                46.25
12         44                      92.5                 62.5
13         46                      97.5                 57.5
14         38                      97.5                 47.5
15         48                      95                   60
16         50                      90                   62.5
17         28                      87.5                 35
18         34                      95                   42.5
19         70                      100                  87.5
20         24                      72.5                 30
21         46                      90                   57.5
22         58                      80                   72.5
23         22                      97.5                 40
24         52                      97.5                 65
25         No Wifi measurements
26         60                      97.5                 82.5
27         44                      90                   62.5
28         22                      77.5                 30
29         40                      87.5                 52.5
30         38                      87.5                 55
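The ellipsoid-distance classifier k̂3 can be sketched as follows (the matrices Φ_i come from the SDPs of Chapter 4; the two matrices in the usage below are hypothetical unit circles written in homogeneous form, one centered at (2, 0) and one at the origin):

```python
def classify_ellipsoid(q, phis, candidates):
    """Map measurement q to the candidate location i that minimizes
    [q' 1] * Phi_i * [q ; 1], i.e. the ellipsoid whose boundary q is
    deepest inside.

    phis: dict location -> (D+1) x (D+1) matrix (nested lists)
    candidates: iterable of locations to consider, e.g. PT(k)."""
    def quad(phi, q):
        z = list(q) + [1.0]            # homogeneous coordinates z = [q ; 1]
        return sum(z[a] * phi[a][b] * z[b]
                   for a in range(len(z)) for b in range(len(z)))
    return min(candidates, key=lambda i: quad(phis[i], q))

phi_a = [[1, 0, -2], [0, 1, 0], [-2, 0, 4]]   # unit circle centered at (2, 0)
phi_b = [[1, 0, 0], [0, 1, 0], [0, 0, 0]]     # unit circle centered at (0, 0)
phis = {"a": phi_a, "b": phi_b}
```

Restricting `candidates` to PT(k) is exactly what keeps far-away locations from ever being selected.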
6.3 Interpolation Step

The last step is the interpolation step. Let q = q_k(j) be the jth training measurement at the location k, and let p_k be the vector of the GPS coordinates (latitude and longitude) of the kth location. Using the measurement q and setting

    C(i) = 1     if 0 < Dist(i, k) < 5 m
           0.8   if 5 m ≤ Dist(i, k) < 8 m                                   (6.3.1)
           0.6   if 8 m ≤ Dist(i, k) ≤ 12 m

the position

    p̂ = Σ_{i=1}^{N} w_i · p_i                                               (6.3.2)

is estimated using the following weights:

∙ Gaussian method taking into account all the locations:

    w_i = P(q | F̂(p_i)) / Σ_{i=1}^{N} P(q | F̂(p_i)),   for i = 1, . . . , N  (6.3.3)

∙ Gaussian method taking into account only the reachable locations:

    w_i = C(i) · P(q | F̂(p_i)) / Σ_{i∈PT(k)} C(i) · P(q | F̂(p_i)),   ∀i ∈ PT(k)
    w_i = 0,   otherwise                                                     (6.3.4)

∙ Ellipsoid method taking into account only the reachable locations:

    w_i = ( Σ_{i∈PT(k)} C(i) · q' · Φ_i · q ) / ( C(i) · q' · Φ_i · q ),   ∀i ∈ PT(k)
    w_i = 0,   otherwise                                                     (6.3.5)
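The weighted position estimate can be sketched as follows, using the Gaussian-style normalized weights of (6.3.4). All names and inputs are hypothetical, and as a simplification the first distance band is taken as 0 ≤ Dist < 5 m, so that a location at distance zero still receives weight:

```python
def weight_C(dist):
    """Distance-dependent attenuation C(i) of equation (6.3.1), with the
    first band taken as 0 <= dist < 5 (an assumption) and 0 beyond 12 m."""
    if dist < 5:
        return 1.0
    if dist < 8:
        return 0.8
    if dist <= 12:
        return 0.6
    return 0.0

def interpolate_position(scores, coords, dists):
    """Weighted estimate p_hat = sum_i w_i p_i of equation (6.3.2), with
    w_i proportional to C(i) times a per-location score, as in (6.3.4).

    scores: location -> likelihood-like score
    coords: location -> (latitude, longitude)
    dists:  location -> Dist(i, k)"""
    weights = {i: weight_C(dists[i]) * s for i, s in scores.items()}
    total = sum(weights.values())
    lat = sum(w * coords[i][0] for i, w in weights.items()) / total
    lon = sum(w * coords[i][1] for i, w in weights.items()) / total
    return lat, lon

est = interpolate_position({1: 1.0, 2: 1.0},
                           {1: (0.0, 0.0), 2: (10.0, 0.0)},
                           {1: 0.0, 2: 6.0})
```

Here the second location, being 6 m away, is down-weighted by the 0.8 band, pulling the estimate toward the first location.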