Acknowledgements
First, I would like to thank my supervisor at the Department of Mathematical Statistics at Lund University, Professor Andreas Jakobsson, for his continuous support and commitment throughout this master's thesis. Furthermore, I am very grateful to Qubulus AB for providing me with the data and for giving me the opportunity of a three-month internship.

I would also like to thank my supervisor in Milano, Professor Marco Trubian, and my Erasmus coordinator, Professor Kevin Payne, for their help with all the bureaucratic issues. Special thanks go to all the international friends I have met during these ten months, who made my stay in Sweden more fun.

Last but not least, I would like to thank my mother and all my family for their continuous support and encouragement during all these years of study.
Abstract

Since the advent of the Global Positioning System (GPS) in 1999, positioning systems have been used to deliver location-based services (LBSs) in outdoor environments. In recent years, LBSs have become of equal interest in indoor environments, in a wide range of personal and commercial applications. For this reason, recent research has focused its attention on wireless positioning systems. This master's thesis presents a new approach for indoor positioning, based on the notion of separating ellipsoids. In order to improve the position estimation algorithm, the technique is combined with the A* algorithm, which is applied to binary maps of the examined buildings to take obstacles such as walls into account.

The combination of separating ellipsoids and A* seems to promise an improvement over previous algorithms based on probabilistic approaches.
CONTENTS

1 INTRODUCTION
1.1 Wireless Positioning Systems

2 PROBLEM EXPLANATION AND PREVIOUS WORK
2.1 Notation
2.2 Problem explanation
2.3 Properties of the weight functions $w_i$
2.4 The kernel-based method
2.5 A projection-based method

3 A NEW APPROACH
3.1 Theory
3.2 Homogeneous Embedding
3.3 SDP Formulations
3.4 Classification Rule

4 DATA
4.1 Plotting the locations on the maps
4.2 Data
4.3 Version with one slack per location
4.4 Iterative version
4.5 Variance ellipsoids

5 THE A* ALGORITHM
5.1 The A* algorithm
5.1.1 How A* works
5.2 Applied A*

6 ANALYSES
6.1 Computation of the distance matrix and of the ellipsoids
6.2 Classification
6.3 Interpolation Step
6.4 Results for the first floor of the Hansa Mall
6.5 Results for the ground floor of the Hansa Mall

7 CONCLUSIONS

BIBLIOGRAPHY
Chapter 1
INTRODUCTION
One must learn by doing the thing;
for though you think you know it
You have no certainty, until you try.
Sophocles
1.1 Wireless Positioning Systems
Since the advent of the Global Positioning System (GPS) in 1999 [1], positioning systems have been used to deliver location-based services (LBSs) in outdoor environments. In recent years, LBSs have become of equal interest in indoor environments, in a wide range of personal and commercial applications. These include location-based network management and security, medicine and health care, personalized information delivery, and context awareness.
Unfortunately, the coverage of the GPS system is limited in indoor environments and dense urban areas. For this reason, recent research has focused on existing wireless communication infrastructures, such as wireless local area networks (WLANs), as a complementary technique. A WLAN is characterized by a number of access points (APs), which are devices that allow wireless communication based on the IEEE 802.11 standard and are widespread indoors for Internet access. Since the power-sensing function is available in every WLAN device, localization using the received signal strength (RSS) is a relatively cost-effective solution.
Commonly, the procedure for estimating the position using RSS measurements is based on a two-step method known as location fingerprinting. Fingerprinting-based positioning is divided into an offline and an online phase.

1. In the offline phase (also known as the training phase), the RSS from multiple APs at different points in the building is collected by site-surveying and stored in a fingerprint database, called a radio map. More specifically, a number $N$ of survey locations in the site of interest are chosen, and for each location a number $T$ of RSS measurements is collected and stored to constitute the fingerprint of that location. For each location, the sample mean and the sample variance of the $T$ measurements are computed.
Figure 1.1. Example of a building map with the plotted survey locations.
2. In the online phase, the client's position is inferred by comparing the measured RSS with the previously stored measurements, and the position estimate is a combination of the locations whose fingerprints most closely match the observation. This approach can be viewed as an application of pattern recognition.
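As a concrete sketch of the offline phase described above, the radio map can be summarized by per-location sample statistics. The layout below (a dict keyed by location index, each entry an L × T array of readings) is only an illustrative assumption, not the format used in the thesis:

```python
import numpy as np

def build_radio_map(rss):
    """Offline phase sketch: summarize the T survey measurements of
    each location by their sample mean and sample variance.

    rss: dict mapping location index -> (L x T) array of RSS readings
         from L access points at T time instants (hypothetical layout).
    """
    radio_map = {}
    for loc, F in rss.items():
        F = np.asarray(F, dtype=float)
        radio_map[loc] = {
            "mean": F.mean(axis=1),        # sample mean over the T readings
            "var": F.var(axis=1, ddof=1),  # unbiased sample variance
        }
    return radio_map
```

The online phase then only needs these summaries, not the raw measurements, which keeps the database small.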
There are four key challenges in fingerprinting:

1. generation of fingerprints,
2. preprocessing for reducing computational complexity and enhancing accuracy,
3. selection of APs for use in positioning, and
4. estimation of the distance between a new RSS observation and the fingerprints.
When the positioning algorithm is performed on handheld devices, extra care should be taken due to their constrained resources; clearly, using all available APs increases the computational complexity of the algorithm. For this reason, an AP selection method is needed. The recent work of Kushki et al. [2] offers a real-time AP selection technique that minimizes the correlation between the selected APs, in order to reduce the complexity and ensure coverage.
The contribution of this thesis is twofold:
• the initial development of a new algorithm based on the separating ellipsoids method, in order to obtain a better estimate of the user position compared to the previous methods;

• the application of the A* algorithm, in order to take into account the previous positions of the user and possible obstacles, such as walls, and to reduce the computational complexity of the positioning algorithm.
The second chapter of this report explains the mathematical problem of wireless positioning and gives a brief overview of the previous work by Kushki et al. [2] and by Fang et al. [3]. The third and fourth chapters concern the theory of the separating ellipsoids method and the application of this method to an initial dataset from the Entré mall in Malmö. The fifth chapter explains the A* algorithm and how it is used in the context of wireless positioning. Finally, the sixth chapter presents the results obtained from the analyses carried out.
Chapter 2
PROBLEM EXPLANATION AND PREVIOUS WORK
2.1 Notation

The following notation will be used throughout the report:

• $L$: number of APs
• $N$: number of survey locations in the radio map
• $\mathbf{p}$: point in 2-D Cartesian space
• $\mathbf{r}$: measured RSS vector
• $\mathbf{F}(\mathbf{p}_i)$: fingerprint record at the $i$th location of the radio map
• $\mathbf{A}$: Multiple Discriminant Analysis (MDA) projection matrix
• $D$: number of discriminative components (DCs) generated after the MDA projection
• $\mathbf{q}$: RSS vector projected by the projection matrix $\mathbf{A}$
• $\tilde{\mathbf{F}}(\mathbf{p}_i)$: projected fingerprint record at the $i$th location of the radio map
• $\Phi_i$: computed ellipsoid for location $i$
• $Dist(i, j)$: real distance between location $i$ and location $j$
2.2 Problem explanation
One of the fundamental problems in location fingerprinting is to produce a position estimate, using training information on a discrete grid of training points, when a new RSS observation is received. That is, we seek a function $f$ such that

$$
f : \mathbb{R}^L \times \underbrace{\mathbb{R}^2 \times \dots \times \mathbb{R}^2}_{N} \to \mathbb{R}^2
$$
$$
f : (\mathbf{r}, \mathbf{p}_1, \dots, \mathbf{p}_N) \mapsto \hat{\mathbf{p}} \tag{2.2.1}
$$

where $\mathbf{p}_1, \dots, \mathbf{p}_N$ are the Cartesian coordinates $(x, y)$ of the $N$ survey locations with respect to a predefined reference point $(0, 0)$. If the function $f$ is restricted to the class of linear functions, the problem is reduced to determining a set of weights such that

$$
\hat{\mathbf{p}} = \sum_{i=1}^{N} w(\mathbf{r}, \mathbf{F}(\mathbf{p}_i))\,\mathbf{p}_i, \tag{2.2.2}
$$

where $\mathbf{F}(\mathbf{p}_i) = [\mathbf{r}_i(1), \dots, \mathbf{r}_i(T)]$ is an $L \times T$ matrix defined as

$$
\mathbf{F}(\mathbf{p}_i) =
\begin{pmatrix}
r_i^1(1) & \dots & r_i^1(T) \\
\vdots & \ddots & \vdots \\
r_i^L(1) & \dots & r_i^L(T)
\end{pmatrix} \tag{2.2.3}
$$

The columns of the fingerprint matrix are RSS vectors $\mathbf{r}_i(t) = [r_i^1(t), \dots, r_i^L(t)]^T$ that contain the readings from the $L$ APs at time $t$ in the $i$th survey location $\mathbf{p}_i$. For convenience, the weights $w(\mathbf{r}, \mathbf{F}(\mathbf{p}_i))$ will also be denoted by $w_i$.
2.3 Properties of the weight functions $w_i$

The weights $w_i$ in the estimation function (2.2.2) are required to be decreasing functions of the distance between an observation vector and the training records. That is, survey points whose training records closely match the observation should receive a higher weight. In particular, the functions $w(\mathbf{r}, \mathbf{F}(\mathbf{p}_i))$ should satisfy the following properties:

1. $\sum_{i=1}^{N} w(\mathbf{r}, \mathbf{F}(\mathbf{p}_i)) = 1$, so that the estimated position belongs to the convex hull defined by the set of survey positions. This can be achieved by including a normalization term in (2.2.2).

2. $w(\mathbf{r}, \mathbf{F}(\mathbf{p}_i))$ is a monotonically decreasing function in both arguments with respect to a distance measure $d(\mathbf{r}, \mathbf{F}(\mathbf{p}_i))$:

$$
d(\mathbf{r}, \mathbf{F}(\mathbf{p}_i)) \ge d(\mathbf{r}', \mathbf{F}(\mathbf{p}_i)) \implies w(\mathbf{r}, \mathbf{F}(\mathbf{p}_i)) \le w(\mathbf{r}', \mathbf{F}(\mathbf{p}_i)), \tag{2.3.1}
$$
$$
d(\mathbf{r}, \mathbf{F}(\mathbf{p}_i)) \ge d(\mathbf{r}, \mathbf{F}(\mathbf{p}_j)) \implies w(\mathbf{r}, \mathbf{F}(\mathbf{p}_i)) \le w(\mathbf{r}, \mathbf{F}(\mathbf{p}_j)). \tag{2.3.2}
$$

The inequality (2.3.1) means that, if an RSS measurement $\mathbf{r}'$ is closer than another RSS measurement $\mathbf{r}$ to the training records of a location $i$, then the survey point $i$ should receive a higher weight for $\mathbf{r}'$ than for $\mathbf{r}$.
One drawback is that the Gaussian kernel depends linearly on the number $N$ of survey locations.
2.5 A projection-based method
The method proposed by Fang et al. [3] is based on the calculation of a projection matrix $\mathbf{A}$ that maps the RSS measurements into a feature space of lower dimension, where they are uncorrelated, in order to take into account only the most discriminative information.

More specifically, when AP selection techniques are applied to reduce the computational cost, an additional binary decision vector $\mathbf{v} = [0, 1, 0, \dots, 0]^T \in \mathbb{R}^L$ is required to indicate which APs are retained. If the number of non-zero components is $D$, the dimension of the RSS measurement $\mathbf{r}$ is reduced from $L$ to $D$.

Unlike the zero-one weighting (binary decision) in the selection of APs, the projection-based system reduces the dimension from $L$ to $D$ by combining APs with a discriminative projection $\mathbf{A}$. The projection matrix $\mathbf{A}$ and the number $D$ are determined by Multiple Discriminant Analysis (MDA).

The MDA criterion has been widely applied to the problem of multiple-class classification. In our context, the classes can be viewed as the training measurements at the different reference locations of the fingerprinting problem. The objective of MDA is to find the components that are useful for discriminating between them. After the MDA projection, the generated components, named discriminative components (DCs), carry the most discriminative information, ranked by information quantity. Assuming that $\mathbf{r} \in \mathbb{R}^L$ is the RSS measurement, the projection-based system extracts the required DCs as follows:

$$
\mathbf{q} = \mathbf{A}^T \mathbf{r} \tag{2.5.1}
$$

where $\mathbf{q} \in \mathbb{R}^D$ and $\mathbf{A}_{L \times D}$ is the MDA-optimized projection matrix. The value of $D$ represents the number of DCs required by the location system, and it can be determined by a system threshold according to the application.
In classical MDA, two scatter matrices, called the between-class ($\mathbf{S}_B$) and within-class ($\mathbf{S}_W$) matrices, are defined to quantify the quality of the projection, as follows:

$$
\mathbf{S}_W = \sum_{i=1}^{N} \sum_{t=1}^{T} (\mathbf{r}_i(t) - \bar{\mathbf{u}}_i)(\mathbf{r}_i(t) - \bar{\mathbf{u}}_i)^T \tag{2.5.2}
$$

$$
\mathbf{S}_B = \sum_{i=1}^{N} T\,(\bar{\mathbf{u}}_i - \bar{\mathbf{u}})(\bar{\mathbf{u}}_i - \bar{\mathbf{u}})^T \tag{2.5.3}
$$

where $\bar{\mathbf{u}}_i = \frac{1}{T}\sum_{t=1}^{T}\mathbf{r}_i(t)$ is the mean of the training measurements at the $i$th location and $\bar{\mathbf{u}} = \frac{1}{NT}\sum_{i=1}^{N}\sum_{t=1}^{T}\mathbf{r}_i(t)$ is the global mean of all the measurements.

$\mathbf{S}_W$ measures the closeness of the samples within the locations, whereas $\mathbf{S}_B$ measures the separation between locations. An optimal projection would maximize the between-class measure and minimize the within-class measure. One way to achieve this result is to maximize the following ratio:

$$
\hat{\mathbf{A}}_{MDA} = \arg\max_{\mathbf{A}} \frac{\left|\mathbf{A}^T \mathbf{S}_B \mathbf{A}\right|}{\left|\mathbf{A}^T \mathbf{S}_W \mathbf{A}\right|} \tag{2.5.4}
$$

Figure 2.1. In this figure it is possible to observe an example of the projected RSS measurement space for three different locations (three DCs).

The numerator of (2.5.4) measures the separation among different locations, that is, the RSS spatial separation (between-class), whereas the denominator measures the compactness of each location, that is, the RSS temporal variation (within-class). As a result, the optimal weighting matrix $\mathbf{A}$ satisfying (2.5.4) leads to the best separation between different reference locations, because the discrimination and the compactness are maximized at the same time. In this way, the generated DCs in the projected space have low temporal variation but high spatial separation, providing as much discriminating information as possible. The dimension of the projected space (the number of DCs) can be determined by a system threshold according to the application.
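The criterion (2.5.4) can be sketched numerically: the standard closed-form MDA solution takes the top-$D$ eigenvectors of $\mathbf{S}_W^{-1}\mathbf{S}_B$, which maximizes the determinant ratio. The 3-D array layout below is an assumption made for illustration:

```python
import numpy as np

def mda_projection(R, D):
    """Sketch of the MDA projection (eqs. 2.5.2-2.5.4).

    R: (N x L x T) array of training RSS, N locations, L APs, T samples.
    Returns an (L x D) matrix whose columns are the top-D eigenvectors
    of S_W^{-1} S_B (the standard closed-form MDA solution).
    """
    N, L, T = R.shape
    loc_means = R.mean(axis=2)            # per-location means (N x L)
    grand_mean = R.mean(axis=(0, 2))      # global mean over all N*T samples
    # within-class scatter S_W (eq. 2.5.2)
    X = R - loc_means[:, :, None]
    S_W = np.einsum('ilt,imt->lm', X, X)
    # between-class scatter S_B (eq. 2.5.3)
    Y = loc_means - grand_mean
    S_B = T * (Y.T @ Y)
    # maximizing |A'S_B A| / |A'S_W A| leads to the eigenvectors of S_W^{-1} S_B
    S_W = S_W + 1e-9 * np.eye(L)          # regularize a possibly singular S_W
    vals, vecs = np.linalg.eig(np.linalg.solve(S_W, S_B))
    top = np.argsort(vals.real)[::-1][:D]
    return vecs[:, top].real
```

The small ridge term added to $\mathbf{S}_W$ is a practical detail (not discussed in the text) that keeps the solve well-posed when $T$ is small relative to $L$.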
Once the projection matrix $\mathbf{A}$ and the number of desired DCs, $D$, have been obtained, a probabilistic approach is used to derive the weighting functions $w_i$.

First, the observed RSS signal $\mathbf{r}$ and the fingerprint matrix $\mathbf{F}(\mathbf{p}_i)$ are projected by the projection matrix $\mathbf{A}$:

$$
\mathbf{q} = \mathbf{A}^T \mathbf{r} \tag{2.5.5}
$$

$$
\tilde{\mathbf{F}}(\mathbf{p}_i) = [\mathbf{A}^T \mathbf{r}_i(1), \dots, \mathbf{A}^T \mathbf{r}_i(T)] \tag{2.5.6}
$$
$$
= \begin{pmatrix}
q_i^1(1) & \dots & q_i^1(T) \\
\vdots & \ddots & \vdots \\
q_i^D(1) & \dots & q_i^D(T)
\end{pmatrix} \tag{2.5.7}
$$

For $i = 1, \dots, N$, we have:

$$
w_i = \frac{P(\mathbf{q} \mid \tilde{\mathbf{F}}(\mathbf{p}_i))}{\sum_{j=1}^{N} P(\mathbf{q} \mid \tilde{\mathbf{F}}(\mathbf{p}_j))} \tag{2.5.8}
$$

where $P(\mathbf{q} \mid \tilde{\mathbf{F}}(\mathbf{p}_i))$ indicates the likelihood value of the location $\mathbf{p}_i$, given the observation $\mathbf{q} = [q^1, \dots, q^D]^T$. The likelihood produced by the Gaussian model is formulated as:

$$
P(\mathbf{q} \mid \tilde{\mathbf{F}}(\mathbf{p}_i)) = \prod_{l=1}^{D} \frac{1}{\sqrt{2\pi}\,\hat{\sigma}_i(l)} \exp\!\left(-\frac{(q^l - \hat{u}_i(l))^2}{2\hat{\sigma}_i^2(l)}\right) \tag{2.5.9}
$$

where $\hat{u}_i(l)$ and $\hat{\sigma}_i(l)$ are calculated in the following way:

$$
\hat{u}_i(l) = \frac{1}{T}\sum_{t=1}^{T} q_i^l(t) \tag{2.5.10}
$$

$$
\hat{\sigma}_i^2(l) = \frac{1}{T}\sum_{t=1}^{T} \left(q_i^l(t) - \hat{u}_i(l)\right)^2 \tag{2.5.11}
$$

The final position can then be estimated by:

$$
\hat{\mathbf{p}} = \sum_{i=1}^{N} w_i\, \mathbf{p}_i = \sum_{i=1}^{N} \frac{P(\mathbf{q} \mid \tilde{\mathbf{F}}(\mathbf{p}_i))}{\sum_{j=1}^{N} P(\mathbf{q} \mid \tilde{\mathbf{F}}(\mathbf{p}_j))}\, \mathbf{p}_i \tag{2.5.12}
$$
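A sketch of the likelihood weighting (2.5.8)-(2.5.12), working in the log domain for numerical stability (an implementation detail not discussed in the text):

```python
import numpy as np

def gaussian_weights(q, means, variances):
    """Per-location likelihoods under independent Gaussians per DC
    (eq. 2.5.9) and the normalized weights w_i (eq. 2.5.8).

    q:         length-D projected observation
    means:     (N x D) per-location DC means u_i(l)
    variances: (N x D) per-location DC variances sigma_i^2(l)
    """
    q = np.asarray(q, dtype=float)
    m = np.asarray(means, dtype=float)
    v = np.asarray(variances, dtype=float)
    # log-likelihoods avoid the underflow of multiplying D small densities
    log_lik = -0.5 * np.sum(np.log(2 * np.pi * v) + (q - m) ** 2 / v, axis=1)
    w = np.exp(log_lik - log_lik.max())   # shift before exponentiating
    return w / w.sum()

def position_estimate(q, means, variances, points):
    """p_hat = sum_i w_i p_i (eq. 2.5.12)."""
    return gaussian_weights(q, means, variances) @ np.asarray(points, float)
```

The subtraction of the maximum log-likelihood before exponentiating leaves the normalized weights unchanged but prevents all likelihoods from underflowing to zero when $D$ is large.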
Chapter 3
A NEW APPROACH
The way of calculating the weights $w_i$ using the Gaussian likelihood presented in the previous chapter can be seen as a problem of statistical learning. In other words, given a number $K$ of classes (in our specific case the $N$ survey locations) and a new point $x$ (in our context a new RSS measurement $\mathbf{r}$), for each class $k = 1, \dots, K$ the likelihood that the new point $x$ belongs to the class $k$ is calculated. We then search for the maximum of these likelihoods, and the class that realizes the maximum should be the class to which the new point $x$ belongs. In this context, the method of separating ellipsoids [4] might constitute an improved alternative.
3.1 Theory
Suppose that one has a number $K$ of distinct classes and a number $m$ of points $(x_i, y_i)$, where $y_i$ denotes the class to which the $i$th point belongs. Fixing a class $k$, with $k = 1, \dots, K$, one would like to enclose the points of this class in an ellipsoid, in order to obtain the best separation ratio from all the points of the other classes. A way to express this is to ask for two separating ellipsoids that share the same center and axis directions, but where the second one is larger by a factor $\rho_k$, subsequently called the separation ratio. The inner ellipsoid encloses all labeled points $x_i$ with $y_i = k$, while all the remaining points lie outside the outer one. The higher the value of $\rho_k$, the better the separation. In Figure 3.1 it is possible to observe 10 randomly generated classes, each with 50 points; for these classes, the separating ellipsoids have been computed and the separation ratios $\rho_k$, with $k = 1, \dots, 10$, have been maximized.

An ellipsoid in $\mathbb{R}^n$ can be parametrized by its center $c$ and a symmetric, positive semidefinite matrix $P$ that determines its shape and size:

$$
\varepsilon_n(c, P) = \{x \in \mathbb{R}^n \mid (x - c)' P (x - c) \le 1\}, \tag{3.1.1}
$$

where $'$ denotes the transpose. The ellipsoid is degenerate if $P$ is singular. A scaled concentric ellipsoid with the same shape can be obtained by dividing the matrix $P$ by a scalar $\rho > 0$, and the scaled ellipsoid has the equivalent representation

$$
\varepsilon_n(c, P/\rho) = \{x \in \mathbb{R}^n \mid (x - c)' P (x - c) \le \rho\}, \tag{3.1.2}
$$
Figure 3.1. In this figure it is possible to observe 10 randomly generated classes, each with 50 points. For these classes, the separating ellipsoids have been computed and the separation ratios $\rho_k$, with $k = 1, \dots, 10$, have been maximized.
where the ratio $\sqrt{\rho}$ is the ratio between the lengths of the corresponding semimajor axes of the two concentric ellipsoids. Suppose we are given the labeled training set $\{(x_i, y_i)\}_{i=1}^{m}$. For each class $k \in \{1, \dots, K\}$, the associated problem is to find the ellipsoids $\varepsilon_n(c_k, P_k)$ and $\varepsilon_n(c_k, P_k/\rho_k)$ such that $\rho_k$ is maximized while satisfying the constraints

$$
x_i \in \varepsilon_n(c_k, P_k), \quad \forall i : y_i = k
$$
$$
x_i \notin \varepsilon_n\!\left(c_k, \frac{P_k}{\rho_k}\right), \quad \forall i : y_i \ne k. \tag{3.1.3}
$$

In other words, the problem is

$$
\begin{aligned}
\text{maximize} \quad & \rho_k \\
\text{subject to} \quad & (x_i - c_k)' P_k (x_i - c_k) \le 1, && \forall i : y_i = k \\
& (x_i - c_k)' P_k (x_i - c_k) \ge \rho_k, && \forall i : y_i \ne k \\
& P_k \succeq 0 \\
& \rho_k \ge 1,
\end{aligned} \tag{3.1.4}
$$

where the optimization variables are $P_k$, $c_k$, $\rho_k$, and $P_k \succeq 0$ denotes the constraint that $P_k$ must be a positive semidefinite matrix. This problem is always feasible, and the patterns of class $k$ are separable from all the others using ellipsoids if and only if the optimal $\rho_k > 1$.
To handle also the case in which the patterns are not strictly separable using ellipsoids, the idea of soft margins is used: slack variables $u_i$ are introduced for each of the pattern inclusion or exclusion constraints, and a weighted penalty term is added to the objective function. The new problem is then formulated in the following way:

$$
\begin{aligned}
\text{maximize} \quad & \rho - \gamma \sum_i u_i \\
\text{subject to} \quad & (x_i - c)' P (x_i - c) \le 1 + u_i, && \forall i : y_i = k \\
& (x_i - c)' P (x_i - c) \ge \rho - u_i, && \forall i : y_i \ne k \\
& P \succeq 0, \quad u_i \ge 0.
\end{aligned} \tag{3.1.5}
$$

Here, $\gamma$ is a positive weighting parameter. Both problems (3.1.4) and (3.1.5) are nonconvex, but they can be turned into convex optimization problems (namely SDPs) using a homogeneous embedding technique.
3.2 Homogeneous Embedding
Any ellipsoid in $\mathbb{R}^n$ can be viewed as the intersection of a homogeneous ellipsoid (one centered at the origin) in $\mathbb{R}^{n+1}$ and the hyperplane

$$
H = \{z \in \mathbb{R}^{n+1} \mid z = (x, 1),\ x \in \mathbb{R}^n\}. \tag{3.2.1}
$$

A homogeneous ellipsoid in $\mathbb{R}^{n+1}$ can be expressed as

$$
\varepsilon_{n+1}(0, \Phi) = \{z \in \mathbb{R}^{n+1} \mid z' \Phi z \le 1\}, \tag{3.2.2}
$$

where $\Phi$ is a symmetric positive semidefinite matrix. To find the intersection of $\varepsilon_{n+1}(0, \Phi)$ with the hyperplane $H$, the matrix $\Phi$ is partitioned as follows:

$$
\Phi = \begin{pmatrix} P & q \\ q' & r \end{pmatrix}, \tag{3.2.3}
$$

where $P \in \mathbb{R}^{n \times n}$, $q \in \mathbb{R}^n$, and $r \in \mathbb{R}$. If $z = (x, 1)$, then

$$
z' \Phi z \le 1 \iff x' P x + 2 q' x + r \le 1. \tag{3.2.4}
$$

Now let

$$
c = -P^{-1} q, \qquad \delta = r - q' P^{-1} q, \tag{3.2.5}
$$

then

$$
z' \Phi z \le 1 \iff (x - c)' P (x - c) + \delta \le 1. \tag{3.2.6}
$$
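The transformation (3.2.3)-(3.2.6) is easy to verify numerically; the following sketch recovers $(P, c, \delta)$ from a homogeneous $\Phi$ and checks the identity on a random point:

```python
import numpy as np

def ellipsoid_from_phi(Phi):
    """Recover the center c, shape P, and offset delta of the slice of the
    homogeneous ellipsoid z'Phi z <= 1 with the plane z = (x, 1) (eq. 3.2.5).
    """
    n = Phi.shape[0] - 1
    P, q, r = Phi[:n, :n], Phi[:n, n], Phi[n, n]
    c = -np.linalg.solve(P, q)               # c = -P^{-1} q
    delta = r - q @ np.linalg.solve(P, q)    # delta = r - q' P^{-1} q
    return P, c, delta

# sanity check of eq. (3.2.6): z'Phi z == (x - c)'P(x - c) + delta
rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))
Phi = B @ B.T + 3 * np.eye(3)                # random symmetric positive definite
P, c, delta = ellipsoid_from_phi(Phi)
x = rng.standard_normal(2)
z = np.append(x, 1.0)
assert np.isclose(z @ Phi @ z, (x - c) @ P @ (x - c) + delta)
```

Expanding $(x - c)'P(x - c) + \delta$ with $c = -P^{-1}q$ reproduces $x'Px + 2q'x + r$ term by term, which is what the assertion confirms.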
• If the patterns are separable, then the optimal solution to (3.3.3) is always a canonical embedding, i.e., $\delta = 0$.

• If the patterns are nonseparable, then the optimal solution to (3.3.3) always has $\rho = 1$, and $\Phi$ is degenerate such that $\delta = 1$.
Similarly, the version with slack variables can be formulated as an SDP problem:

$$
\begin{aligned}
\text{maximize} \quad & \rho - \gamma \left( \sum_i u_i + \sum_j v_j \right) \\
\text{subject to} \quad & z_i' \Phi z_i \le 1 + u_i, && i = 1, \dots, m_1 \\
& z_j' \Phi z_j \ge \rho - v_j, && j = 1, \dots, m_2,
\end{aligned} \tag{3.3.5}
$$

where the optimization variables are $\Phi$, $\rho$, and the slack variables $u_i$ and $v_j$, while $\gamma$ is a weighting parameter. Once the SDP problem (3.3.5) has been solved, the transformations (3.2.4) and (3.2.5) can be used to find the separating ellipsoids in $\mathbb{R}^n$.

Instead of using the weighting parameter $\gamma$, the problem (3.3.5) can be parametrized by the ratio $\rho$. Switching to a notation that is explicit for multiclass problems, given any fixed $\rho \ge 1$, for each class $k \in \{1, \dots, K\}$ the problem to solve is:

$$
\begin{aligned}
\text{minimize} \quad & \sum_{i=1}^{m} \xi_{ki} \\
\text{subject to} \quad & z_i' \Phi_k z_i \le 1 + \xi_{ki}, && \forall i : y_i = k \\
& z_i' \Phi_k z_i \ge \rho - \xi_{ki}, && \forall i : y_i \ne k \\
& \Phi_k \succeq 0, \\
& \xi_{ki} \ge 0, \quad i = 1, \dots, m,
\end{aligned} \tag{3.3.6}
$$

where $z_i = (x_i, 1)$, and the optimization variables are $\Phi_k$ and $\xi_{ki}$.

In Figures 3.2 and 3.3, the problem (3.3.6) has been solved for ten randomly generated classes, each consisting of 50 points, using values of $\rho$ equal to 1.5 and 2, respectively.
3.4 Classification Rule

After solving, in the training phase, the problem (3.3.6) for each class $k \in \{1, \dots, K\}$, let $\Phi_k^*$, with $k = 1, \dots, K$, be the optimal solutions obtained. Then, given a new data point $x \in \mathbb{R}^n$, we first let $z = (x, 1) \in \mathbb{R}^{n+1}$ and compute

$$
d_k = z' \Phi_k^* z, \qquad k = 1, \dots, K, \tag{3.4.1}
$$

Figure 3.2. In this figure the problem (3.3.6) has been solved for ten randomly generated classes, each consisting of 50 points, using a value of $\rho$ equal to 1.5.

Figure 3.3. In this figure the problem (3.3.6) has been solved for ten randomly generated classes, each consisting of 50 points, using a value of $\rho$ equal to 2.

then label the data with the class

$$
\hat{k} = \arg\min_k\, z' \Phi_k^* z. \tag{3.4.2}
$$

The quantity $d_k = z' \Phi_k^* z$, for each $k = 1, \dots, K$, can be seen as a distance (ellipsoid distance) of the point $z \in \mathbb{R}^{n+1}$ from the class $k$. The class $\hat{k}$ that realizes the minimum of this distance is the class that best represents the point $z$.
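The rule (3.4.1)-(3.4.2) reduces to a few lines once the optimal matrices are available; a minimal sketch:

```python
import numpy as np

def classify(x, Phis):
    """Ellipsoid-distance classifier (eqs. 3.4.1-3.4.2): label x with the
    class k whose d_k = z' Phi_k z is smallest, where z = (x, 1).

    Phis: list of (n+1) x (n+1) optimal matrices, one per class.
    Returns the winning class index and the full distance vector.
    """
    z = np.append(np.asarray(x, dtype=float), 1.0)
    d = np.array([z @ Phi @ z for Phi in Phis])
    return int(np.argmin(d)), d
```

In the wireless-positioning setting the distance vector $d$ can also feed a soft weighting of nearby locations rather than a hard label.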
The next chapters will deal with the application of this concept to the problem of wireless positioning, as an alternative to the Gaussian method.
Chapter 4
DATA
4.1 Plotting the locations on the maps
The first hurdle to overcome concerns the plotting of the survey locations on the map. Given the building map image, the GPS coordinates of both the top-left (NW) and bottom-right (SE) corners of the map, and the GPS coordinates of the locations, we need to remap these coordinates to the ones to be used on the display.

For each pair of GPS coordinates $(latitude(\cdot), longitude(\cdot))$ we set:

$$
x_{GPS}(\cdot) = longitude(\cdot) \tag{4.1.1}
$$
$$
y_{GPS}(\cdot) = latitude(\cdot) \tag{4.1.2}
$$

First, the GPS coordinates $(x_{GPS}(\cdot), y_{GPS}(\cdot))$ must be converted to a regular coordinate system in which the $x$ unit distance is no longer dependent on the $y$ scale value. To do this, it is necessary to fix a common latitude $y_{Com}$, which should be the same for all the points used in the calculation. The needed transformation is the following:

$$
x_{uGPS} = x_{GPS} \cdot \cos\!\left(\frac{2\pi}{360}\, y_{Com}\right) \tag{4.1.3}
$$
$$
y_{uGPS} = y_{GPS} \tag{4.1.4}
$$

Applying this transformation, the uniformed GPS coordinates of the $NW$ and $SE$ corners are found.
Let now $P$ be a survey location with uniformed GPS coordinates $(x_{uGPS}(P), y_{uGPS}(P))$. By a translation, one puts the origin in the $SE$ corner and finds the new coordinates of the $NW$ corner and of the location $P$ as

$$
x_{uGPSt}(P) = x_{uGPS}(P) - x_{uGPS}(SE) \tag{4.1.5}
$$
$$
y_{uGPSt}(P) = y_{uGPS}(P) - y_{uGPS}(SE) \tag{4.1.6}
$$
$$
x_{uGPSt}(NW) = x_{uGPS}(NW) - x_{uGPS}(SE) \tag{4.1.7}
$$
$$
y_{uGPSt}(NW) = y_{uGPS}(NW) - y_{uGPS}(SE) \tag{4.1.8}
$$
$$
x_{uGPSt}(SE) = x_{uGPS}(SE) - x_{uGPS}(SE) = 0 \tag{4.1.9}
$$
$$
y_{uGPSt}(SE) = y_{uGPS}(SE) - y_{uGPS}(SE) = 0 \tag{4.1.10}
$$

Figure 4.1. A rotation of angle $\beta - \alpha$ is needed so that the rectangle becomes straight.
The next steps are suggested by the scheme in Figure 4.1. An anticlockwise rotation of angle $\alpha - \beta$ is applied to the points $NW$ and $P$, so that the $NE$ and $SW$ corners lie on the $y$-axis and on the $x$-axis, respectively. It is then possible to use these new coordinates to plot the locations on the image of the map. The angle $\beta$ is computed using the image of the map. In particular, denoting by $w$ the width and by $h$ the height of the map image in pixels, the length of the diagonal $d$ of the image in pixels is given by:

$$
d = \sqrt{w^2 + h^2} \tag{4.1.12}
$$
Figure 4.2. After the rotation we are able to compute the new coordinates for the $NE$ and $SW$ corners.
The angle $\beta$ is then given by:

$$
\beta = \arcsin\!\left(\frac{w}{d}\right) \tag{4.1.13}
$$

The angle $\alpha$ is instead given by:

$$
\alpha = \arctan\!\left(\frac{y_{uGPS}(NW) - y_{uGPS}(SE)}{x_{uGPS}(SE) - x_{uGPS}(NW)}\right) \tag{4.1.14}
$$

An anticlockwise rotation is applied to the points $P$ and $NW$ using the following matrix:

$$
A = \begin{pmatrix} \cos(\alpha - \beta) & -\sin(\alpha - \beta) \\ \sin(\alpha - \beta) & \cos(\alpha - \beta) \end{pmatrix} \tag{4.1.15}
$$

The new coordinates of the $NW$ corner and of the location $P$ are then given by:

$$
\begin{pmatrix} x_{uGPSta}(NW) \\ y_{uGPSta}(NW) \end{pmatrix} = A \begin{pmatrix} x_{uGPSt}(NW) \\ y_{uGPSt}(NW) \end{pmatrix} \tag{4.1.16}
$$

$$
\begin{pmatrix} x_{uGPSta}(P) \\ y_{uGPSta}(P) \end{pmatrix} = A \begin{pmatrix} x_{uGPSt}(P) \\ y_{uGPSt}(P) \end{pmatrix} \tag{4.1.17}
$$
Figure 4.3. Change of coordinates of the location $P$ to the domain $[0, 1] \times [0, 1]$. The new coordinates $u(P)$ and $v(P)$ represent, respectively, the fractions of the width $w$ and of the height $h$ of the map image at which the corresponding pixel lies.
In this new coordinate system it is now easy to find the coordinates of the $NE$ and $SW$ corners, as shown in Figure 4.2. In particular:

$$
\begin{pmatrix} x_{uGPSta}(NE) \\ y_{uGPSta}(NE) \end{pmatrix} = \begin{pmatrix} 0 \\ y_{uGPSta}(NW) \end{pmatrix} \tag{4.1.18}
$$

$$
\begin{pmatrix} x_{uGPSta}(SW) \\ y_{uGPSta}(SW) \end{pmatrix} = \begin{pmatrix} x_{uGPSta}(NW) \\ 0 \end{pmatrix} \tag{4.1.19}
$$

A final translation of the points is necessary to put the origin in the $SW$ corner and obtain new positive coordinates $(x_{uGPStat}(\cdot), y_{uGPStat}(\cdot))$ for the corners $NE$, $NW$, $SE$ and the point $P$.

At this point we calculate:

$$
u(P) = \frac{x_{uGPStat}(P)}{x_{uGPStat}(SE)} \tag{4.1.21}
$$

$$
v(P) = \frac{y_{uGPStat}(P)}{y_{uGPStat}(NW)} \tag{4.1.22}
$$
Basically, as shown in Figure 4.3, the initial domain is restricted to $[0, 1] \times [0, 1]$, and $u(P)$, $v(P)$ are the coordinates of the location $P$ in the new domain; they represent, respectively, the fractions of the width $w$ and of the height $h$ of the map image at which the corresponding pixel lies. Then, in order to find the corresponding pixels $(px, py)$ on the map:

$$
px = round(w \cdot u) \tag{4.1.23}
$$
$$
py = round(h \cdot (1 - v)) \tag{4.1.24}
$$

where $round$ means rounding to the nearest integer.
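The whole pipeline (4.1.1)-(4.1.24) can be condensed into one function. The sketch below follows the equations directly, taking $y_{Com}$ to be the NW latitude (an assumption; any fixed common latitude works):

```python
import numpy as np

def gps_to_pixel(lat, lon, nw, se, w, h):
    """Map a GPS point to pixel coordinates on a w x h map image,
    following eqs. (4.1.1)-(4.1.24). nw and se are (lat, lon) tuples for
    the top-left and bottom-right corners; the common latitude y_Com is
    taken to be the NW latitude (an assumption, not fixed by the text).
    """
    scale = np.cos(2.0 * np.pi / 360.0 * nw[0])      # eqs. (4.1.3)-(4.1.4)
    uniform = lambda p: np.array([p[1] * scale, p[0]])
    P, NW, SE = uniform((lat, lon)), uniform(nw), uniform(se)
    P, NW = P - SE, NW - SE                          # origin at the SE corner
    beta = np.arcsin(w / np.hypot(w, h))             # eq. (4.1.13)
    alpha = np.arctan(NW[1] / (-NW[0]))              # eq. (4.1.14)
    t = alpha - beta
    R = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])          # eq. (4.1.15)
    P, NW = R @ P, R @ NW                            # anticlockwise rotation
    SW = np.array([NW[0], 0.0])                      # SW lands on the x-axis
    P, NW, SE = P - SW, NW - SW, -SW                 # final translation to SW
    u, v = P[0] / SE[0], P[1] / NW[1]                # eqs. (4.1.21)-(4.1.22)
    return int(np.rint(w * u)), int(np.rint(h * (1.0 - v)))
```

By construction, the NW corner always lands on the top-left pixel and the SE corner on the bottom-right pixel, whatever rotation the map image has.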
Figure 4.4. Map of the Entré shopping mall in Malmö with the 61 plotted locations.
4.2 Data

The data used to carry out the preliminary analyses were collected in the Entré shopping mall in Malmö. The dataset consists of:

• 70 access points (APs) for the WiFi signal.
• 13 access points (APs) for the GSM signal.
• 61 survey locations.
• 320 measurements of the WiFi signal in each of the 61 locations.
• 240 measurements of the GSM signal in each of the 61 locations.

The GSM signal is treated in the same way as the WiFi signal. Since the final results for the GSM measurements are not as good as those for the WiFi measurements, this report treats only the WiFi signal.

The first step is to project the WiFi measurements into a lower-dimensional space by a projection matrix $\mathbf{A}$, as described in Section 2.5, to reduce the number of APs from 70 to 13 DCs. Since we will perform this step on all the datasets we will deal with, from now on, to simplify the notation, we use the wording RSS measurements to denote the projected RSS measurements.

Furthermore, for each location $i$, with $i = 1, \dots, 61$, the signature vectors, that is, the mean vector and the standard deviation value of the RSS measurements, have been calculated.
We initially attempt to apply the method of separating ellipsoids to the training measurements of each survey location. In other words, fixing a location $i$, we want to construct the ellipsoid that contains the RSS measurements of the location $i$ and keeps outside all the measurements of the sites different from $i$. Given $\mathbf{q}_i(1), \dots, \mathbf{q}_i(T)$, the $T$ RSS measurements at the location $i$, and letting

$$
\mathbf{z}_i(j) = \begin{pmatrix} \mathbf{q}_i(j) \\ 1 \end{pmatrix}, \tag{4.2.1}
$$

the SDP problem to solve is the following:

$$
\begin{aligned}
\text{maximize} \quad & \rho \\
\text{subject to} \quad & \mathbf{z}_i(j)' \Phi_i\, \mathbf{z}_i(j) \le 1, && \forall j = 1, \dots, T \\
& \mathbf{z}_s(j)' \Phi_i\, \mathbf{z}_s(j) \ge \rho, && \forall s = 1, \dots, N,\ s \ne i,\ \forall j = 1, \dots, T \\
& \Phi_i \succeq 0, \\
& \rho > 1,
\end{aligned} \tag{4.2.2}
$$

where the optimization variables are $\Phi_i$ and $\rho$. As expected, the measurements are not strictly separable, and for this reason the version with all slack variables is used:

$$
\begin{aligned}
\text{minimize} \quad & \sum_{s=1}^{N} \sum_{j=1}^{T} \xi_{sj} \\
\text{subject to} \quad & \mathbf{z}_i(j)' \Phi_i\, \mathbf{z}_i(j) \le 1 + \xi_{ij}, && \forall j = 1, \dots, T \\
& \mathbf{z}_s(j)' \Phi_i\, \mathbf{z}_s(j) \ge \rho - \xi_{sj}, && \forall s = 1, \dots, N,\ s \ne i,\ \forall j = 1, \dots, T \\
& \Phi_i \succeq 0, \\
& \xi_{sj} \ge 0,
\end{aligned} \tag{4.2.3}
$$

where the optimization variables are $\Phi_i$ and all the slack variables $\xi_{sj}$. Regrettably, due to the large number of variables in this optimization problem, its solution requires a vast amount of computational power and memory allocation.
For this reason, in view of larger environments, alternative versions have been
formulated in order to decrease the number of variables and constraints to optimize.
4.3 Version with one slack per location

One may simplify the original problem by restricting the formulation to allow only one slack variable for each location. This approach differs from the version with all slack variables in that the points of each location all share the same slack variable. That is, let $i$ be a fixed survey location. The formulation of the problem is:

$$
\begin{aligned}
\text{minimize} \quad & \sum_{s=1}^{N} \xi_s \\
\text{subject to} \quad & \mathbf{z}_i(j)' \Phi_i\, \mathbf{z}_i(j) \le 1 + \xi_i, && \forall j = 1, \dots, T \\
& \mathbf{z}_s(j)' \Phi_i\, \mathbf{z}_s(j) \ge \rho - \xi_s, && \forall s = 1, \dots, N,\ s \ne i,\ \forall j = 1, \dots, T \\
& \Phi_i \succeq 0, \\
& \xi_s \ge 0, \quad \forall s = 1, \dots, N,
\end{aligned} \tag{4.3.1}
$$

where the optimization variables are $\Phi_i$ and the $N$ slack variables $\xi_s$. The efficiency in computing the ellipsoid improves substantially, since the number of slack variables to optimize is $N$, against $N \cdot T$ in the version with all slack variables. The main drawback arises when the classes are not separable: in this case the ellipsoids, even if they contain most of the points of their respective classes, overlap critically, and this can lead to a misclassification of the data.
4.4 Iterative version
As an alternative, instead of considering for each location all the $T$ measurements, one may initially solve the optimization problem (4.2.3) taking into account a lower number of measurements per location. Then, during the following iterations, one may add the same number of new measurements and solve the optimization problem again, using for the old points the values of the slack variables computed at the previous steps.

In particular, let $T_1$ be an integer such that $T_1 < T$ and $T$ is a multiple of $T_1$. At the first iteration, the version with all slack variables is applied, but with only $T_1$ points. That is, let $i$ be a fixed location; then:

$$
\begin{aligned}
\text{minimize} \quad & \sum_{s,j} \xi_{sj}^{(1)} \\
\text{subject to} \quad & \mathbf{z}_i(j)' \Phi_i^{(1)} \mathbf{z}_i(j) \le 1 + \xi_{ij}^{(1)}, && \forall j = 1, \dots, T_1 \\
& \mathbf{z}_s(j)' \Phi_i^{(1)} \mathbf{z}_s(j) \ge \rho - \xi_{sj}^{(1)}, && \forall s = 1, \dots, N,\ s \ne i,\ \forall j = 1, \dots, T_1 \\
& \Phi_i^{(1)} \succeq 0, \\
& \xi_{sj}^{(1)} \ge 0, \quad \forall s, \forall j,
\end{aligned} \tag{4.4.1}
$$
Figure 4.5. In this figure, the iterative version has been solved for ten randomly generated classes, each consisting of 50 points, using a value of $\rho$ equal to 1.1 and adding 25 points at each iteration.
where the optimization variables are Φ_l^{(1)} and the M · N_1 slack variables s_{nm}^{(1)}. From the second iteration on, N_1 points are added to each location. For the old points, the values of the slack variables computed at the previous steps are used. That is, at iteration (k), with k ≥ 2,
\[
\begin{aligned}
\underset{\Phi,\, s}{\text{minimize}} \quad & \sum s_{nm}^{(k)} \\
\text{subject to} \quad & \mathbf{z}_n'(l)\, \Phi_l^{(k)}\, \mathbf{z}_n(l) \le 1 + s_{nl}^{(k-1)}, \quad \forall n = 1, \dots, (k-1) \cdot N_1 \\
& \mathbf{z}_n'(l)\, \Phi_l^{(k)}\, \mathbf{z}_n(l) \le 1 + s_{nl}^{(k)}, \quad \forall n = (k-1) \cdot N_1 + 1, \dots, k \cdot N_1 \\
& \mathbf{z}_n'(m)\, \Phi_l^{(k)}\, \mathbf{z}_n(m) \ge \gamma - s_{nm}^{(k-1)}, \quad \forall n = 1, \dots, (k-1) \cdot N_1, \ \forall m = 1, \dots, M, \ m \ne l \\
& \mathbf{z}_n'(m)\, \Phi_l^{(k)}\, \mathbf{z}_n(m) \ge \gamma - s_{nm}^{(k)}, \quad \forall n = (k-1) \cdot N_1 + 1, \dots, k \cdot N_1, \ \forall m = 1, \dots, M, \ m \ne l \\
& \Phi_l^{(k)} \succeq 0, \\
& s_{nm}^{(k)} \ge 0, \quad \forall n, \ \forall m, & (4.4.3)
\end{aligned}
\]
where s_{nm}^{(k-1)} are the slack variables optimized at the previous step, while s_{nm}^{(k)} are the ones that correspond to the newly added points and must be optimized. After the last iteration, the last slack variables might be used to recompute the previous ones.
After testing this method on the simulated data, in which each class consists of 50 points, it has been noticed that if only two points are added at each iteration the ellipsoids are inaccurate: they are shifted with respect to the points that they should contain. Increasing the number of points added at each iteration, the obtained ellipsoids become almost equal to the ones computed with the all-slack version. In order to obtain accurate ellipsoids, the number of points N_1 should be greater than or equal to N/2. However, such a solution will still suffer from demanding memory allocation, similar to the original problem.
In Figure 4.5, the iterative version has been solved for ten randomly generated classes, each consisting of 50 points, using a value of γ equal to 1.1 and adding 25 points at each iteration.
4.5 Variance ellipsoids
A third option consists of building, after fixing a location l, a normally oriented ellipsoid (variance ellipsoid) that contains at least most of the measurements of the survey site l. For all the remaining locations, it is then possible to check how many measurements are inside this ellipsoid and to take into account, when computing the separating ellipsoid for the location l, only the locations m whose measurements mostly overlap the measurements of the site l.
In particular, a normally oriented ellipsoid in an n-dimensional space is defined by the equation
\[
\mathbf{x}^T A\, \mathbf{x} \le 1,
\]
where A is an n-dimensional diagonal matrix
\[
A = \begin{pmatrix} \lambda_1 & \dots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \dots & \lambda_n \end{pmatrix}
\]
and λ_i, for i = 1, …, n, are the eigenvalues of the matrix. The relation between the equatorial radii r_i, for i = 1, …, n, of the ellipsoid and the eigenvalues λ_i is given by
\[
r_i = \frac{1}{\sqrt{\lambda_i}}. \qquad (4.5.1)
\]
Since for each location l it is possible to calculate the mean vector of the RSS measurements and the vector of the standard deviations σ(l) = [σ_1(l), …, σ_D(l)]^T, it is also possible to construct the ellipsoid that contains most of the points of the site l. The matrix which defines the ellipsoid is given by:
\[
B(l) = C \cdot \begin{pmatrix} \frac{1}{\sigma_1^2(l)} & \dots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \dots & \frac{1}{\sigma_D^2(l)} \end{pmatrix} \qquad (4.5.2)
\]
where C is a chosen constant that depends on the number of DCs.
Once the ellipsoids have been computed for each location, we perform the following steps:
1. Fix a location l.
2. For each location m, with m = 1, …, M, count how many measurements of this location are inside the variance ellipsoid of the site l.
3. Repeat the two steps above for each location l with l = 1, …, M.
By doing this, it is possible to know, for each site l, which locations m have measurements that mostly overlap the measurements of the site l. Finally, by taking into consideration only these locations m when computing the separating ellipsoid for the location l, it is possible to reduce the number of constraints and variables to optimize. Regrettably, it can happen that far away locations have many measurements that overlap, or that many locations overlap each other. For this reason, larger buildings will still necessitate prohibitive memory requirements.
Figure 4.6. In this figure, it is possible to observe a set of 3-D points with the corresponding variance ellipsoid, computed as described in Section 4.5. As can be observed, almost all the points lie inside the ellipsoid.
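The construction of B(l) and the counting step above can be sketched in a few lines of numpy. Centering at the mean vector and the particular value of C are assumptions of this sketch (with C = 1/9, each equatorial radius r_d = 1/√(C/σ_d²) = 3σ_d, which covers most of a roughly Gaussian cloud).

```python
import numpy as np

def variance_ellipsoid(X, C=1/9):
    """Matrix B(l) of (4.5.2) for the measurements X (N x D) of one
    survey location. With C = 1/9 the equatorial radii are 3*sigma_d.
    """
    mu = X.mean(axis=0)                      # centre of the ellipsoid
    B = C * np.diag(1.0 / X.var(axis=0))     # C * diag(1/sigma_d^2)
    return mu, B

def count_inside(X, mu, B):
    """Count how many rows of X satisfy (x - mu)' B (x - mu) <= 1."""
    Y = X - mu
    return int(np.sum(np.einsum('nd,dk,nk->n', Y, B, Y) <= 1.0))
```

Running `count_inside` on the measurements of every other location m then gives the overlap counts used in steps 1-3 above.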
Chapter 5
THE A* ALGORITHM
5.1 The A* algorithm
In computer science, A* is a widely used algorithm for path-finding and graph traversal, the process of plotting an efficiently traversable path between points, called nodes. Noted for its performance and accuracy, the algorithm was first described by Peter Hart, Nils Nilsson and Bertram Raphael in 1968 [6]. It is an extension of Edsger Dijkstra's 1959 algorithm, and it achieves better performance (with respect to time) by using heuristic distances.
A* uses a best-first search and finds the least-cost path from a given initial node to one goal node. To achieve this, it uses a distance-plus-cost heuristic function (usually denoted by f(x)) to determine the order in which the search visits nodes in the tree. The distance-plus-cost heuristic is a sum of two functions:
in the tree. The distance-plus-cost heuristic is a sum of two functions:
β the path-cost function, which is the cost from the starting node to the current
node (usually denoted by π(π₯))
β and an admissible βheuristic estimateβ of the distance from each node to the
goal node (usually denoted by β(π₯)).
In mathematical terms π (π₯) is then given by:
π (π₯) = π(π₯) + β(π₯). (5.1.1)
The function h(x) is a mathematical way of using prior knowledge in order to expand the fewest possible nodes when searching for an optimal path, and so speed up the algorithm. For example, inside a building, h(x) might represent the straight-line distance to the goal (without taking into account the walls), since that is physically the smallest possible distance between any two points or nodes. The adjective "admissible" means that h(x) must not overestimate the distance to the goal. Dijkstra's algorithm is the particular case of the A* algorithm obtained by using h(x) = 0.
5.1.1 How A* works
The A* algorithm requires a starting node s and a goal node g. The steps of the algorithm are the following:
1. Add the starting node s to the "open set". The open set contains the nodes that must be explored and that might be inserted in the optimal path towards the goal node g.
2. Repeat the following:
a) Look for the node in the open set which has the lowest F value. We refer to this as the "current node".
b) Switch it to the "closed set".
c) For each of the nodes that are reachable from the current node:
• If it is in the closed set, ignore it. Otherwise do the following.
• If it is not in the open set, add it to the open set. Make the current node the parent of this node, and record the F, G and H costs of the node.
• If it is in the open set already, check to see if the path to that node costs less, using the G cost as the measure. A lower G cost means that this is a better path. If so, change the parent of the node to the current node and recalculate the G and F scores of the node.
d) Stop when:
• The goal node is added to the closed set. In this case the path has been found. Or,
• the goal node has not been found and the open set is empty. In this case there is no path.
3. Save the path. Working backwards from the goal node g, go from each node to its parent node until the starting node s is reached.
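The steps above can be sketched compactly with Python's `heapq` module as the open set. This is an illustrative sketch, not the thesis's MATLAB implementation: it uses 8-connected moves with the costs introduced later in Section 5.2 (straight steps cost 1, diagonal steps √2) and allows diagonal corner-cutting.

```python
import heapq
import math

def astar(grid, start, goal):
    """A* on a binary grid (1 = walkable, 0 = wall).
    Returns (cost, path) or (math.inf, []) if no path exists."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: math.hypot(goal[0] - p[0], goal[1] - p[1])  # admissible
    open_heap = [(h(start), 0.0, start)]      # entries are (f, g, node)
    g = {start: 0.0}
    parent = {start: None}
    closed = set()
    while open_heap:
        f, gc, cur = heapq.heappop(open_heap)  # lowest f first
        if cur in closed:
            continue                           # stale duplicate entry
        if cur == goal:                        # reconstruct via parents
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return gc, path[::-1]
        closed.add(cur)
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                if di == dj == 0:
                    continue
                ni, nj = cur[0] + di, cur[1] + dj
                if not (0 <= ni < rows and 0 <= nj < cols) or grid[ni][nj] == 0:
                    continue
                step = math.sqrt(2) if di and dj else 1.0
                ng = gc + step
                if ng < g.get((ni, nj), math.inf):
                    g[(ni, nj)] = ng
                    parent[(ni, nj)] = cur
                    heapq.heappush(open_heap, (ng + h((ni, nj)), ng, (ni, nj)))
    return math.inf, []
```

Instead of moving nodes from the open to the closed set explicitly, the sketch lazily skips stale heap entries, a common variant of step 2c.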
5.2 Applied A*
Since the buildings we have to deal with might have many walls or non-walkable zones, it is useful to compute the shortest paths from each survey location to the remaining ones. In fact, during the online phase, an RSS measurement is taken by the mobile client (MC) at least every two seconds. During this interval, the user is unlikely to move to a location far away from the point at which he or she is. Then, by calculating the real distances (the lengths of the shortest paths, taking into account possible obstacles) from one survey site to the other ones, we will be able to compute the ellipsoids considering only the locations that are most likely to be reached.
More specifically, fixing a survey point l with l = 1, …, M, we compute the real distance Dist(l, m) from the location l to the location m for every m = 1, …, M. Then, fixing a parameter DMAX, the locations m with m = 1, …, M such that
\[
Dist(l, m) \le DMAX \qquad (5.2.1)
\]
are selected. The ellipsoid for the location l is then computed, taking into account only the locations m that satisfy inequality (5.2.1). This approach solves in a smart and intuitive way the memory problem, since the number of variables and constraints to optimize remains reasonable also for large environments.
The binary images (black and white) are stored in the computer as binary matrices, where 0 represents the black color and 1 represents the white color. When the image represents a map, the black color is used for the non-walkable zones and the white color for the walkable zones.
In order to be able to compute the real distances in meters from one point of the building to another, the binary map needs to be resized. To perform this, a small distance d is fixed, depending on the degree of accuracy that we want to keep after the resizing. Usually, d is chosen to be equal to half of the minimum distance between all the locations. Given the GPS coordinates of the North-West corner and the South-East corner of the map, it is possible to compute the GPS coordinates of the other two corners (North-East and South-West) and the distances in meters between these four corners. Using these data, we build a grid over the image, in which each edge of a square represents a distance equal to d. Furthermore, the resized binary map is still a binary matrix, in which each element, 1 or 0, represents a square of the grid.
The A* algorithm, implemented in MATLAB, takes as input the resized binary image, a starting node (i_s, j_s) in the image matrix and a destination node (i_f, j_f) in the image matrix. The resulting path is a sequence of nodes in the image matrix (i_s, j_s), (i_1, j_1), …, (i_f, j_f). An example can be observed in Figures 5.1 and 5.2.
For the heuristic distance h, it is reasonable to take the distance from the current element (i, j) to the destination element (i_f, j_f) multiplied by d, that is:
\[
h(i, j) = d \cdot \sqrt{(i_f - i)^2 + (j_f - j)^2} \qquad (5.2.2)
\]
The transition costs are instead simply computed in the following way:
• cost of a horizontal movement = d,
• cost of a vertical movement = d,
• cost of a diagonal movement = d · √2.
The distance of the path is then computed by summing the costs of the transitions made from the starting point (i_s, j_s) to the final point (i_f, j_f). It is then possible to build the distance matrix Dist.
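Summing the transition costs over a returned node path can be sketched as follows; the function name is illustrative, and `d` is the grid spacing in meters defined above.

```python
import math

def path_length(path, d):
    """Metric length of a grid path: each horizontal or vertical step
    contributes d meters, each diagonal step d * sqrt(2)."""
    total = 0.0
    for (i0, j0), (i1, j1) in zip(path, path[1:]):
        # a step is diagonal exactly when both indices change
        total += d * math.sqrt(2) if (i0 != i1 and j0 != j1) else d
    return total
```

Applied to the path of Figure 5.2, which consists of eleven straight steps, this gives 11 · d; filling Dist(l, m) is then a matter of evaluating this length for every pair of survey locations.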
\[
\begin{pmatrix}
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 1 & 0 & 1 & 1 & 0 \\
0 & 1 & 0 & 1 & 0 & 1 & 0 \\
0 & 1 & 0 & 1 & 0 & 1 & 0 \\
0 & 1 & 0 & 1 & 0 & 1 & 0 \\
0 & 1 & 1 & 1 & 1 & 1 & 0 \\
0 & 1 & 1 & 0 & 1 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0
\end{pmatrix}
\]
Figure 5.1. An example of a binary matrix and the corresponding image
\[
\begin{pmatrix}
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & \bullet & \bullet & 0 & 1 & 1 & 0 \\
0 & \bullet & 0 & 1 & 0 & 1 & 0 \\
0 & \bullet & 0 & 1 & 0 & \bullet & 0 \\
0 & \bullet & 0 & 1 & 0 & \bullet & 0 \\
0 & \bullet & \bullet & \bullet & \bullet & \bullet & 0 \\
0 & 1 & 1 & 0 & 1 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0
\end{pmatrix}
\]
Figure 5.2. Example of the shortest path from the starting node (2, 3) to the final node (4, 6). The resulting path is: (2, 3), (2, 2), (3, 2), (4, 2), (5, 2), (6, 2), (6, 3), (6, 4), (6, 5), (6, 6), (5, 6), (4, 6), which corresponds to the red line in the image.
Chapter 6
ANALYSES
6.1 Computation of the distance matrix and of the ellipsoids
To evaluate the performance of the A* algorithm and of the ellipsoid method taking into account only the reachable locations, two different datasets have been considered. The first comes from the first floor of the Hansa Mall in Malmö and it consists of 38 locations, with 160 WiFi measurements for the locations l with l = 1, …, 35 and 80 WiFi measurements for the locations l with l = 36, …, 38. The second comes from the ground floor of the Hansa Mall in Malmö and it consists of 30 locations, each with 160 WiFi measurements.
In Figure 6.1 it is possible to observe the two maps, of the ground floor and of the first floor, with the plotted locations.
(a) (b)
Figure 6.1. (a) First ο¬oor (b) Ground ο¬oor
After resizing the map, A* has been applied in order to obtain the distance matrix Dist, where Dist(l, m) is the real distance between the location l and the location m, taking into account the walls. In Figures 6.2-6.4, it is possible to observe a comparison between the distances computed from some locations l to other survey sites without taking into account the walls and the distances computed taking the walls into account.
In Figure 6.5, it is clearly visible how the distance from the red location to the yellow one, if walls are taken into account, is much higher (44 m) than in the case in which walls are not taken into account (16 m). Afterwards, for each location l, only those sites that are reachable are selected. Here, we set the parameter DMAX equal to 12 m, and form a vector RS(l) that contains only the locations m that have a distance of less than DMAX meters from the location l. The ellipsoids are then computed for each location l, taking into account only the sites m stored in the vector RS(l). More specifically, the problem to solve is the following:
\[
\begin{aligned}
\underset{\Phi,\, s}{\text{minimize}} \quad & \sum s_{nm} \\
\text{subject to} \quad & \mathbf{z}_n(l)'\, \Phi_l\, \mathbf{z}_n(l) \le 1 + s_{nl}, \quad \forall n = 1, \dots, N(l) \\
& \mathbf{z}_n(m)'\, \Phi_l\, \mathbf{z}_n(m) \ge \gamma - s_{nm}, \quad \forall m \in RS(l), \ m \ne l, \ \forall n = 1, \dots, N(m) \\
& \Phi_l \succeq 0, \\
& s_{nm} \ge 0, & (6.1.1)
\end{aligned}
\]
where the optimization variables are the slack variables s_{nm} and Φ_l, and where N(m), with m = 1, …, M, denotes the number of measurements at the location m, typically being different for every location. Here, the values of γ used are 1.075, 1.05 and 1.025.
The main difference between the problem in (6.1.1) and the one in (4.2.2) is that the number of points that have to lie outside the ellipsoid for the location l is notably smaller, since we now consider only the locations that are reachable. On the other hand, as can be observed, there are no constraints for the points of locations further than 12 meters away. It can then happen that some RSS measurements of some location l would be classified into ellipsoids of far locations. However, this does not constitute a problem, since the far locations are already cut out by selecting only the reachable ones.
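Selecting the reachable sets RS(l) from the distance matrix is a one-liner per location; the following numpy sketch (function name illustrative) shows the bookkeeping.

```python
import numpy as np

def reachable_sets(Dist, dmax=12.0):
    """For each location l, the vector RS(l): the indices m != l whose
    A*-distance Dist(l, m) is at most dmax meters."""
    M = Dist.shape[0]
    idx = np.arange(M)
    return [idx[(Dist[l] <= dmax) & (idx != l)] for l in range(M)]
```

The separation constraints of (6.1.1) are then generated only for the indices in `RS(l)`, which is what keeps the problem size reasonable.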
(a) (b)
Figure 6.2. (a) With no walls (b) With walls
(a) (b)
Figure 6.3. (a) With no walls (b) With walls
(a) (b)
Figure 6.4. (a) With no walls (b) With walls
(a) (b)
Figure 6.5. (a) With no walls (b) With walls. In this Figure, it is clearly visible
how the distance from the red location to the yellow one, if walls are taken into
account, is much higher (44π) than the case in which walls are not taken into
account (16π).
6.2 Classiο¬cation
The next step in the analysis is the classification. For each location l, we compare the percentage of its measurements that are correctly mapped to the site l when using:
• the Gaussian likelihood, taking into account all the locations;
• the Gaussian likelihood, taking into account only the reachable locations in RS(l);
• the ellipsoid distance, taking into account only the reachable locations.
More specifically, setting the initial counters Right1(l) = 0, Right2(l) = 0 and Right3(l) = 0, for all the measurements q = q_n(l) at the location l, the following quantities are computed:
\[
\hat{m}_1 = \arg\max_{m = 1, \dots, M} \hat{P}(\mathbf{q} \mid \mathbf{F}(\mathbf{p}_m)),
\]
\[
\hat{m}_2 = \arg\max_{m \in RS(l)} \hat{P}(\mathbf{q} \mid \mathbf{F}(\mathbf{p}_m)),
\]
\[
\hat{m}_3 = \arg\min_{m \in RS(l)} \begin{pmatrix} \mathbf{q}' & 1 \end{pmatrix} \Phi_m \begin{pmatrix} \mathbf{q} \\ 1 \end{pmatrix}.
\]
When \(\hat{m}_1 = l\), \(\hat{m}_2 = l\) or \(\hat{m}_3 = l\), we increment Right1(l), Right2(l) or Right3(l), respectively.
We perform this for all the locations l with l = 1, …, M. Then, it is possible to obtain the percentage of points, for each method, that are mapped to the right location. The final results are available in the following tables, both for the ground floor and for the first floor.
• First column: number of the location.
• Second column: percentage of points mapped to the right location using the Gaussian likelihood, taking into account all the locations.
• Third column: percentage of points mapped to the right location using the Gaussian likelihood, taking into account only the reachable ones.
• Fourth column: percentage of points mapped to the right location using the ellipsoid distance, taking into account only the reachable ones.
The expectations for the ellipsoid method taking into account only the reachable locations are high, and the results confirm them. In fact, as can be observed in all the tables, the percentages of points that are mapped to the right survey site using this method are much larger than when using either the Gaussian method with all the locations or the Gaussian method with only the reachable locations.
It is also worth noticing that the percentage of points mapped with the Gaussian method with only the reachable locations is higher than the percentage mapped with the Gaussian method with all the locations, since far locations are not considered.
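The ellipsoid-distance rule \(\hat{m}_3\) and the counter Right3(l) can be sketched as follows; the function names and the homogeneous form z = [q; 1] used to evaluate z'Φ_m z are the only assumptions beyond the definitions above.

```python
import numpy as np

def ellipsoid_classify(q, Phis, candidates):
    """m_hat_3: the candidate location m minimising the ellipsoid
    distance [q' 1] Phi_m [q; 1]."""
    z = np.append(q, 1.0)                    # homogeneous point z = [q; 1]
    return min(candidates, key=lambda m: z @ Phis[m] @ z)

def right3_percentage(Q, l, Phis, candidates):
    """Share (in percent) of the measurements Q (N x D) of location l
    that are mapped back to l, i.e. the counter Right3(l) over N."""
    hits = sum(ellipsoid_classify(q, Phis, candidates) == l for q in Q)
    return 100.0 * hits / len(Q)
```

Passing `candidates = RS(l)` restricts the minimisation to the reachable locations, exactly as in the third classification rule.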