Accurate Network Measurement Environment
Eid Araache, Feras Tanan and Ousama Esbel
Abstract—Studying network performance is vital to providing
better service and quality to consumers, and the da_sense system is
a platform used to collect network information so that it can be
measured and studied further. In this paper, improvements to the
da_sense system are made: we present the approach and the
implemented API that realize the new structure of the coverage
points of the da_sense system. The API authenticates, validates
and processes requests without affecting the current system,
while being capable of processing the newly structured JSON
requests. In addition, the system converts the newly structured
data daily into the aggregated schema, for consistency with the
current system’s API and the network map visualization.
Index Terms—Coverage Points; API; da_sense; PHP Laravel;
Postgres
I. MOTIVATION
NOWADAYS, consumers expect a faster and more reliable
mobile experience. The carriers are in a continuous
race to add network capacity, introduce new technologies
and expand their coverage areas in order to meet growing
consumer expectations and offer a better mobile experience to
their subscribers. Therefore, network performance monitoring
is key to identifying the behavior of the network and to
optimizing its performance, and that is one of da_sense’s
functions.
More specifically, the coverage points uploaded to and stored
in the da_sense system treat each location separately from
other related locations, even though they share the same coverage
area, which causes performance bottlenecks because
unnecessary data is stored. In addition, da_sense does not take
the upload throughput speed into consideration, and it was
only possible to transmit a subset of the collected data to the
server.
Furthermore, the current system API is not built on
a solid, documented platform, so its complexity and
authentication mechanism are matters of concern.
II. INTRODUCTION
da_sense is a platform that collects sensor data from various
sources in high quantity and quality; the coverage measurements,
for instance, are collected from different cellular network providers
in Germany. These data are made available to other platforms
through its API. The collected data comes from:
• Fixed Sensor Infrastructure - stationary wired sensors.
• Wireless Sensor Networks - environmental sensors placed
in trams.
• Participatory sensing - mobile phones.
In other words, da_sense’s mission is to capture and process
various types of data, such as temperature and humidity, noise
and network coverage, and to provide them to a larger audience.
Technically, da_sense is hosted on a LAPP stack server that
consists of Linux, Apache2, PostgreSQL and PHP. The
system logic and backend are developed in vanilla PHP 5.3.*.
The data is stored in a PostgreSQL database that consists of
multiple schemas. Periodically, the system runs scripts that
filter the collected raw data into processed data that can be
used for visualization.
In this paper our focus is on the network coverage data
gathered specifically from participatory sensors. We first
discuss the challenges and the options that eventually led
to our implementation; the implementation is then discussed,
covering the following aspects:
• Database Remodeling - a model that is compatible with the new
JSON structure.
• Platform - creating an API that is compatible with the new
model.
• Testing - how the API has been tested.
• Filtering - filtering the new schema into the aggregated
schema used by the MoNa server [1].
III. SYSTEM DESIGN AND ALTERNATIVES
In this section we enumerate the limitations of the current
system and the possible solutions that can be applied.
A. Platform
As mentioned earlier, the goal of this paper is to improve
the structure of the coverage network data and make it more
robust. One option is to modify the current API so that it
adapts to the new structure. However, the current API is too
complicated: it is written in plain PHP with no framework or
documented structure, and the whole API is based on a single
endpoint that runs multiple nested if/else expressions to map
the parameters to the right directory, class and function to
process the request. In addition, to process a request, the
requester must do the following steps:
• Send login request.
• Send data.
• Send logout request. (optional)
Three requests are made to upload a file, which consumes time
and resources. With that being said, creating a new API
is considered inevitable in order to solve these issues at the
root, so the choice is whether to stick with PHP or to use
a faster, lighter platform such as Node.js.
The advantage of using PHP is that it allows us to maintain the
server stack without adding further server packages; PHP also
works better with SQL, whereas Node.js works better
with NoSQL [2] since it is built around JSON.
B. Schema
Changing the structure of the uploaded file requires
us to modify the schema to accommodate the changes.
Extending the current schema would most likely break the
current API, and since the scope of this lab is to cover only the
endpoints regarding the coverage points, we would in that case
still need to maintain the old API.
On the other hand, creating a new schema that
only deals with storing the new coverage point values
permits us to focus solely on deploying the new API
without worrying about backward compatibility; the current
API is thus not affected by the changes, and the
filtering job can perform its tasks normally without
modification.
C. Data Storage
The uploaded data is in JSON format, and its structure is
not consistent and tends to change, which requires modifying the
schema regularly. Therefore, it might be beneficial to change
the data storage to a NoSQL database such as MongoDB. MongoDB
is a document-oriented database designed to store JSON-like
documents [3]. Each record stored in a document can have a
different structure from the previous one, which makes it a
suitable solution to both mentioned issues. However, a new
issue would arise, as the filtered data is stored in a PostgreSQL
schema. As a result, the new API would be forced to connect
to multiple databases, which is a significant bottleneck for the
system, in addition to the need to install another stack on the
server to support MongoDB.
D. Chosen Approach
After studying the options at hand, it was found most
efficient to use PHP as the base platform, in order to maintain the server
stack and to keep both the current and the new platform in the same
language for ease of development. However, the new platform’s
functionality is extended by using the Laravel framework.
Laravel is a powerful, secure MVC framework with
expressive, elegant syntax that is very structured and easy
to use [5]. It has many convenient built-in components that
suit the desired outcome, such as a built-in authentication system
that can be customized to adapt to any structure (so there is no
need to re-register the users), API functionality, request
validation and unit testing. The reason Laravel is a better
choice than other frameworks such as Yii, Slim and CakePHP
is that Laravel can extend its core functionality as much as the
project requires, and it has clean and simple routing. Not to
mention, Laravel has an active and growing community and has
become one of the most popular PHP frameworks according
to [4]. Finally, Laravel supports unit testing out of the box,
which makes it easier to write test cases.
Regarding schema and data storage, PostgreSQL will
be used as the data storage, since the processed schema is already hosted in
PostgreSQL; however, instead of extending and modifying the
existing schema, we create a new schema that contains only
the modified tables accommodating the structure changes.
As for filtering data, the current filter job is not
affected by the chosen approach; it is still able to run its
aggregation on the data collected through the current API. Besides,
the chosen approach is beneficial for filtering data, because
Laravel has a built-in scheduler that allows the application to
run administrative tasks periodically on the server without the
need to go through the server’s configuration.
IV. IMPLEMENTATION
The API development is divided into two parts: remodeling
the schema and developing the platform.
A. Remodeling Schema
In order to understand the changes that must be made, a
comparison between the currently uploaded data and the newly
structured data is needed. Figure 1 demonstrates the
current data and the new attributes. As Figure 1 shows, the updated
schema decouples locations and cells from values, where each
can have multiple sets of different data. The updated
schema also adds extra attributes, such as throughput, and alters
the structure of existing ones, such as ping.
1) Entity Relation Diagrams: The previous data set is
stored and maintained in the data schema; its structure and
attributes are illustrated in Figure 2.
The new structure changes the series table and every
table that has a relationship with it. Figure 3 demonstrates the
new schema after applying the changes. The tables labeled
in black are the tables affected by the
changes, the white tables have slight
changes, while the grey tables maintain
their previous structure and are not affected by the changes.
The white and grey tables will not be added to the new
schema for the time being, because they are populated by
API endpoints that are outside this lab’s scope. However, their
migrations have been written to help with future work.
2) Attributes Mapping: Table I shows the new schema’s
attribute fields mapping to the old and current schemas.
B. Platform
To solve the mentioned issues at the root and adapt to the
new structure, we decided to create a new API using PHP
with the Laravel framework, as mentioned before. In this section,
the deployment and the main documentation points are
discussed as follows:
• Migrations, seeds and models for the new Schema.
• API Authentication.
• Routes, Controllers and Managers.
• JSON Validation.
• HTTP request format.
• Logging requests and responses.
1) Migrations: Laravel provides expressive migrations that
act like version control for the database. The migrations
for the updated schema can be found in CoveragePoints-
>database->migrations. The migration classes within contain all
the necessary tables that are marked in black, white and grey
in Figure 3. However, for this lab only the tables marked in black
will be created, as not all API endpoints have been added to
the API yet. In the next increments, as more API endpoints are added,
their related migrations will be created as well. Here is a sample
of creating the coverage_value_cells table:
public function up()
{
    Schema::create('coverage_value_cells', function (Blueprint $table) {
        $table->increments('id');
        $table->integer('sensor_type_id')->unsigned();
        $table->integer('cell_id')->unsigned();
        $table->integer('lac')->unsigned();
        $table->string('network_type', 32);
        $table->string('network_provider', 32);
        $table->integer('asu');
        $table->integer('signal_strength_db');
        $table->boolean('is_active');
        $table->timestamp('update_timestamp');
        $table->integer('coverage_value_id')->unsigned();

        $table->foreign('coverage_value_id')->references('id')->on('coverage_values');
        $table->foreign('sensor_type_id')->references('id')->on('sensor_types');
    });
}
The above snippet shows the default syntax for all tables; the only
differences are the number and formats of the columns in each
table, as well as the relationships among them. The main points
to consider:
• Schema is the class for the database schema; here it creates
a table with the name coverage_value_cells.
• The increments function makes id a primary,
auto-incrementing key.
• The unsigned method makes an integer column
always positive. It should be set on columns that will
have a foreign-key relationship with other tables.
• coverage_value_id and sensor_type_id are both foreign
keys that reference the id of coverage_values and
sensor_types, respectively.
The database format and syntax follow the standard SQL naming
convention [6], which is:
• Snake case for column names.
• A foreign-key column is named
nameOfTheRelatedTable_TheReferencedColumn.
To run the migrations and create the tables, run the following
in the terminal of the project:
php artisan migrate
2) Tables Seeding: Some tables have pre-populated data,
such as sensor_types and device_types, so to prepare the schema
we need to populate the seeds on launch by running the
following command in the project’s directory terminal:
php artisan db:seed
The seeders can be found in CoveragePoints->database->seeds.
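As a rough illustration, a seeder for sensor_types could look like the minimal sketch below; the concrete type names and the name column are assumptions for illustration only, not the actual production values.

use Illuminate\Database\Seeder;
use Illuminate\Support\Facades\DB;

// Hypothetical seeder for the sensor_types table; the rows shown here
// are placeholders for the real, pre-populated measurement types.
class SensorTypesTableSeeder extends Seeder
{
    public function run()
    {
        DB::table('sensor_types')->insert([
            ['name' => 'asu'],
            ['name' => 'signal_strength'],
            ['name' => 'ping'],
            ['name' => 'throughput'],
        ]);
    }
}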
3) Models: Each migration should have a corresponding
model that is used along with the Eloquent ORM to perform
create, read, update and delete (CRUD) operations on the
tables. The models are available in CoveragePoints->app->API-
>Models. Each model should define the following:
• Fillable fields - the fields that are inserted by the user.
• Casts - casting some fields to Boolean or Timestamp.
• Relationships - the relations with other tables.
Based on Figure 3, the series table has a one-to-many
relationship with coverage_values; this is defined in the
CoverageValue model as follows:
public function series(){
return $this->belongsTo(Series::class);
}
While in the Series model:
public function coverageValues(){
return $this->hasMany(CoverageValue::class);
}
Naming conventions in Laravel are very important: if the table
or foreign-key column names do not follow Laravel’s conventions,
a second and a third argument must be passed to the hasMany
method, as sketched below.
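A minimal sketch of such an explicit definition is shown here; the key names are assumptions used purely for illustration.

public function coverageValues()
{
    // 'series_id' and 'id' are the assumed foreign and local keys; they
    // only need to be passed when the default naming does not match.
    return $this->hasMany(CoverageValue::class, 'series_id', 'id');
}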
4) API Authentication: The current API authenticates with
a normal login and logout mechanism, as explained previously.
In the new API, an authorization token is sent along
with the HTTP headers. The header format is:

Authorization: username:password_md5:password_sha

The reason behind this format is that many users in
the system are still using the old MD5 password, which
cannot be neglected.
The authentication is verified in CoveragePoints->app-
>Http->Middleware->BasicAuthentication.php. The following
code snippet demonstrates the main logic behind the
authentication.
public function handle($request, Closure $next)
{
    $authorization = $request->header('Authorization');

    // Check whether the authorization header is present.
    if (is_null($authorization))
        return response()->json(['success' => false,
            'message' => 'Invalid credentials'], 401);

    $creds = explode(':', $authorization);
    if (count($creds) != 3)
        return response()->json(['success' => false,
            'message' => 'Invalid format'], 401);

    $user = User::whereUsername($creds[0])->first();
    if (is_null($user))
        return response()->json(['success' => false,
            'message' => 'Invalid username'], 401);

    if ($creds[1] == $user->password_md5 ||
        $creds[2] == $user->password_sha) {
        // Add the authenticated user to the request.
        $request->attributes->add(['user' => $user]);
        return $next($request);
    }

    return response()->json(['success' => false,
        'message' => 'Invalid credentials'], 401);
}
BasicAuthentication does the following:
• Gets the authorization string from the request header.
• If the authorization header is empty, sends a 401 status code
with a false-success JSON response.
• Otherwise, splits it into three pieces: the username, the MD5
password and the SHA password, respectively.
• Fetches the user with that username; if no user is found, sends
a 401 status code.
• If the user is found, checks both passwords against their related
fields.
• If either matches, proceeds to the next step; otherwise, sends a
401 status code with an invalid-credentials JSON response.
5) Routes, Controllers and Managers: After the authentication
has been verified and accepted, the request is dispatched to the
corresponding route. All API routes are wrapped within an api
group that applies the API group middleware.
Route::group(['middleware' => 'api', 'prefix' => 'api/v2'], function () {
    Route::post('/coverage-value', 'CoverageValuesController@store');
});
The api middleware group consists of:
• BasicAuthentication - a class to verify the request
authentication.
• LoggingRequest - a class to log the request and response,
described later.
After the route has been matched, the request is sent to
the corresponding controller@method. For example, from the
previous snippet, the route /api/v2/coverage-value is directed
to the store method of the CoverageValuesController.
The controller method must receive the responsible manager as a
dependency injection, to follow the D in the SOLID design principles
[7]. The manager then creates, updates, shows or deletes
the record. In our case we are uploading a coverage point,
thus the controller injects the CoverageValueManager
and creates the record as shown:
public function store(CoverageValueRequest $request, CoverageValueManager $manager)
{
    $manager->create($request);

    return response()->json([
        'success' => $manager->isSuccessful(),
        'message' => $manager->getErrorMessage(),
    ]);
}
Any manager should extend the abstract class APIManager,
which provides the main API functionality such as (a minimal
sketch follows this list):
• CRUD abstract functions.
• A response abstract function.
• Setting success and error messages.
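As a hedged illustration, such an abstract manager could look roughly as follows; only the method names already used in the controller snippet above (create, isSuccessful, getErrorMessage) are taken from this paper, the rest is an assumption.

// Illustrative sketch of the abstract APIManager; the internal fields
// and the fail() helper are assumptions, not the actual implementation.
abstract class APIManager
{
    protected $success = false;
    protected $errorMessage = '';

    // Concrete managers implement the CRUD operations they need.
    abstract public function create($request);

    public function isSuccessful()
    {
        return $this->success;
    }

    public function getErrorMessage()
    {
        return $this->errorMessage;
    }

    protected function fail($message)
    {
        $this->success = false;
        $this->errorMessage = $message;
    }
}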
6) Request - JSON Validation: Request validation is initiated
when the request is passed to the controller; the controller’s
method injects the validation class in its parameters, again
following the D in the SOLID design principles, as shown
in the code snippet above. In general, the request validation
classes are available in CoveragePoints->app->Http-
>Requests, where all created request classes extend the Request
class. For coverage points, the uploaded JSON is directed to
CoverageValueRequest, where it is checked against the following:
• Whether the device identity sent is stored in the devices table.
• Whether the measurement type is stored in the sensor_types table.
• The format of the JSON.
The format of the JSON is verified using Laravel’s built-in
validation, where the fields are checked for their type,
whether they are required, and their range, as sketched below.
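As a hedged illustration, a rules() excerpt of CoverageValueRequest could look like the sketch below; the field paths loosely follow the mapping in Table I, but the exact names and constraints are assumptions (and wildcard array rules require Laravel 5.2 or later).

public function rules()
{
    return [
        // Illustrative rules only; the real request class may differ.
        'deviceIdent'                     => 'required|string',
        'measurementType'                 => 'required|string',
        'values.locations'                => 'required|array|min:1',
        'values.locations.*.longitude'    => 'required|numeric|between:-180,180',
        'values.locations.*.latitude'     => 'required|numeric|between:-90,90',
        'values.cells.*.signalStrengthDB' => 'integer',
    ];
}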
7) HTTP Request Format: To successfully send a request to the
API, the headers must consist of the following:
• Authorization: username:password_md5:password_sha
• Content-Type: application/json
• Accept: application/json
The JSON data should be appended to the HTTP request as
raw POST data. The response of the request is in JSON
format, with a success attribute indicating the status of the
request.
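A hypothetical request against the coverage value endpoint could therefore look as follows; the host name and credential hashes are placeholders, only the endpoint and headers are taken from this paper.

curl -X POST https://dasense.example.org/api/v2/coverage-value \
     -H "Authorization: alice:<password_md5>:<password_sha>" \
     -H "Content-Type: application/json" \
     -H "Accept: application/json" \
     --data @coverage_value.json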
8) Logging Requests and Responses: Inevitably, any API
requires continuous monitoring of the requests made to its
endpoints. One way is to log each API request and response,
so that important information about the endpoints can be captured
and analyzed.
As mentioned before, the api middleware group contains
a LoggingRequest class that is triggered when the response
has been sent to the requester and the call has been terminated. The log
files are stored as daily log files in CoveragePoints->Storage-
>api->logs, where the parameters of failed requests are recorded
together with the response generated by the controller and manager. Here
is the code snippet:
public function terminate($request, $response)
{
    // Log the failure responses only.
    $responseArray = json_decode($response->getContent(), true);

    if (!$responseArray['success']) {
        // Create daily log files in the specified path.
        Log::useDailyFiles(storage_path() . '/api/logs/results.log');

        // Store the request and the response.
        Log::info(['request' => $request->all(), 'response' => $response]);
    }
}
V. TESTING AND EVALUATION
To determine the robustness and functionality of the
API, unit tests have been created. Unit testing is a specialized
form of automated testing [8]. Laravel integrates the
PHPUnit package out of the box, along with many helper
methods that allow the developer to test the
application expressively [9].
To better define the desired test behaviors, a general abstract
class named APITester was created so that the extending
test classes inherit common functionality. The abstract class
and the test classes are located in CoveragePoints->tests. One of
the most important attributes in APITester is the Faker, which is
used to generate fake values for the API requests in
the tests. The test results are stored in a separate database
called DaSenseTest, which should hold the same schemas as the
development database. For the coverage value endpoint, the
following test cases are created:
• Check status 401 and see JSON response when sending
a request with false credentials.
• Check status 422 and see JSON response when sending
a request with no cells or Wi-Fi points.
• Check status 200 and see JSON with success true when
pushing cells.
• Check status 200 and see JSON with success true when
pushing Wi-Fi points.
• Check status 200 and see JSON with success true when
pushing throughput along with either Wi-Fi or cell.
As mentioned earlier, the JSON is generated with very few
fixed values; here is a snippet that generates the location
points for the test JSON:
protected function addLocation()
{
    $howMany = $this->fake->numberBetween(1, 4);

    for ($i = 0; $i < $howMany; $i++) {
        $this->coverageValue["locations"][] = [
            "longitude" => $this->fake->longitude(),
            "latitude"  => $this->fake->latitude(),
            "altitude"  => $this->fake->numberBetween(-100, 100),
            "accuracy"  => $this->fake->randomFloat(2, -100, 100),
            "speed"     => $this->fake->randomFloat(2, -100, 100),
            "timestamp" => $this->fake->dateTimeThisYear()->format("Y-m-d H:i:s"),
        ];
    }

    return $this;
}
First, the method randomly determines how many location
points to add, and then the Faker generates realistic data for each
of them. To run the tests, simply run phpunit in the terminal of
the project directory. A sketch of one of the test cases listed
above follows.
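For instance, the first case in the list above (invalid credentials) could be written roughly as follows inside a class extending APITester; the endpoint and header format are taken from the previous sections, everything else is an illustrative sketch.

public function testRejectsInvalidCredentials()
{
    // Placeholder credentials that are not present in the users table.
    $headers = ['Authorization' => 'unknown:wrong_md5:wrong_sha'];

    // Send an otherwise empty coverage value upload with bad credentials.
    $this->json('POST', '/api/v2/coverage-value', [], $headers);

    $this->assertResponseStatus(401);
    $this->seeJson(['success' => false]);
}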
VI. FILTERING DATA
As mentioned earlier, the collected coverage points are raw
data that need to be filtered and aggregated in order to be
beneficial. Therefore, on a daily basis, the system runs a script
to filter out bad points, aggregate the rest and store them in a
different schema.
A. Overview
The filtering depends heavily on the measurement types; in
the current system, four types are taken into consideration:
ASU, signal strength, ping and download speed. In addition to
the measurement type, data is filtered by network provider and
network type. Each coverage point is filtered by a network
provider such as Telekom, O2, Vodafone and others, and also
by network technology such as 2G, 3G and 4G. The
combination of network provider and network type produces
a cluster value that can be used to identify a provider’s
pros and cons for each network technology; thus it is used to
distinguish different datasets.
Besides the general filtering applied to all points, each type has a
unique filter to remove bad readings before storing into the
aggregated schema. These are the filters applied to each type:
• ASU - only readings with an ASU value smaller than 32 and
bigger than -1 are kept.
• Signal strength - only readings with an empty SSID are kept,
to filter out Wi-Fi points.
• Ping - only readings with an empty SSID and where five times
the minimum ping value is bigger than the maximum ping
value are kept.
• Download speed - only readings with an empty SSID and a
download rate smaller than 300K and bigger than 0 are kept.
Also, the accuracy must always be valid: it must be an
unsigned integer with a value smaller than 100.
After the data has been filtered, it is recorded in the
data_processed.data_values_cleaned_for_coverage table,
where each type is stored along with one and only one
location of the main coverage point.
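To illustrate, the ASU condition above could be expressed with Laravel's query builder roughly as follows; this is a sketch, not the actual filter job code.

// Keep only cell readings whose ASU value lies in the valid range.
$validAsuCells = DB::table('coverage_value_cells')
    ->where('asu', '>', -1)
    ->where('asu', '<', 32)
    ->get();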
B. Key challenges
With the new schema, each coverage point now
includes multiple cells, Wi-Fi points, pings, the download rate
and the newly added upload rate. Furthermore, the device
also records the different locations at which the readings
occurred, to make them more reliable and accurate.
This unfortunately forms the main challenge, because the
data_processed schema is designed to store a single location
for each type; therefore we need to find an approximate
location for each type.
Another thing to consider is the new structure of throughput
and ping, where each can have multiple samples, as can
be seen in Figure 1. These samples can either be recorded
individually, where the location is determined by the sample’s
timestamp, or averaged over all samples of a
ping/throughput, using timeStart and timeEnd to determine
the location area of the reading.
In addition, two new types must be introduced, one
indicating the upload speed and the other the measurement type
cell_id. Since the filtering is based
on network type and provider, this lab focuses only on
filtering coverage values with cells.
C. Implementation
Basically, each cell, throughput or ping in its respective
array is stored individually. For example, if cells has
three groups, then each group becomes a separate record in
the database table. To determine its location, we take the
first location recorded directly after the group’s timestamp and
the last location recorded before it; these two locations are then
interpolated to produce a single location that is relatively close
to the measurement type’s reading. Algorithm 1 demonstrates
how the interpolation works.
Data: type.timestamp of the measurement type to be stored
Result: a single aggregated dataset consisting of location, speed, accuracy and altitude
1 - Initializing:
1: location_i ← locations().where(timestamp >= type.timestamp).last();
2: location_f ← locations().where(timestamp <= type.timestamp).first();
2 - Calculate the location's dataset:
3: X_i ← X_i + (δtime / Δtime) · ΔX // X_i can be the speed, geopoint, accuracy or altitude
Algorithm 1: Interpolate two locations
The general idea of the algorithm is that, after determining the
two locations based on the group’s timestamp as shown in step 1,
the rest of the location’s information, such as longitude, latitude,
altitude, speed and accuracy, is calculated using a weighted-average
approach based on the timestamps, as demonstrated
in step 2 of the algorithm. First, the time difference between
the group’s time and the location recorded before it, δtime, is
divided by the overall time difference between the two locations,
Δtime. This ratio is then multiplied by the difference in the data
values, and the result is added to location_i’s data, so that the
closest location receives more weight (a minimal code sketch follows
the list below). There are certain cases where interpolating
locations is not necessary, such as when:
• location_i and location_f both have the same timestamp as the
measurement type; then only one location is considered.
• location_i is not found; then location_f becomes the type’s
location.
• location_f is not found; then location_i becomes the type’s
location.
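A minimal sketch of the interpolation in plain PHP is given below; it assumes the two bracketing locations are passed as associative arrays with Unix-timestamp fields, which is an illustrative simplification of the actual job.

// Interpolate between the location recorded before ($before) and after
// ($after) the measurement timestamp, weighting by time as in Algorithm 1.
function interpolateLocation(array $before, array $after, $typeTimestamp)
{
    $total = $after['timestamp'] - $before['timestamp'];        // Δtime
    if ($total == 0) {
        return $before;                                         // identical timestamps
    }
    $weight = ($typeTimestamp - $before['timestamp']) / $total; // δtime / Δtime

    $result = ['timestamp' => $typeTimestamp];
    foreach (['longitude', 'latitude', 'altitude', 'speed', 'accuracy'] as $field) {
        $result[$field] = $before[$field]
            + $weight * ($after[$field] - $before[$field]);
    }

    return $result;
}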
For ping and throughput data, each sample is stored as a
separate record, where the sample’s timestamp is used to
determine the locations recorded directly before and after the
sample for the interpolation.
The created job is located in CoveragePoints->app-
>Console->Commands->FilterData.php. The job can be triggered
manually by running the following in the terminal of
the project’s directory:
php artisan filter:data
The filter job is configured in Laravel, which provides a suitable
way to schedule cron jobs. The cron job is scheduled
in CoveragePoints->app->Console->Kernel.php, where it has
been configured to run the previous command as follows:
$schedule->command(’filter:data’)
->dailyAt($time)
->sendOutputTo(storage_path(CRON_LOG_PATH));
The time can be configured by sending a RESTful PUT
request to the API endpoint /api/v2/scheduler/update with raw JSON
data containing the time at which the scheduler should run; a
hypothetical example is shown below. The result of the filter is stored
in the path /Storage/logs/cron_results.log.
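A hypothetical call could look as follows; the JSON key name ("time"), the credential hashes and the host are assumptions, only the endpoint itself is taken from the text.

curl -X PUT https://dasense.example.org/api/v2/scheduler/update \
     -H "Authorization: admin:<password_md5>:<password_sha>" \
     -H "Content-Type: application/json" \
     --data '{"time": "03:00"}'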
To recapitulate, filtering the collected coverage values can
be executed via three approaches:
• In the terminal, by running php artisan filter:data.
• Via the cron job that runs daily at the scheduled time.
• Manually, by sending a RESTful GET request to
/api/v2/filter/run with an authorization header that has an
administrative role.
D. Testing
The filtering can be tested without applying the changes to
the database by setting the API_ENV attribute in .env to testing.
To store the filtering test results instead, simply
map DB_DATABASE to the DaSenseTest database without
changing the environment to testing; the environment can then
be either local or production. An illustrative .env excerpt follows.
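The excerpt below only mirrors the attribute names and values mentioned in this section; the rest of the file is omitted.

# Run the filter in the test environment without applying the changes.
API_ENV=testing

# Or keep the environment local/production and point the connection at
# the test database to persist the filtering test results.
DB_DATABASE=DaSenseTest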
VII. CONCLUSION
The current way of collecting coverage points in da_sense is
insufficient, so a new structure had to be introduced that enhances
the process of collecting and filtering these points. The current API
is not capable of processing the newly structured data and therefore
had to be changed. However, because of its flaws and
disadvantages, it was found better to develop a new API
that adapts to the new structure. The API functionality, such
as authentication, validation, processing and logging, has been
implemented, with only the coverage point endpoints added to
the API so far. Data-driven unit tests have been used to test
the API functionality in general and the coverage point endpoints
specifically.
The collected coverage points are just a collection of raw
data that needs to be filtered and processed in order to
produce meaningful information. Daily, the system runs
a job that processes these data based on measurement
types, network providers and network technologies and stores
them in another database schema, where they can be used for
studying, monitoring and visualization. Unfortunately, the newly
structured data is not directly suitable for this schema, and therefore
the filter job had to be adapted to map the newly structured data
to the aggregated schema’s fields. The main obstacle was that
the aggregated schema accepts only a single location for each
record, whereas the new structure has multiple locations
for each coverage point. To tackle this issue, the two locations
nearest to the coverage point type’s timestamp are interpolated
to produce a single location that is relatively close to it.
VIII. FUTURE WORK
The API is far from ready to replace the current
API: the rest of the current API endpoints should be added
to the new API, along with unit tests for each
endpoint. Besides, the API can be hardened by
limiting the rate at which any individual requester can make
requests, i.e., throttling requesters that hit a particular
API endpoint within a short period of time; this helps prevent
DDoS attacks and keeps the application alive.
Regarding filtering, the current schema in which the aggregated
data is stored does not accommodate the new
coverage point structure well, and therefore it would be better to
come up with a structure and schema that make
the best of the newly modified structure. In addition,
coverage values with Wi-Fi points are not filtered, as these
values give no indication of which network provider is used.
Therefore, updating the Wi-Fi readings so that the
network provider can be determined, and then adding them to the
filter job, should be considered as a next step.
ACKNOWLEDGMENT
We would like to express our gratitude to our supervisor
Fabian Kaup. His guidance and dedicated involvement in every
step of this lab were key to accomplishing this paper.
REFERENCES
[1] da_sense, MoNa. [Online]. Available: http://mona.ps.e-technik.tu-darmstadt.de/ [Accessed: 6-May-2016].
[2] R. Aghi, S. Mehta, R. Chauhan, S. Chaudhary, and N. Bohra, "A comprehensive comparison of SQL and MongoDB databases," International Journal of Scientific and Research Publications, vol. 5, no. 2, Feb. 2015.
[3] "MongoDB," Wikipedia. [Online]. Available: https://en.wikipedia.org/wiki/mongodb. [Accessed: 17-May-2016].
[4] "The great PHP MVC Framework Showdown of 2016 (CakePHP 3 vs Symfony 2 vs Laravel 5 vs Zend 2)," zen of coding. [Online]. Available: http://zenofcoding.com/2015/11/16/the-great-php-mvc-framework-showdown-of-2016-cakephp-3-vs-symfony-2-vs-laravel-5-vs-zend-2/. [Accessed: 22-Jun-2016].
[5] "Introduction," Laravel. [Online]. Available: https://laravel.com/docs/4.2/introduction#laravel-philosophy. [Accessed: 20-Jun-2016].
[6] S. Sarkuni, "How I Write SQL, Part 1: Naming Conventions," Launch by Lunch RSS. [Online]. Available: https://launchbylunch.com/posts/2014/feb/16/sql-naming-conventions/ [Accessed: 20-Jun-2016].
[7] A. Paikens and G. Arnicans, "Use of design patterns in PHP-based web application frameworks," Scientific Papers University of Latvia, Computer Science and Information Technologies, vol. 733, pp. 53-71, 2008.
[8] "Why Is Unit Testing Important?," Excella Consulting, 2013. [Online]. Available: https://www.excella.com/insights/why-is-unit-testing-important [Accessed: 20-May-2016].
[9] "Testing," Laravel. [Online]. Available: https://laravel.com/docs/master/testing [Accessed: 22-May-2016].
APPENDIX A
FIGURES
Fig. 1. Current data vs New data
Fig. 2. Current data Schema
Fig. 3. New data Schema
APPENDIX B
TABLES
TABLE I
MAPPING OF THE NEW STRUCTURE TO THE OLD AND NEW SCHEMA
JSON Field Old Schema New Schema
deviceIdent devices.identifier devices.identifier
measurementType Senors.typeID Sensors.sensor_type_id
Series.name Series.name Series.name
Series.visibility Series.visibility Series.visibility
Series.timestamp Series.timestamp Series.timestamp
Series.values.timestamp Coverage_values.timestamp Coverage_values.timestamp
Series.values.app_version –not supported– Coverage_values.app_version
Series.values.locations.longitude Coverage_values.center Coverage_values.center
Series.values.locations.latitude Coverage_values.center Coverage_values.center
Series.values.locations.altitude Coverage_values.alt Coverage_value_location.altitude
Series.values.locations.accuracy Coverage_values.acc Coverage_value_location.accuracy
Series.values.locations.speed Coverage_values.speed Coverage_value_location.speed
Series.values.locations.timestamp –not supported– Coverage_value_location.timestamp
Series.values.cells.measurementType –not supported– Coverage_value_cells.measurement_type
Series.values.cells.cellId Coverage_values.cellID Coverage_value_cells.cell_id
Series.values.cells.lac Coverage_values.lac Coverage_value_cells.lac
Series.values.cells.networkType Coverage_values.netwokType Coverage_value_cells.network_type
Series.values.cells.networkProvider Coverage_values.networkProvider Coverage_value_cells.network_provider
Series.values.cells.signalStrengthDB Coverage_values.signalstrengthdb Coverage_value_cells.signal_strength_db
Series.values.cells.isActive –not supported– Coverage_value_cells.is_active
Series.values.cells.updateTimestamp –not supported– Coverage_value_cells.update_timestamp
Series.values.ping.timeStart –not supported– Coverage_value_pings.time_start
Series.values.ping.timeEnd –not supported– Coverage_value_pings.time_end
Series.values.ping.remoteServer –not supported– Coverage_value_pings.remote_server
Series.values.ping.samples.sample –not supported– Ping_samples.sample
Series.values.ping.samples.timestamp –not supported– Ping_samples.timestamp
Series.values.ping.receivedPingCount –not supported– Coverage_value_pings.received_ping_count
Series.values.ping.pingCount Coverage_values_ping.pingCount Coverage_value_pings.ping_count
Series.values.throughput.direction –not supported– Coverage_value_throughput.direction
Series.values.throughput.benchmarkType –not supported– Coverage_value_throughput.benchmark_type
Series.values.throughput.remoteServer –not supported– Coverage_value_throughput.remote_server
Series.values.throughput.timeStart –not supported– Coverage_value_throughput.time_start
Series.values.throughput.timeEnd –not supported– Coverage_value_throughput.time_end
Series.values.throughput.errorCode –not supported– Coverage_value_throughput.error_code
Series.values.throughput.samples.sample –not supported– Throughput_samples.sample
Series.values.throughput.sample.timestamp –not supported– Throughput_samples.timestamp
Series.values.wifi.signalStrength Coverage_values_wifi.signalStrength Coverage_value_wifi.signal_strength
Series.values.wifi.ssid Coverage_values_wifi.ssid Coverage_value_wifi.ssid
Series.values.wifi.bssid Coverage_values_wifi.bssid Coverage_value_wifi.bssid
Series.values.wifi.capabilities Coverage_values_wifi.capabilities Coverage_value_wifi.capabilities
Series.values.wifi.frequency Coverage_values_wifi.frequency Coverage_value_wifi.frequency
Series.values.wifi.level Coverage_values_wifi.level Coverage_value_wifi.level
Series.values.wifi.isActive –not supported– Coverage_value_wifi.is_active
Series.values.wifi.updateTimestamp –not supported– Coverage_value_wifi.update_timestamp
Series.values.tags.key Tag_keys.name Tags.name
Series.values.tags.value Tags.value Tags.value

More Related Content

What's hot

Online Datastage training
Online Datastage trainingOnline Datastage training
Online Datastage trainingchpriyaa1
 
10 Steps Optimize Share Point Performance
10 Steps Optimize Share Point Performance10 Steps Optimize Share Point Performance
10 Steps Optimize Share Point PerformanceChristopher Bunn
 
The Database Environment Chapter 13
The Database Environment Chapter 13The Database Environment Chapter 13
The Database Environment Chapter 13Jeanie Arnoco
 
Sql server 2008 r2 performance and scale
Sql server 2008 r2 performance and scaleSql server 2008 r2 performance and scale
Sql server 2008 r2 performance and scaleKlaudiia Jacome
 
Strongly Consistent Global Indexes for Apache Phoenix
Strongly Consistent Global Indexes for Apache PhoenixStrongly Consistent Global Indexes for Apache Phoenix
Strongly Consistent Global Indexes for Apache PhoenixYugabyteDB
 
Whitepaper tableau for-the-enterprise-0
Whitepaper tableau for-the-enterprise-0Whitepaper tableau for-the-enterprise-0
Whitepaper tableau for-the-enterprise-0alok khobragade
 
Data stage interview questions and answers|DataStage FAQS
Data stage interview questions and answers|DataStage FAQSData stage interview questions and answers|DataStage FAQS
Data stage interview questions and answers|DataStage FAQSBigClasses.com
 

What's hot (9)

Online Datastage training
Online Datastage trainingOnline Datastage training
Online Datastage training
 
10 Steps Optimize Share Point Performance
10 Steps Optimize Share Point Performance10 Steps Optimize Share Point Performance
10 Steps Optimize Share Point Performance
 
The Database Environment Chapter 13
The Database Environment Chapter 13The Database Environment Chapter 13
The Database Environment Chapter 13
 
Olap
OlapOlap
Olap
 
Integrating SSRS with SharePoint
Integrating SSRS with SharePointIntegrating SSRS with SharePoint
Integrating SSRS with SharePoint
 
Sql server 2008 r2 performance and scale
Sql server 2008 r2 performance and scaleSql server 2008 r2 performance and scale
Sql server 2008 r2 performance and scale
 
Strongly Consistent Global Indexes for Apache Phoenix
Strongly Consistent Global Indexes for Apache PhoenixStrongly Consistent Global Indexes for Apache Phoenix
Strongly Consistent Global Indexes for Apache Phoenix
 
Whitepaper tableau for-the-enterprise-0
Whitepaper tableau for-the-enterprise-0Whitepaper tableau for-the-enterprise-0
Whitepaper tableau for-the-enterprise-0
 
Data stage interview questions and answers|DataStage FAQS
Data stage interview questions and answers|DataStage FAQSData stage interview questions and answers|DataStage FAQS
Data stage interview questions and answers|DataStage FAQS
 

Similar to Accurate Networks Measurements Environment

Improvement from proof of concept into the production environment cater for...
Improvement from proof of concept into the production environment   cater for...Improvement from proof of concept into the production environment   cater for...
Improvement from proof of concept into the production environment cater for...Conference Papers
 
Database project edi
Database project ediDatabase project edi
Database project ediRey Jefferson
 
Aucfanlab Datalake - Big Data Management Platform -
Aucfanlab Datalake - Big Data Management Platform -Aucfanlab Datalake - Big Data Management Platform -
Aucfanlab Datalake - Big Data Management Platform -Aucfan
 
Orca: A Modular Query Optimizer Architecture for Big Data
Orca: A Modular Query Optimizer Architecture for Big DataOrca: A Modular Query Optimizer Architecture for Big Data
Orca: A Modular Query Optimizer Architecture for Big DataEMC
 
Java Abs Dynamic Server Replication
Java Abs   Dynamic Server ReplicationJava Abs   Dynamic Server Replication
Java Abs Dynamic Server Replicationncct
 
Sybase IQ ile Analitik Platform
Sybase IQ ile Analitik PlatformSybase IQ ile Analitik Platform
Sybase IQ ile Analitik PlatformSybase Türkiye
 
Building an analytical platform
Building an analytical platformBuilding an analytical platform
Building an analytical platformDavid Walker
 
Database Integrated Analytics using R InitialExperiences wi
Database Integrated Analytics using R InitialExperiences wiDatabase Integrated Analytics using R InitialExperiences wi
Database Integrated Analytics using R InitialExperiences wiOllieShoresna
 
2017 IEEE Projects 2017 For Cse ( Trichy, Chennai )
2017 IEEE Projects 2017 For Cse ( Trichy, Chennai )2017 IEEE Projects 2017 For Cse ( Trichy, Chennai )
2017 IEEE Projects 2017 For Cse ( Trichy, Chennai )SBGC
 
IEEE 2014 JAVA CLOUD COMPUTING PROJECTS Performance and cost evaluation of an...
IEEE 2014 JAVA CLOUD COMPUTING PROJECTS Performance and cost evaluation of an...IEEE 2014 JAVA CLOUD COMPUTING PROJECTS Performance and cost evaluation of an...
IEEE 2014 JAVA CLOUD COMPUTING PROJECTS Performance and cost evaluation of an...IEEEGLOBALSOFTSTUDENTPROJECTS
 
2014 IEEE JAVA CLOUD COMPUTING PROJECT Performance and cost evaluation of an ...
2014 IEEE JAVA CLOUD COMPUTING PROJECT Performance and cost evaluation of an ...2014 IEEE JAVA CLOUD COMPUTING PROJECT Performance and cost evaluation of an ...
2014 IEEE JAVA CLOUD COMPUTING PROJECT Performance and cost evaluation of an ...IEEEFINALSEMSTUDENTPROJECTS
 
Tuning database performance
Tuning database performanceTuning database performance
Tuning database performanceBinay Acharya
 
Data Partitioning in Mongo DB with Cloud
Data Partitioning in Mongo DB with CloudData Partitioning in Mongo DB with Cloud
Data Partitioning in Mongo DB with CloudIJAAS Team
 
Amplitude wave architecture - Test
Amplitude wave architecture - TestAmplitude wave architecture - Test
Amplitude wave architecture - TestKiran Naiga
 
SPARJA: a Distributed Social Graph Partitioning and Replication Middleware
SPARJA: a Distributed Social Graph Partitioning and Replication MiddlewareSPARJA: a Distributed Social Graph Partitioning and Replication Middleware
SPARJA: a Distributed Social Graph Partitioning and Replication MiddlewareMaria Stylianou
 
IRJET- ALPYNE - A Grid Computing Framework
IRJET- ALPYNE - A Grid Computing FrameworkIRJET- ALPYNE - A Grid Computing Framework
IRJET- ALPYNE - A Grid Computing FrameworkIRJET Journal
 

Similar to Accurate Networks Measurements Environment (20)

Final paper
Final paperFinal paper
Final paper
 
rscript_paper-1
rscript_paper-1rscript_paper-1
rscript_paper-1
 
Improvement from proof of concept into the production environment cater for...
Improvement from proof of concept into the production environment   cater for...Improvement from proof of concept into the production environment   cater for...
Improvement from proof of concept into the production environment cater for...
 
Database project edi
Database project ediDatabase project edi
Database project edi
 
Aucfanlab Datalake - Big Data Management Platform -
Aucfanlab Datalake - Big Data Management Platform -Aucfanlab Datalake - Big Data Management Platform -
Aucfanlab Datalake - Big Data Management Platform -
 
Database project
Database projectDatabase project
Database project
 
Orca: A Modular Query Optimizer Architecture for Big Data
Orca: A Modular Query Optimizer Architecture for Big DataOrca: A Modular Query Optimizer Architecture for Big Data
Orca: A Modular Query Optimizer Architecture for Big Data
 
Java Abs Dynamic Server Replication
Java Abs   Dynamic Server ReplicationJava Abs   Dynamic Server Replication
Java Abs Dynamic Server Replication
 
Sybase IQ ile Analitik Platform
Sybase IQ ile Analitik PlatformSybase IQ ile Analitik Platform
Sybase IQ ile Analitik Platform
 
Building an analytical platform
Building an analytical platformBuilding an analytical platform
Building an analytical platform
 
Job portal
Job portalJob portal
Job portal
 
Database Integrated Analytics using R InitialExperiences wi
Database Integrated Analytics using R InitialExperiences wiDatabase Integrated Analytics using R InitialExperiences wi
Database Integrated Analytics using R InitialExperiences wi
 
2017 IEEE Projects 2017 For Cse ( Trichy, Chennai )
2017 IEEE Projects 2017 For Cse ( Trichy, Chennai )2017 IEEE Projects 2017 For Cse ( Trichy, Chennai )
2017 IEEE Projects 2017 For Cse ( Trichy, Chennai )
 
IEEE 2014 JAVA CLOUD COMPUTING PROJECTS Performance and cost evaluation of an...
IEEE 2014 JAVA CLOUD COMPUTING PROJECTS Performance and cost evaluation of an...IEEE 2014 JAVA CLOUD COMPUTING PROJECTS Performance and cost evaluation of an...
IEEE 2014 JAVA CLOUD COMPUTING PROJECTS Performance and cost evaluation of an...
 
2014 IEEE JAVA CLOUD COMPUTING PROJECT Performance and cost evaluation of an ...
2014 IEEE JAVA CLOUD COMPUTING PROJECT Performance and cost evaluation of an ...2014 IEEE JAVA CLOUD COMPUTING PROJECT Performance and cost evaluation of an ...
2014 IEEE JAVA CLOUD COMPUTING PROJECT Performance and cost evaluation of an ...
 
Tuning database performance
Tuning database performanceTuning database performance
Tuning database performance
 
Data Partitioning in Mongo DB with Cloud
Data Partitioning in Mongo DB with CloudData Partitioning in Mongo DB with Cloud
Data Partitioning in Mongo DB with Cloud
 
Amplitude wave architecture - Test
Amplitude wave architecture - TestAmplitude wave architecture - Test
Amplitude wave architecture - Test
 
SPARJA: a Distributed Social Graph Partitioning and Replication Middleware
SPARJA: a Distributed Social Graph Partitioning and Replication MiddlewareSPARJA: a Distributed Social Graph Partitioning and Replication Middleware
SPARJA: a Distributed Social Graph Partitioning and Replication Middleware
 
IRJET- ALPYNE - A Grid Computing Framework
IRJET- ALPYNE - A Grid Computing FrameworkIRJET- ALPYNE - A Grid Computing Framework
IRJET- ALPYNE - A Grid Computing Framework
 

Recently uploaded

UNIT-III FMM. DIMENSIONAL ANALYSIS
UNIT-III FMM.        DIMENSIONAL ANALYSISUNIT-III FMM.        DIMENSIONAL ANALYSIS
UNIT-III FMM. DIMENSIONAL ANALYSISrknatarajan
 
UNIT-V FMM.HYDRAULIC TURBINE - Construction and working
UNIT-V FMM.HYDRAULIC TURBINE - Construction and workingUNIT-V FMM.HYDRAULIC TURBINE - Construction and working
UNIT-V FMM.HYDRAULIC TURBINE - Construction and workingrknatarajan
 
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVHARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVRajaP95
 
Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...
Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...
Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...Christo Ananth
 
Booking open Available Pune Call Girls Koregaon Park 6297143586 Call Hot Ind...
Booking open Available Pune Call Girls Koregaon Park  6297143586 Call Hot Ind...Booking open Available Pune Call Girls Koregaon Park  6297143586 Call Hot Ind...
Booking open Available Pune Call Girls Koregaon Park 6297143586 Call Hot Ind...Call Girls in Nagpur High Profile
 
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...Dr.Costas Sachpazis
 
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur High Profile
 
Software Development Life Cycle By Team Orange (Dept. of Pharmacy)
Software Development Life Cycle By  Team Orange (Dept. of Pharmacy)Software Development Life Cycle By  Team Orange (Dept. of Pharmacy)
Software Development Life Cycle By Team Orange (Dept. of Pharmacy)Suman Mia
 
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130Suhani Kapoor
 
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...Dr.Costas Sachpazis
 
SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )Tsuyoshi Horigome
 
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...ranjana rawat
 
Microscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptxMicroscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptxpurnimasatapathy1234
 
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur Escorts
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur Suman Call 7001035870 Meet With Nagpur Escorts
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur High Profile
 
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur EscortsCall Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur High Profile
 
the ladakh protest in leh ladakh 2024 sonam wangchuk.pptx
the ladakh protest in leh ladakh 2024 sonam wangchuk.pptxthe ladakh protest in leh ladakh 2024 sonam wangchuk.pptx
the ladakh protest in leh ladakh 2024 sonam wangchuk.pptxhumanexperienceaaa
 
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCollege Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCall Girls in Nagpur High Profile
 

Recently uploaded (20)

UNIT-III FMM. DIMENSIONAL ANALYSIS
UNIT-III FMM.        DIMENSIONAL ANALYSISUNIT-III FMM.        DIMENSIONAL ANALYSIS
UNIT-III FMM. DIMENSIONAL ANALYSIS
 
UNIT-V FMM.HYDRAULIC TURBINE - Construction and working
UNIT-V FMM.HYDRAULIC TURBINE - Construction and workingUNIT-V FMM.HYDRAULIC TURBINE - Construction and working
UNIT-V FMM.HYDRAULIC TURBINE - Construction and working
 
DJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINE
DJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINEDJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINE
DJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINE
 
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVHARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
 
Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...
Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...
Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...
 
9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf
9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf
9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf
 
Booking open Available Pune Call Girls Koregaon Park 6297143586 Call Hot Ind...
Booking open Available Pune Call Girls Koregaon Park  6297143586 Call Hot Ind...Booking open Available Pune Call Girls Koregaon Park  6297143586 Call Hot Ind...
Booking open Available Pune Call Girls Koregaon Park 6297143586 Call Hot Ind...
 
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
 
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
 
★ CALL US 9953330565 ( HOT Young Call Girls In Badarpur delhi NCR
★ CALL US 9953330565 ( HOT Young Call Girls In Badarpur delhi NCR★ CALL US 9953330565 ( HOT Young Call Girls In Badarpur delhi NCR
★ CALL US 9953330565 ( HOT Young Call Girls In Badarpur delhi NCR
 
Software Development Life Cycle By Team Orange (Dept. of Pharmacy)
Software Development Life Cycle By  Team Orange (Dept. of Pharmacy)Software Development Life Cycle By  Team Orange (Dept. of Pharmacy)
Software Development Life Cycle By Team Orange (Dept. of Pharmacy)
 
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
 
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
 
SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )
 
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
 
Microscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptxMicroscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptx
 
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur Escorts
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur Suman Call 7001035870 Meet With Nagpur Escorts
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur Escorts
 
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur EscortsCall Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
 
the ladakh protest in leh ladakh 2024 sonam wangchuk.pptx
the ladakh protest in leh ladakh 2024 sonam wangchuk.pptxthe ladakh protest in leh ladakh 2024 sonam wangchuk.pptx
the ladakh protest in leh ladakh 2024 sonam wangchuk.pptx
 
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCollege Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
 

Accurate Networks Measurements Environment

  • 1. 1 Accurate Network Measurement Environment Eid Araache, Feras Tanan and Ousama Esbel Abstract—Studying network performance is vital to provide better service and equality to consumers where da_sense system is a platform that is used to collect network information to measure and study them further. In this paper, improvements to da_sense system are made where we will present the approach and the implemented API to attain the new structure of the coverage points of the da_sense system. The API authenticates, validates and processes the request without affecting the current system, yet it should be capable to process the newly structured JSON request. In addition, the system should daily convert the newly structured data schema to the aggregated data for consistency with the current system’s API and the network map visualization. Index Terms—Coverage Points; API; da_sense; PHP Laravel; Postgres I. MOTIVATION NOWADAYS, consumers expect a faster and more reliable mobile experience. The carriers are in a continuous race to add network capacity, introduce new technologies and expand their coverage areas in order to meet growing consumer expectations and offer a better mobile experience to their subscribers. Therefore, network performance monitoring is key to identify the behavior of the network and how to optimize the system’s performance and that is one of da_sense functionality. More in depth, the coverage points uploaded and stored in the da_sense system treat each location separately from other related location, though they share the same coverage area which will cause performance bottlenecks as it stores unnecessary data. In addition, da_sense does not take into consideration the upload throughput speed as well as it was only possible to transmit a subset of the collected data to the server. On the other hand, the current system API is not built in a solid documented platform where system complication and authentication is matter of concern. II. INTRODUCTION da_sense is a platform that collects sensor data from various sources in high quantity and quality as the coverage measure- ments are collected from different cellular network providers in Germany. These data are provided to other platform by utilizing its API. The data collected comes from: • Fixed Sensor Infrastructure - stationary wired sensors. • Wireless Sensor Networks - environmental sensors places in trams. • Participatory sensing - mobile phones. In other words, da_sense mission is to process and capture various types of data such as temperature and humidity, noise and network coverage and provide it for a larger audience. Technically, da_sense is hosted in LAPP stack server that consists of Linux, Apache2, PostgreSQL and PHP. Moreover, System logic and backend is developed in Vanilla PHP 5.3.*. The data is stored in PostgreSQL database that consists of multiple schemas. Periodically, the system runs scripts that filter the collected raw data into processed data that can be used for visualizing. In this paper our focus is on network coverage data that has been specifically gathered from participatory sensors where first we will talk about the challenges and options we had that eventually led to our implementation then the implementation will be discussed to cover the following aspects: • Database Remodeling - that is compatible with the new JSON structure. • Platform - Create an API that is compatible with new Model. • Testing - How the API has been tested. 
schema that has been used in the MoNa server [1].

III. SYSTEM DESIGN AND ALTERNATIVES

In this section we enumerate the limitations of the current system and the possible solutions that can be applied.

A. Platform

As mentioned earlier, the goal of this paper is to improve the structure of the coverage network data and make it more robust. One option is therefore to modify the current API so that it adapts to the new structure. However, the current API is too complicated: it is written in plain PHP with no framework or documented structure, and the whole API is based on a single endpoint that runs multiple nested if/else expressions to map the request parameters to the right directory, class and function to process the request (a simplified, hypothetical sketch of this dispatch style is given at the end of this subsection). In addition, to upload data a requester has to perform the following steps:

• Send a login request.
• Send the data.
• Send a logout request (optional).

Three requests are made to upload a single file, which consumes time and resources. With that being said, creating a new API is considered inevitable in order to solve these issues at the root, so we had the choice of either sticking with PHP or using a lighter, faster runtime such as Node.js. The advantage of using PHP is that it allows us to maintain the existing server stack without adding further packages, and PHP integrates well with SQL databases, whereas Node.js pairs more naturally with NoSQL stores [2] since it is built around JSON.
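To make the described limitation concrete, the following is a hypothetical, simplified sketch of the single-endpoint dispatch style mentioned above. It is illustrative only: the parameter names, branches and handler comments are assumptions and are not taken from the da_sense code base.

<?php
// Hypothetical single-endpoint dispatcher (illustrative only).
// All requests hit this one script; nested conditionals decide what to do.
$action = isset($_POST['action']) ? $_POST['action'] : null;
$type   = isset($_POST['type'])   ? $_POST['type']   : null;

if ($action === 'login') {
    // ... start a session for the requester ...
} elseif ($action === 'upload') {
    if ($type === 'coverage') {
        // ... include the coverage handler class and store the payload ...
    } elseif ($type === 'noise') {
        // ... yet another nested branch per data type ...
    }
} elseif ($action === 'logout') {
    // ... tear down the session ...
}

Every new data type or operation adds another nested branch to this one script, which is what makes the existing endpoint hard to document, test and extend.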
B. Schema

Changing the structure of the uploaded file requires us to modify the schema to accommodate the changes. Extending the current schema would most likely break the current API, and since the scope of this lab covers only the endpoints for coverage points, the old API would still need to be maintained. Creating a new schema that only stores the new coverage point values, on the other hand, lets us focus on deploying the new API without worrying about backward compatibility: the current API is not affected by the changes, and the existing filtering job can continue to perform its tasks without modification.

C. Data Storage

The uploaded data is in JSON format, and its structure is not consistent and tends to change, which would require modifying the schema regularly. Therefore, it might be beneficial to switch the data storage to a NoSQL system such as MongoDB. MongoDB is a document-oriented database designed to store JSON-like documents [3]. Each document can have a different structure from the previous one, which makes it a suitable solution to both of the mentioned issues. However, a new issue would arise because the filtered data is stored in a PostgreSQL schema. As a result, the new API would be forced to connect to multiple databases, which is a significant bottleneck for the system, not to mention the need to install an additional stack on the server to support MongoDB.

D. Chosen Approach

After studying the options at hand, it was found most efficient to use PHP as the base platform in order to maintain the server stack and keep both the current and the new platform in the same language for ease of development. However, the new platform's functionality is extended by using the Laravel framework. Laravel is a powerful and secure MVC framework with expressive, elegant syntax that is well structured and easy to use [5]. It has many convenient built-in components that suit our goals, such as a built-in authentication system that can be customized to adapt to any structure (so there is no need to re-register the users), API functionality, request validation and unit testing. The reason Laravel is a better choice than frameworks such as Yii, Slim and CakePHP is that Laravel can extend its core functionality as far as the project requires, and it also has clean and simple routing. Laravel also has an active and growing community and has become one of the most popular PHP frameworks [4]. Finally, Laravel supports unit testing out of the box, which makes it easier to write test cases.

Regarding schema and data storage, PostgreSQL is used because the processed schema is already hosted in PostgreSQL. However, instead of extending and modifying the existing schema, we create a new schema that contains only the modified tables needed to accommodate the structure changes (a minimal sketch of a dedicated database connection for such a schema is given at the end of this subsection). As for filtering, the current filter job is not affected by the chosen approach and can still run its aggregation on the data collected through the current API. In addition, the chosen approach benefits the filtering because Laravel has a built-in scheduler that allows the application to run administrative tasks periodically on the server without touching the server's configuration.
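The following is a minimal configuration sketch of how a dedicated PostgreSQL schema could be exposed to the new platform through a second database connection. The connection name coverage_points, the schema name and the environment keys are assumptions for illustration, and the 'schema' option shown applies to Laravel 5.x (newer releases use 'search_path').

<?php
// config/database.php (excerpt) -- a minimal sketch, assuming a dedicated
// "coverage_points" connection alongside the default one.
return [
    'default' => env('DB_CONNECTION', 'pgsql'),

    'connections' => [
        'pgsql' => [
            'driver'   => 'pgsql',
            'host'     => env('DB_HOST', 'localhost'),
            'database' => env('DB_DATABASE', 'dasense'),
            'username' => env('DB_USERNAME', ''),
            'password' => env('DB_PASSWORD', ''),
            'charset'  => 'utf8',
            'schema'   => 'public',
        ],

        // Separate connection that targets only the new coverage-point
        // schema, so the existing schemas and the current API stay untouched.
        'coverage_points' => [
            'driver'   => 'pgsql',
            'host'     => env('DB_HOST', 'localhost'),
            'database' => env('DB_DATABASE', 'dasense'),
            'username' => env('DB_USERNAME', ''),
            'password' => env('DB_PASSWORD', ''),
            'charset'  => 'utf8',
            'schema'   => 'coverage_points',
        ],
    ],
];

An Eloquent model could then opt into the dedicated schema by setting its protected $connection property to 'coverage_points'.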
IV. IMPLEMENTATION

The API development is divided into two parts: remodeling the new schema and developing the platform.

A. Remodeling Schema

In order to understand the changes that must be made, the currently uploaded data has to be compared with the newly structured data. Figure 1 shows the current data next to the new attributes. As can be seen in Figure 1, the updated structure decouples locations and cells from the values, so each value can carry multiple sets of different data. The updated structure also adds extra attributes such as throughput and alters the structure of existing ones such as ping.

1) Entity Relation Diagrams: The previous data sets are stored and maintained in the data schema, whose structure and attributes are illustrated in Figure 2. The new upload format changes the series table and every table related to it. Figure 3 shows the new schema after applying the changes. The tables labeled in black are the tables affected by the changes, the white tables have only slight changes, and the grey tables keep their previous structure and are not affected. The white and grey tables are not added to the new schema for the time being because they are populated by API endpoints that are outside the scope of this lab; however, their migrations have already been written to support future work.

2) Attributes Mapping: Table I maps the new structure's attribute fields to the old schema and the new schema.

B. Platform

To solve the mentioned issues at the root and adapt to the new structure, we decided to create a new API using PHP with the Laravel framework, as mentioned before. In this section, the deployment and the main documentation points are discussed as follows:

• Migrations, seeds and models for the new schema.
• API authentication.
• Routes, controllers and managers.
• JSON validation.
• HTTP request format.
• Logging requests and responses.

1) Migrations: Laravel provides expressive migrations that act like version control for the database. The migrations for the updated schema can be found in CoveragePoints->database->migrations. The migration classes cover all tables marked in black, white and grey in Figure 3. However, for this lab only the tables marked in black
will be created, as not all API endpoints have been added to the API yet. In the next increments, as more API endpoints are added, their related migrations will be created as well. Here is a sample migration that creates the coverage_value_cells table:

public function up()
{
    Schema::create('coverage_value_cells', function (Blueprint $table) {
        $table->increments('id');
        $table->integer('sensor_type_id')->unsigned();
        $table->integer('cell_id')->unsigned();
        $table->integer('lac')->unsigned();
        $table->string('network_type', 32);
        $table->string('network_provider', 32);
        $table->integer('asu');
        $table->integer('signal_strength_db');
        $table->boolean('is_active');
        $table->timestamp('update_timestamp');
        $table->integer('coverage_value_id')->unsigned();
        $table->foreign('coverage_value_id')
              ->references('id')->on('coverage_values');
        $table->foreign('sensor_type_id')
              ->references('id')->on('sensor_types');
    });
}

The snippet above shows the default syntax used for all tables; the only differences are the number and formats of the columns in each table and the relationships among them. The main points to consider:

• Schema is the class for the database schema; here it creates a table named coverage_value_cells.
• The increments function makes id a primary, auto-incrementing key.
• The unsigned method keeps an integer column positive; it should be set on the columns that will hold a foreign relationship to other tables.
• coverage_value_id and sensor_type_id are both foreign keys that reference the id of coverage_values and sensor_types respectively.

The database format and syntax follow the standard SQL conventions [6], which are:

• Snake case for column names.
• A foreign key column follows the pattern nameOfTheRelatedTable_referencedColumn (for example, coverage_value_id).

In order to run the migrations and create the tables, run the following in the terminal of the project:

php artisan migrate

2) Tables Seeding: Some tables, such as sensor_types and device_types, hold pre-populated data, so to prepare the schema the seeds need to be populated on launch by running the following command in the project's directory terminal:

php artisan db:seed

The seeders can be found in CoveragePoints->database->seeds.

3) Models: Each migration should have a corresponding model that is used along with the Eloquent ORM to perform create, read, update and delete (CRUD) operations on the tables. The models are available in CoveragePoints->app->API->Models. Each model should define the following:

• Fillable fields - fields that are inserted by the user.
• Casts - cast some fields to Boolean or Timestamp.
• Relationships - define relations with other tables.

Based on Figure 3, the series table has a one-to-many relationship with coverage_values. This can be defined in the CoverageValue model as follows:

public function series()
{
    return $this->belongsTo(Series::class);
}

while in the Series model:

public function coverageValues()
{
    return $this->hasMany(CoverageValue::class);
}

Naming conventions are crucial in Laravel: if the foreign key column does not follow the convention derived from the table name, a second and a third argument must be passed to the hasMany method.

4) API Authentication: The current API authenticates with a normal login and logout mechanism, as explained previously. In the new API, an authorization token is sent with the HTTP headers of every request. The header format is:

Authorization: username:password_md5:password_sha

The reason behind this format is that many users in the system still only have the old MD5 password hash, which cannot be neglected (a small client-side sketch of building this header follows below).
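As a minimal client-side illustration, the sketch below builds the Authorization value in PHP and attaches it to a request. The use of md5() and sha1() for the two hash fields, the credentials and the endpoint URL are assumptions for illustration; the exact hash variant stored in password_sha is not specified here.

<?php
// Minimal client-side sketch (illustrative only): build the Authorization
// header described above and send a JSON request with it.
$username = 'alice';
$password = 'secret';

$header = sprintf('Authorization: %s:%s:%s',
    $username, md5($password), sha1($password));   // sha1() is an assumption

$context = stream_context_create([
    'http' => [
        'method'  => 'POST',
        'header'  => $header . "\r\n" .
                     "Content-Type: application/json\r\n" .
                     "Accept: application/json\r\n",
        // Payload kept trivial here; see the request-format example below.
        'content' => json_encode(['deviceIdent' => 'example-device']),
    ],
]);

// Hypothetical endpoint URL; the route itself is described in the next section.
$response = file_get_contents('https://example.org/api/v2/coverage-value', false, $context);
echo $response;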
The authentication is verified in CoveragePoints->app->Http->Middleware->BasicAuthentication.php. The following code snippet demonstrates the main logic behind the authentication.

public function handle($request, Closure $next)
{
    $authorization = $request->header('Authorization');

    // Check that the Authorization header is present.
    if (is_null($authorization))
        return response()->json(['success' => false,
            'message' => 'Invalid credentials'], 401);

    $creds = explode(':', $authorization);
    if (count($creds) != 3)
        return response()->json(['success' => false,
            'message' => 'Invalid format'], 401);

    $user = User::whereUsername($creds[0])->first();
    if (is_null($user))
        return response()->json(['success' => false,
            'message' => 'Invalid username'], 401);
    if ($creds[1] == $user->password_md5 || $creds[2] == $user->password_sha) {
        // Add the authenticated user to the request.
        $request->attributes->add(['user' => $user]);
        return $next($request, $user);
    }

    return response()->json(['success' => false,
        'message' => 'Invalid credentials'], 401);
}

BasicAuthentication does the following:

• Get the Authorization value from the request header.
• If the Authorization header is empty, send a 401 response with a false success JSON body.
• Otherwise, split the value into three pieces: username, MD5 password and SHA password respectively.
• Look up the user with that username; if none is found, send a 401 status code.
• If the user is found, check both passwords against their related fields.
• If either matches, proceed to the next step; otherwise, send a 401 status code with an invalid-credentials JSON response.

5) Routes, Controllers and Managers: After the authentication has been verified and accepted, the request is sent to the corresponding route. All API routes are wrapped within the api group that applies the API group middleware.

Route::group(['middleware' => 'api', 'prefix' => 'api/v2'], function () {
    Route::post('/coverage-value', 'CoverageValuesController@store');
});

The api group middleware consists of:

• BasicAuthentication - class to verify the request authentication.
• LoggingRequest - class to log the request and response (more about it later).

After the route has been matched, the request is sent to the corresponding controller@method. For example, from the previous snippet the route /api/v2/coverage-value is directed to the store method of the CoverageValuesController. The controller method must receive the responsible manager through dependency injection in order to follow the dependency inversion (D) principle of SOLID [7]. The manager then creates, updates, shows or deletes the record. In our case we are uploading a coverage point, so the controller injects the CoverageValueManager and creates the record as shown:

public function store(CoverageValueRequest $request, CoverageValueManager $manager)
{
    $manager->create($request);

    return response()->json([
        'success' => $manager->isSuccessful(),
        'message' => $manager->getErrorMessage(),
    ]);
}

Every manager extends the abstract class APIManager, which contains the main API functionalities such as:

• CRUD abstract functions.
• Response abstract function.
• Setting success and error messages.

6) Request - JSON Validation: Request validation is initiated when the request is passed to the controller; the controller's method injects the validation class through its parameters, again following the dependency inversion principle, as shown in the code snippet above. In general, the request validation classes are available in CoveragePoints->app->Http->Requests, where all request classes extend the Request class. For coverage points, the uploaded JSON is directed to CoverageValueRequest, where it is checked against the following:

• Whether the device identity sent is stored in the devices table.
• Whether the measurement type is stored in the sensor_types table.
• The format of the JSON.

The format of the JSON is verified using Laravel's built-in validation, where fields are checked for their type, presence and range.

7) HTTP Request Format: To successfully send a request to the API, the header should consist of the following:

• Authorization: username:password_md5:password_sha
• Content-Type: application/json
• Accept: application/json

The JSON data should be appended to the HTTP request as raw POST data; an illustrative request body assembled from the field names in Table I is sketched below.
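The authoritative upload format is whatever CoverageValueRequest validates; purely as an illustration, a request body built from the field names listed in Table I might look like the following. The nesting is inferred from the dotted field paths in Table I, and every value (device identifier, measurement type name, timestamps, readings) is made up.

<?php
// Illustrative request body only -- nesting and values are assumptions.
$body = json_encode([
    'deviceIdent'     => 'example-device-001',
    'measurementType' => 'coverage',            // assumed type name
    'series' => [
        'name'       => 'example series',
        'visibility' => true,
        'timestamp'  => '2016-06-01 12:00:00',
        'values' => [[
            'timestamp'   => '2016-06-01 12:00:05',
            'app_version' => '1.0',
            'locations' => [[
                'longitude' => 8.65, 'latitude' => 49.87, 'altitude' => 110,
                'accuracy'  => 5.0,  'speed'    => 1.2,
                'timestamp' => '2016-06-01 12:00:04',
            ]],
            'cells' => [[
                'measurementType'  => 'asu', 'cellId' => 12345, 'lac' => 678,
                'networkType'      => 'LTE', 'networkProvider' => 'ExampleNet',
                'signalStrengthDB' => -95,   'isActive' => true,
                'updateTimestamp'  => '2016-06-01 12:00:05',
            ]],
        ]],
    ],
]);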
The response to the request is in JSON format with a success attribute that indicates the status of the request.

8) Logging Requests and Responses: Inevitably, any API requires continuous monitoring of the requests made to its endpoints. One way to achieve this is to log each API request and response so that important information about the endpoints can be captured and analyzed. As mentioned before, the api group middleware contains a LoggingRequest class that is triggered after the response has been sent to the requester and the call has been terminated (how such a group can be registered is sketched after the snippet below). The logs are stored in daily log files in CoveragePoints->Storage->api->logs, where the parameters of failed requests are recorded together with the response generated by the controller and manager. Here is the code snippet:

public function terminate($request, $response)
{
    // Log the failure responses only.
    $responseArray = json_decode($response->getContent(), true);

    if (!$responseArray['success']) {
        // Create daily log files in the specified path.
        Log::useDailyFiles(storage_path() . '/api/logs/results.log');

        // Store the request and the response.
        Log::info(['request' => $request->all(), 'response' => $response]);
    }
}
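For completeness, a middleware terminate method only runs if the middleware is registered in the corresponding group. A minimal sketch of how the api group described above might be registered is shown below; the class names and namespaces are assumptions derived from the directory layout mentioned earlier, not the actual da_sense code.

// app/Http/Kernel.php (excerpt) -- a minimal sketch inside the Kernel class.
protected $middlewareGroups = [
    'api' => [
        \App\Http\Middleware\BasicAuthentication::class,
        \App\Http\Middleware\LoggingRequest::class,
    ],
];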
V. TESTING AND EVALUATION

To determine the robustness and functionality of the API, unit tests have been created. Unit testing is a specialized form of automated testing [8]. Laravel integrates the PHPUnit package out of the box, along with many helper methods that allow the developer to test the application expressively [9]. To better define the desired test behavior, a general abstract class named APITester was created to provide common functionality that the extending test classes inherit. The abstract class and the test classes are located in CoveragePoints->tests. One of the most important attributes in APITester is Faker, which is used to generate plausible fake values for the API requests used in the tests. The test results are stored in a separate database called DaSenseTest, which should hold the same schemas as the development database. For the coverage value endpoint, the following test cases are created:

• Expect status 401 and a JSON error response when sending a request with false credentials.
• Expect status 422 and a JSON error response when sending a request with no cells or Wi-Fi points.
• Expect status 200 and a JSON response with success set to true when pushing cells.
• Expect status 200 and a JSON response with success set to true when pushing Wi-Fi points.
• Expect status 200 and a JSON response with success set to true when pushing throughput along with either Wi-Fi points or cells.

As mentioned earlier, the JSON is generated with very few fixed values. Here is a snippet that generates the location points for the tested JSON:

protected function addLocation()
{
    $howMany = $this->fake->numberBetween(1, 4);

    for ($i = 0; $i < $howMany; $i++) {
        $this->coverageValue["locations"][] = [
            "longitude" => $this->fake->longitude(),
            "latitude"  => $this->fake->latitude(),
            "altitude"  => $this->fake->numberBetween(-100, 100),
            "accuracy"  => $this->fake->randomFloat(2, -100, 100),
            "speed"     => $this->fake->randomFloat(2, -100, 100),
            "timestamp" => $this->fake->dateTimeThisYear()
                                      ->format("Y-m-d H:i:s"),
        ];
    }

    return $this;
}

First, the test randomly determines how many location points to add, then Faker generates realistic data for each of them. In order to run the tests, simply run phpunit in the terminal of the project directory.

VI. FILTERING DATA

As mentioned earlier, the collected coverage points are raw data, and they need to be filtered and aggregated in order to be useful. Therefore, on a daily basis the system runs a script that filters out bad points, aggregates the rest and stores them in a different schema.

A. Overview

The filtering depends heavily on the measurement types; in the current system, four types are taken into consideration: ASU, signal strength, ping and download speed. In addition to the measurement type, the data is filtered based on different sets of network providers and network types. Each coverage point is filtered by network provider, such as Telekom, O2, Vodafone and others, and also by network technology, such as 2G, 3G and 4G. The combination of network provider and network type produces a cluster value that can be used to identify a provider's strengths and weaknesses for each network technology; it is therefore used to distinguish the different datasets. Besides the general filtering, each type has a unique filter to remove bad readings before storing into the aggregated schema. The filters applied to each type are:

• ASU - a reading passes the filter only if the ASU value is smaller than 32 and bigger than -1.
• Signal strength - a reading passes the filter only if the SSID is empty, which filters out Wi-Fi points.
• Ping - a reading passes the filter only if the SSID is empty and five times the minimum ping value is bigger than the maximum ping value.
• Download speed - a reading passes the filter only if the SSID is empty and the download rate is bigger than 0 and smaller than 300K.

In addition, the accuracy must always be valid: it has to be an unsigned integer smaller than 100. After the data has been filtered, it is recorded in the data_processed.data_values_cleaned_for_coverage table, where each type is stored together with exactly one location of the main coverage point.

B. Key Challenges

With the new schema, each coverage point now includes multiple cells, Wi-Fi points, pings, download rates and the newly added upload rate. Furthermore, the device also records the different locations at which the readings occurred, in order to make them more reliable and accurate. This, unfortunately, forms the main challenge, because the data_processed schema is designed to store a single location for each type; therefore, we need to determine an approximate location for each type. Another thing to consider is the new structure of throughput and ping, where each can contain multiple samples, as can be seen in Figure 1. These samples can either be recorded individually, with the location determined by each sample's timestamp, or be averaged per ping/throughput, with timeStart and timeEnd determining the location area of the reading. In addition, two new measurement types must be introduced, one indicating the upload speed and the other the cell_id, and since the filtering is based on network type and provider, this lab only focuses on filtering coverage values with cells.
C. Implementation

Basically, each cell, throughput or ping in its respective array is stored individually. For example, if the cells array has three groups, each group becomes a separate record in the database table. To determine the location of a group, we take into account the first location recorded directly after the group's timestamp and the last location recorded before it; these two locations are then interpolated to produce a single location that is relatively close to the measurement type's reading. Algorithm 1 demonstrates how the interpolation works (a simplified PHP sketch is given at the end of this section).

Algorithm 1: Interpolate two locations
Data: type.timestamp of the measurement type to be stored
Result: a single aggregated dataset consisting of location, speed, accuracy and altitude
1 - Initialization
  1: location_i ← locations().where(timestamp >= type.timestamp).last();
  2: location_f ← locations().where(timestamp <= type.timestamp).first();
2 - Calculate the location's dataset
  3: X ← X_i + (δtime / ∆time) · ∆X   // X can be the speed, geopoint, accuracy or altitude

The general idea of the algorithm is that, after determining the two locations based on the group's timestamp (step 1), the rest of the location's information, such as longitude, latitude, altitude, speed and accuracy, is calculated using a weighted average based on the timestamps (step 2). First, the time difference between the group's timestamp and the location recorded before it, δtime, is divided by the overall time difference between the two locations, ∆time. This ratio is then multiplied by the difference in the data, ∆X, and finally added to location_i's value to give more weight to the closer location. There are certain cases where interpolating locations is not necessary, namely when:

• location_i and location_f both have the same timestamp as the measurement type; then only one location is considered.
• location_i is not found; then location_f is used as the type's location.
• location_f is not found; then location_i is used as the type's location.

For ping and throughput data, each contained sample is stored as a separate record, where the sample's timestamp is used to determine the locations recorded directly before and after the sample for interpolation.

The created job is located in CoveragePoints->app->Http->Console->Commands->FilterData.php. The job can be triggered manually by running the following in the terminal of the project's directory:

php artisan filter:data

The filter job is configured through Laravel, as it provides a suitable way to schedule cron jobs. The cron job is scheduled in CoveragePoints->app->Console->Kernel.php, where it has been configured to run the previous command as follows:

$schedule->command('filter:data')
         ->dailyAt($time)
         ->sendOutputTo(storage_path(CRON_LOG_PATH));

The time can be configured by sending a RESTful PUT request to the API endpoint /api/v2/scheduler/update with raw JSON data containing the time at which the scheduler should run. The result of the filter is stored in the path /Storage/logs/cron_results.log. To recapitulate, filtering the collected coverage values can be executed in three ways:

• In the terminal, by running php artisan filter:data.
• Through the cron job that runs daily at the scheduled time.
• Manually, by sending a RESTful GET request to /api/v2/filter/run with an authorization header that has an administrative role.
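To make the interpolation step of Algorithm 1 concrete, the following is a minimal PHP sketch of the weighted average described above. The function name, the plain-array representation of a location and the field names are assumptions for illustration and do not mirror the actual FilterData implementation.

<?php
// Illustrative sketch of the location interpolation in Algorithm 1.
// $before and $after are the locations recorded directly before and after
// the measurement's timestamp; timestamps are UNIX seconds here.
function interpolateLocation(array $before, array $after, $typeTimestamp)
{
    $deltaTime = $after['timestamp'] - $before['timestamp'];

    // Degenerate case: both locations share the measurement's timestamp,
    // so no interpolation is needed.
    if ($deltaTime == 0) {
        return $before;
    }

    // Weight of the measurement's position between the two locations.
    $weight = ($typeTimestamp - $before['timestamp']) / $deltaTime;

    $interpolated = ['timestamp' => $typeTimestamp];
    foreach (['longitude', 'latitude', 'altitude', 'accuracy', 'speed'] as $field) {
        // X = X_i + (delta_time / Delta_time) * Delta_X
        $interpolated[$field] = $before[$field]
            + $weight * ($after[$field] - $before[$field]);
    }

    return $interpolated;
}

When only one of the two locations exists, the sketch would simply return that location, matching the special cases listed above.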
D. Testing

The filtering can be tested without applying the changes to the database by setting the API_ENV attribute in .env to testing. If you would like to store the filtering test results, simply point DB_DATABASE to the DaSenseTest database without changing the environment to testing; the environment can then be either local or production.

VII. CONCLUSION

Collecting coverage points in da_sense as before is insufficient, so a new structure has to be introduced that enhances the process of collecting and filtering these points. The current API is not capable of processing the newly structured data and would therefore have to be modified; because of its flaws and disadvantages, however, it was found better to develop a new API that adapts to the new structure. The API functionality, such as authentication, validation, processing and logging, has been implemented, with only the coverage point endpoints added to the API so far. Data-driven unit tests have been used to test the API functionality in general and the coverage point endpoints specifically.

The collected coverage points are just a collection of raw data that needs to be filtered and processed in order to produce meaningful information. Every day the system runs a job that processes these data based on measurement types, network providers and network technologies and stores them in another database schema, where they can be used for studying, monitoring and visualizing. Unfortunately, the newly structured data does not fit this schema directly, and therefore the filter job had to be modified to map the new structure to the aggregated schema's fields. The main obstacle was that the aggregated schema accepts only a single location for each record, whereas the new structure has multiple locations
for each coverage point. To tackle this issue, the two locations of the coverage point's type that are nearest to its timestamp are interpolated to produce a single location that is relatively close to it.

VIII. FUTURE WORK

The API is far from ready to replace the current API: the remaining endpoints of the current API should be added to the new API, along with unit tests for each endpoint. Besides that, the API can be hardened by limiting the rate at which any individual requester can make requests, in other words by throttling requesters that hit a particular API endpoint within a short period of time; this helps to mitigate DDoS attacks and keeps the application alive.

Regarding filtering, the current schema in which the aggregated data is stored does not accommodate the new coverage point structure well, and it would therefore be better to design a structure and schema that make the most of the newly modified structure. In addition, coverage values with Wi-Fi points are not filtered, as these values give no indication of which network provider is being used. Therefore, updating the Wi-Fi readings so that the network provider can be determined, and adding them to the filter job, should be considered as a next step.

ACKNOWLEDGMENT

We would like to express our gratitude to our supervisor Fabian Kaup. His guidance and dedicated involvement in every step of this lab were key to accomplishing this paper.

REFERENCES

[1] da_sense, MoNa. [Online]. Available: http://mona.ps.e-technik.tu-darmstadt.de/ [Accessed: 6-May-2016].
[2] R. Aghi, S. Mehta, R. Chauhan, S. Chaudhary, and N. Bohra, "A comprehensive comparison of SQL and MongoDB databases," International Journal of Scientific and Research Publications, vol. 5, no. 2, Feb. 2015.
[3] "MongoDB," Wikipedia. [Online]. Available: https://en.wikipedia.org/wiki/mongodb [Accessed: 17-May-2016].
[4] "The great PHP MVC Framework Showdown of 2016 (CakePHP 3 vs Symfony 2 vs Laravel 5 vs Zend 2)," zen of coding. [Online]. Available: http://zenofcoding.com/2015/11/16/the-great-php-mvc-framework-showdown-of-2016-cakephp-3-vs-symfony-2-vs-laravel-5-vs-zend-2/ [Accessed: 22-Jun-2016].
[5] "Introduction," Laravel. [Online]. Available: https://laravel.com/docs/4.2/introduction#laravel-philosophy [Accessed: 20-Jun-2016].
[6] S. Sarkuni, "How I Write SQL, Part 1: Naming Conventions," Launch by Lunch RSS. [Online]. Available: https://launchbylunch.com/posts/2014/feb/16/sql-naming-conventions/ [Accessed: 20-Jun-2016].
[7] A. Paikens and G. Arnicans, "Use of design patterns in PHP-based web application frameworks," Scientific Papers University of Latvia, Computer Science and Information Technologies, vol. 733, pp. 53-71, 2008.
[8] "Why Is Unit Testing Important?," Excella Consulting, 2013. [Online]. Available: https://www.excella.com/insights/why-is-unit-testing-important [Accessed: 20-May-2016].
[9] "Testing," Laravel. [Online]. Available: https://laravel.com/docs/master/testing [Accessed: 22-May-2016].
APPENDIX A
FIGURES

Fig. 1. Current data vs. new data.

Fig. 2. Current data schema.

Fig. 3. New data schema.
APPENDIX B
TABLES

TABLE I
NEW STRUCTURE MAPPING TO CURRENT AND NEW SCHEMA

JSON Field | Old Schema | New Schema
deviceIdent | devices.identifier | devices.identifier
measurementType | Senors.typeID | Sensors.sensor_type_id
Series.name | Series.name | Series.name
Series.visibility | Series.visibility | Series.visibility
Series.timestamp | Series.timestamp | Series.timestamp
Series.values.timestamp | Coverage_values.timestamp | Coverage_values.timestamp
Series.values.app_version | -not supported- | Coverage_values.app_version
Series.values.locations.longitude, Series.values.locations.latitude | Coverage_values.center | Coverage_values.center
Series.values.locations.altitude | Coverage_values.alt | Coverage_value_location.altitude
Series.values.locations.accuracy | Coverage_values.acc | Coverage_value_location.accuracy
Series.values.locations.speed | Coverage_values.speed | Coverage_value_location.speed
Series.values.locations.timestamp | -not supported- | Coverage_value_location.timestamp
Series.values.cells.measurementType | -not supported- | Coverage_value_cells.measurement_type
Series.values.cells.cellId | Coverage_values.cellID | Coverage_value_cells.cell_id
Series.values.cells.lac | Coverage_values.lac | Coverage_value_cells.lac
Series.values.cells.networkType | Coverage_values.netwokType | Coverage_value_cells.network_type
Series.values.cells.networkProvider | Coverage_values.networkProvider | Coverage_value_cells.network_provider
Series.values.cells.signalStrengthDB | Coverage_values.signalstrengthdb | Coverage_value_cells.signal_strength_db
Series.values.cells.isActive | -not supported- | Coverage_value_cells.is_active
Series.values.cells.updateTimestamp | -not supported- | Coverage_value_cells.update_timestamp
Series.values.ping.timeStart | -not supported- | Coverage_value_pings.time_start
Series.values.ping.timeEnd | -not supported- | Coverage_value_pings.time_end
Series.values.ping.remoteServer | -not supported- | Coverage_value_pings.remote_server
Series.values.ping.samples.sample | -not supported- | Ping_samples.sample
Series.values.ping.samples.timestamp | -not supported- | Ping_samples.timestamp
Series.values.ping.receivedPingCount | -not supported- | Coverage_value_pings.received_ping_count
Series.values.ping.pingCount | Coverage_values_ping.pingCount | Coverage_value_pings.ping_count
Series.values.throughput.direction | -not supported- | Coverage_value_throughput.direction
Series.values.throughput.benchmarkType | -not supported- | Coverage_value_throughput.benchmark_type
Series.values.throughput.remoteServer | -not supported- | Coverage_value_throughput.remote_server
Series.values.throughput.timeStart | -not supported- | Coverage_value_throughput.time_start
Series.values.throughput.timeEnd | -not supported- | Coverage_value_throughput.time_end
Series.values.throughput.errorCode | -not supported- | Coverage_value_throughput.error_code
Series.values.throughput.samples.sample | -not supported- | Throughput_samples.sample
Series.values.throughput.samples.timestamp | -not supported- | Throughput_samples.timestamp
Series.values.wifi.signalStrength | Coverage_values_wifi.signalStrength | Coverage_value_wifi.signal_strength
Series.values.wifi.ssid | Coverage_values_wifi.ssid | Coverage_value_wifi.ssid
Series.values.wifi.bssid | Coverage_values_wifi.bssid | Coverage_value_wifi.bssid
Series.values.wifi.capabilities | Coverage_values_wifi.capabilities | Coverage_value_wifi.capabilities
Series.values.wifi.frequency | Coverage_values_wifi.frequency | Coverage_value_wifi.frequency
Series.values.wifi.level | Coverage_values_wifi.level | Coverage_value_wifi.level
Series.values.wifi.isActive | -not supported- | Coverage_value_wifi.is_active
Series.values.wifi.updateTimestamp | -not supported- | Coverage_value_wifi.update_timestamp
Series.values.tags.key | Tag_keys.name | Tags.name
Series.values.tags.value | Tags.value | Tags.value