Smart Mirror
R. Debien, R. DeHaven, V. Claes
Hogeschool PXL
Abstract—This paper covers the creation of a smart mirror. Using
a webcam, a Raspberry Pi, a Walabot sensor, a computer screen
covered with one-way reflective foil, and some coding knowledge, a
working smart mirror was built. The mirror displays the breathing
rate and visual features of the person standing in front of it. All
data is stored in a database with timestamps indicating when each
measurement was taken. A swiping feature implemented via the
Walabot allows the user to switch the screen output of the mirror
between the news, the weather, and the measured data.
I. INTRODUCTION
By incorporating the principles of the Internet of Things,
everyday items can be upgraded into machines of the future.
This project focuses on the development of a smart mirror.
This mirror can detect and recognize a face, estimate some of its
descriptors such as age, gender, and facial hair, and measure the
person's respiration rate. The data is displayed on the
mirror after being uploaded to a database.
Measuring vital signs with contactless sensing is currently a hot
topic. Radar systems are used to detect chest and abdominal
movements. Not having to wear equipment is the main driver for
research in this field: it removes possible discomfort for the
user, and wearing an actual device might itself influence the
user's breathing pattern[1]. Not having to deal with wearable
hardware can also be an advantage for elderly users. One of
the biggest interferences in accurately detecting vital signs is
random body movement (RBM)[2]. The Walabot is the radar
system used in this paper to detect those vital signs.
II. WALABOT
In this project, the Walabot Pro version is used. This version
has access to a larger antenna array. The Walabot is a three-
dimensional, radio-frequency-based sensor. The waves sent by
the sensor have a frequency between 6.3 and 8.3 GHz. The multiple
antennas transmit, receive, and record signals, making it
possible for the Walabot to sense its environment[3]. The
Walabot comes with an application programming interface
(API) that is compatible with C++ and Python.
Two functions responsible for processing the received data
are getImageEnergy and getSensorTargets.
getImageEnergy returns the sum of the signal power of the
image's pixels, while getSensorTargets returns a
list of all the objects visible to the Walabot and the number of
objects it has detected[4]. An object, or target, is defined by its
coordinates relative to the Walabot and its signal power.
This paper was submitted on 15/05/2017. This project was supported by
PXL University College.
R. Debien, student Electronics and ICT Engineering Technology, UHasselt
(e-mail: ruben.debien@student.uhasselt.be)
III. METHOD
A. Walabot
The Walabot sensor in this project is responsible for two
functions: measuring breathing frequency and swiping between
pages on the user interface. This part of the project can be
recreated with a little experience in C++ or Python, the free
Walabot API, and a Walabot.
The respiratory rate functionality with the Walabot was
implemented first. With the help of the code examples provided by
Walabot, acquiring the data needed to find the respiratory rate was not
difficult, as the aforementioned getImageEnergy function
returns that data. The challenge lies in sampling the data at
a frequency with constant intervals instead of merely capturing
the data as fast as possible, which leads to inconsistent intervals
between data points and therefore an incorrect respiratory rate
calculation.
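This fixed-rate sampling can be sketched as follows. This is a minimal C++ illustration of the scheduling idea only, not the project's actual waitTh code, and the read callback merely stands in for a getImageEnergy-style call:

```cpp
#include <chrono>
#include <thread>
#include <vector>

// Sample a reading function at a constant rate by scheduling each sample
// at an absolute deadline (t0 + n * period). Sleeping a fixed amount
// *after* each read would instead let processing time accumulate as drift.
std::vector<double> sampleAtFixedRate(double (*read)(), int samples,
                                      std::chrono::milliseconds period) {
    std::vector<double> out;
    auto next = std::chrono::steady_clock::now();
    for (int i = 0; i < samples; ++i) {
        out.push_back(read());                // stand-in for getImageEnergy
        next += period;                       // next absolute deadline
        std::this_thread::sleep_until(next);  // wait until that deadline
    }
    return out;
}
```

Because each deadline is computed from the previous one rather than from "now", the intervals between stored samples stay constant even when a read takes a variable amount of time.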
Detecting heartbeats poses more of a challenge than detecting the
respiratory rate because the changes in imageEnergy are
smaller and happen more frequently than breath intakes. The
heart rate could possibly be separated from the acquired signal with
frequency filtering, as the heart rate normally lies between one and
three hertz while the respiratory frequency is below one
hertz. In this project, we did not pursue the heartbeat monitor
module because equipment limitations made it
impossible to acquire the required sampling rate (by the Nyquist
criterion, at least six samples per second for a 3 Hz heart rate). More
sophisticated and powerful equipment, however, would make it
possible to expand on this in another project.
Finally, the swipe functionality was added. This makes it
possible to detect a sweeping motion in front of the sensor and
thereby control what the smart mirror displays, as there is no
mouse or keyboard. The getSensorTargets function of the
Walabot API returns all the objects seen by the sensor. By
checking whether one of these targets makes a swiping motion, it
is possible to track the movement of the hand and determine
whether a swipe was performed.
B. Image processing
Extracting information from an image is a very large and
complex field. We first attempted to create our own classifiers,
based on datasets we composed ourselves, to extract data points
such as facial hair, but it quickly became obvious this was beyond
the scope of our project: it would take months or years of
studying and tweaking classifiers to reach an acceptable
result. Instead, we opted to use an API provided by Microsoft
that has most of the functionalities we were looking for. All of
the intensive computing work is done on Microsoft's servers, which is
convenient considering the Raspberry Pi's limited resources.
R. DeHaven, student Electronics and ICT Engineering Technology, UHasselt
(e-mail: robbert.dehaven@student.uhasselt.be)
V. Claes, lector / researcher Smart-ICT, Hogeschool PXL (e-mail:
vincent.claes@pxl.be)
C. Smart mirror API and Website
To show the results on the screen, we made a website hosted on
a server provided by Hogeschool PXL. The application running
on the Raspberry Pi communicates with the website via our
smart mirror API. In addition to showing the measured results,
we added a news feed and weather updates; these parts of the
web page can be accessed by swiping.
The smart mirror API provides the Raspberry Pi application with a
place to upload the data. It stores the data and returns it
on request.
IV. CODE EXPLANATION
A. Walabot
The Walabot code is written in C++ and consists of two
threads that run simultaneously. The first thread, called
"waitTh", regulates the sampling frequency; it consists of a
wait function and a Boolean variable that indicates whether
enough time has passed. The other thread holds one
function that configures and calibrates the sensor and then starts
the measurements. This is achieved with a while loop that
runs for as long as data should be acquired. In each iteration, the
Walabot is triggered, meaning it starts measuring
and processing data. In this project, there are two
data-processing functions: PrintBreathingEnergy and Swipe.
The PrintBreathingEnergy function acquires the
imageEnergy value from the Walabot measurement once enough
time has passed, as indicated by the previously mentioned
waitTh. When waitTh gives the OK signal, the function
stores the energy value in a deque. A deque is used for its first-in,
first-out (FIFO) behavior and because all of its elements can be
read, as opposed to a regular queue, where only the front element
is accessible. The values are stored
at regular intervals, which means that, knowing the sampling
frequency, it is possible to determine how much time has passed
between elements. With this information, the size of the deque
can be limited to 30 × the sampling frequency to obtain a deque
spanning 30 seconds. Each value is then compared with
the average of the entire deque plus or minus a certain offset.
If a value below the lower threshold is followed by
one above the upper threshold, the
number of measured inhalations is incremented. After
comparing all the elements of the deque, the respiratory rate per
minute is found by multiplying the number of inhalations by two.
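The counting scheme described above can be sketched as follows. This is our own minimal re-implementation; the function name and the offset value are assumptions, not the project's actual code:

```cpp
#include <deque>
#include <numeric>

// Count inhalations in a 30-second window of imageEnergy samples:
// a value below (mean - offset) followed later by a value above
// (mean + offset) counts as one inhalation. Doubling the count in a
// 30 s window yields breaths per minute.
int breathsPerMinute(const std::deque<double>& window, double offset) {
    double mean = std::accumulate(window.begin(), window.end(), 0.0)
                  / window.size();
    int inhalations = 0;
    bool sawTrough = false;              // been below the lower threshold
    for (double v : window) {
        if (v < mean - offset) sawTrough = true;
        else if (v > mean + offset && sawTrough) {
            ++inhalations;               // trough-to-peak transition
            sawTrough = false;
        }
    }
    return inhalations * 2;              // 30 s window -> per minute
}
```

Requiring a trough before each counted peak makes the count robust to samples that hover around a single threshold.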
Swipe makes use of the getSensorTargets function of the
Walabot API. This function returns all objects, and their
coordinates, as seen by the sensor. If the z-coordinate of a target
is closer than 30 cm and the signal strength is higher than a
certain value, the target number is stored. When another
target is closer than 30 cm but has a higher signal strength, the
previous target number is overwritten with the new one.
Once the strongest target is found, the x-coordinate of
that target is stored in another deque with a
maximum size of six. When the deque is full, its contents are
checked: if the first three elements are negative and the last three
are positive, a swipe to the right is detected, and vice versa for a
swipe to the left. The page number is then changed if allowed;
swiping further left on the leftmost page has no effect.
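The check on the six-element deque can be sketched as follows. This is a hedged re-implementation; the names and the exact comparison of coordinates against zero are our own reading of the description above:

```cpp
#include <deque>

enum class Swipe { None, Left, Right };

// Inspect the last six x-coordinates of the strongest target: three
// negative samples followed by three positive ones means the hand crossed
// the sensor centre from left to right, and vice versa.
Swipe detectSwipe(const std::deque<double>& xs) {
    if (xs.size() != 6) return Swipe::None;  // wait until the deque is full
    auto allNeg = [&](int a, int b) {
        for (int i = a; i < b; ++i) if (xs[i] >= 0) return false;
        return true;
    };
    auto allPos = [&](int a, int b) {
        for (int i = a; i < b; ++i) if (xs[i] <= 0) return false;
        return true;
    };
    if (allNeg(0, 3) && allPos(3, 6)) return Swipe::Right;
    if (allPos(0, 3) && allNeg(3, 6)) return Swipe::Left;
    return Swipe::None;
}
```

Any mixed sequence, such as a hand hovering around the centre, returns Swipe::None, so only a deliberate crossing changes the page.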
B. OpenCV
OpenCV is an open source library for computer vision and
machine learning that can be used in real-time applications that
require an image or video stream processing. In this project, it
is used to detect faces and acknowledge if there is a person in
front of the camera. This initiates all the other code. Face
detection is done in OpenCV with the Viola-Jones object
detection framework[5]. Based on a dataset of positive and
negative images, Haar features are computed and a cascade
classifier is trained. It can be used to detect objects using the
dectectMultiScale function in OpenCV. OpenCV already
provides some classifiers for face and eye detection. After
detecting a face a frame is captured from the webcam’s video
stream. This is done with the VideoCapture class which outputs
an OpenCV Mat array. It is then converted to a more usable
.png file and stored for future use.
C. Microsoft cognitive services vision API
This collection of APIs allows users to upload images and
receive a detailed description of what is in the image, this can
include faces, emotions, monuments, and objects.
The face API is the most valuable for the smart mirror. The
detect faces function is called and after analyzing the image it
returns the following information: age, gender, facial hair
(mustache, beard, and sideburns), smile, type of glasses, and
several emotions. The response is structured in JSON
(JavaScript Object Notation) format which makes it easy to
upload to our own Restful API later.
The face API can also identify faces. For this to work, several
other API calls have to be made first.
1. Create a person group - a group in which persons are
stored.
2. Add a person - a unique personId string is created for
the new person.
3. Add a person's face - upload a picture of the person's
face.
4. Train the person group - the identifier needs to be retrained
every time a new picture is added to a person;
otherwise the new person can't be identified.
5. Training status - training the identifier requires more
resources and may not complete immediately. This
call returns the status of the training.
6. Identify face - finally, the face can be identified. This
returns the unique personId, which is later used in the
database.
It takes approximately 6 to 10 seconds from detecting a face to
completing one loop of the program.
The API keeps a permanent record of the persons created and the
faces uploaded for them, making it unnecessary to keep records
of all users locally.
To facilitate connecting to and interacting with the API services,
Microsoft's C++ REST SDK was installed. With this SDK, HTTP
requests can easily be sent from native C++, and JSON can
easily be encoded and decoded.
D. Smart mirror API
To restrict a client's access to the database containing the users
and measurements, a REST (REpresentational State Transfer)
API architecture is implemented. All communication with the
database is handled by this API, which can be accessed through the
URL (Uniform Resource Locator) http://smartmirror.pxl-ea-ict.be/api.
All URIs (Uniform Resource Identifiers) that share this base
URL are redirected by a .htaccess file to a PHP file that parses
the URIs and handles them accordingly. There are three URIs
available.
• /activeUser accepts GET and POST requests,
which retrieve and set the active user respectively.
When POSTing the active user, the database is queried
to determine whether the user exists; if not, a new user is
added using the unique personId provided by
Microsoft's identify face API call. Next, the
measurement is added.
• /activeUserImage accepts only a PUT request, as it is
only necessary to upload the image via the API; it can
be retrieved normally through the server.
• /page accepts and relays the page to be displayed,
determined by the swipe function in the Walabot
code.
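The .htaccess redirect described above could look roughly like the following sketch. The front-controller file name (api.php) and the query-string parameter are assumptions, as the paper does not show the actual file:

```apache
# Illustrative only: route every request under /api that does not match a
# real file to one PHP front controller, passing the original URI along.
RewriteEngine On
RewriteBase /api
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*)$ api.php?uri=$1 [QSA,L]
```

The PHP script can then read the requested URI and the HTTP method to dispatch to the matching handler.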
E. Website
The website consists of three main views. The first one showing
the data that is collected with the Walabot and the image where
the data was extracted from. When running the application, and
a person is standing in front of the mirror, the image and
corresponding data are continuously updated in the database.
On the web page, a short JavaScript script is run that updates
the data and image every 5 seconds.
Another view showcases weather updates spanning the
next 24 hours in 3-hour intervals. This is achieved using the
5-day weather forecast API[6]. An HTTP request is
made to the OpenWeatherMap website, which returns a
JSON-formatted string with all the needed information for the
requested city.
The last view displays the latest items from the Belgian front-page
news. This is achieved with an RSS (Rich Site Summary)
feed from the Belgian version of Google News[7]. Using the
code structure provided by Surfing Waves[8], a news widget is
formed. The benefit of using this API is that it allows very
easy modification of the widget layout by changing its
parameters. The widget shows the article title and a short
description of the article. As the mirror has no mouse,
the scroll option is enabled in the API, which allows the widget
to scroll down the news feed periodically.
V. CODE FLOW
The figure below is a representation of how the different code
blocks interact with each other to create the final product.
Figure 1: Smart Mirror flowchart of communication between code
blocks
On the left side of the figure are three blocks that represent the code
on the local side of the mirror. The Raspberry Pi contains and
runs the code of the Walabot and face detection application; it
gets its data from the Walabot sensor and the webcam. The
picture taken with the webcam is sent to the Microsoft API, which
returns a JSON string with the information calculated from
that picture. The Raspberry Pi then forwards all the gathered
data to the SmartMirror API. This block stores the data
in the appropriate tables in the database and sends the data to the
SmartMirror website. This website consists of the measured
data, weather updates from the 5-day weather forecast
API[6] (referred to as weather API in the figure), and the latest
news headlines via the API provided by Surfing Waves[8] (referred
to as News API in the figure).
APPENDIX
This project was developed in the Smart-ICT Research group
from Hogeschool PXL (Belgium, Hasselt-Diepenbeek) in the
SmaCos (Smart Connected Services) research project[9].
VI. REFERENCES
[1] C. Gu and C. Li, "Assessment of human respiration patterns via
noncontact sensing using Doppler multi-radar system," Sensors
(Basel), vol. 15, no. 3, pp. 6383-6398, 2015.
[2] For, “Report Information from ProQuest,” Organ. Dev.
J., no. May, 2012.
[3] T. Specs, “Walabot-Tech-Brief-416,” pp. 1–11, 2016.
[4] “Walabot API: WalabotAPI.h File Reference.” [Online].
Available:
http://api.walabot.com/_walabot_a_p_i_8h.html.
[Accessed: 14-May-2017].
[5] P. Viola and M. J. Jones, “Robust Real-Time Face
Detection,” Int. J. Comput. Vis., vol. 57, no. 2, pp. 137–
154, 2004.
[6] “5-day weather forecast- OpenWeatherMap.” [Online].
Available: https://openweathermap.org/forecast5.
[Accessed: 14-May-2017].
[7] “Google News.” [Online]. Available:
https://news.google.com/. [Accessed: 14-May-2017].
[8] “Free Feed Widget for displaying valid RSS &
XML feeds, as used on Surfing Waves.” [Online].
Available: http://www.surfing-waves.com/feed.htm.
[Accessed: 14-May-2017].
[9] "Smart-ICT," Hogeschool PXL. [Online]. Available:
https://www.pxl.be/SmartICT.html
[10] "Smart Connected Services (SmaCoS)," Hogeschool PXL. [Online].
Available: https://www.pxl.be/Pub/onderzoek/Projecten/Projecten-Smart-ICT/Smart-Connected-Services-(SmaCoS).html