[Ubicomp'15]SakuraSensor: Quasi-Realtime Cherry-Lined Roads Detection through Participatory Video Sensing by Cars
Sep. 15, 2015
SakuraSensor: a system that senses and shares information about roads lined with flowering cherries by leveraging car-mounted smartphones.
Honorable Mention Award at UbiComp 2015.
Scenic route search
Problems of existing services
• Information is edited manually
– Small number of scenic spots
– Low update frequency
• Scenery information consists of only texts and images
– Insufficient for users
Our approach
• Use participatory sensing by cars
• Collect and share videos of scenic spots
(Figure: example of scenic spot information)
Related work

Method                            | Proposed method | ParkNet [12]           | SignalGuru [15]     | Nericell [11]
Participatory sensing             | ○               | ○                      |                     |
Cooperative sensing               |                 |                        | ○                   |
Real-time                         | ○               | ○                      | ○                   | ○
Information detection from videos | ○               | × (ultrasound signals) | △ (traffic signals) | × (horn sounds)

[12] ParkNet: Drive-by Sensing of Road-Side Parking Statistics, MobiSys’10
[15] SignalGuru: Leveraging Mobile Phones for Collaborative Traffic Signal Schedule Advisory, MobiSys’11
[11] Nericell: Rich Monitoring of Road and Traffic Conditions using Mobile Smartphones, SenSys’08
Many existing studies use participatory sensing (PS) by cars
No studies use both PS and real-time video sensing
SakuraSensor: automatically identifies scenic spot locations and collects videos using PS
・ We target cherry-lined roads
・ Automatically collect and update scenic information
・ Gather videos of scenic locations
The best period of flowering cherries is short and uncertain from year to year and from place to place
SakuraSensor App for iOS devices
Full-size video: https://youtu.be/2pRfDS7DeAc — Demo at Hall C No. 20
Key Idea
(Figure: cars with smartphones and the cloud)
• Naive approach: each car records video and uploads the whole recorded video; the cloud analyzes and shares the video with cherries
→ too much cost in cellular bandwidth and computation resources at the cloud
• SakuraSensor: each car records video, analyzes it on the smartphone, and uploads only video with flowering cherries; the cloud merely shares the video with cherries
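To make the bandwidth-saving idea concrete, here is a minimal on-device pipeline sketch in Python; the threshold value, function names, and upload callback are illustrative assumptions, not part of the original slides.

```python
UPLOAD_THRESHOLD = 0.01  # assumed cutoff on cherry intensity, not from the slides

def process_segment(frames, intensity_fn, upload_fn):
    """On-device pipeline sketch: analyze a recorded video segment on the
    smartphone and upload it only when flowering cherries are detected,
    saving cellular bandwidth and cloud computation."""
    # Average the per-frame cherry intensity over the segment
    intensity = sum(intensity_fn(frame) for frame in frames) / len(frames)
    if intensity >= UPLOAD_THRESHOLD:
        # Only segments with flowering cherries reach the cloud for sharing
        upload_fn(frames, intensity)
```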
TC1: Real-time Cherry Detection
• Employ simple computer vision techniques
– Smartphones have less computation power than PCs or the cloud
Basic approach
• Count cherry-like color pixels in each image
• Quantify the amount of flowering cherry as cherry intensity
Problem to solve
• Artificial objects with similar colors must be removed
Step 1: Removing Artificial Objects
• Employ fractal analysis
– Note: natural objects have a higher fractal dimension
(Figure: an input image → binary image after edge detection → box-counting method [5] → fractal dimensions)
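As an illustration of this step, a minimal box-counting sketch in Python with OpenCV follows; the Canny thresholds and box sizes are assumptions for illustration, not values reported on the slide.

```python
import cv2
import numpy as np

def fractal_dimension(gray, canny_lo=100, canny_hi=200):
    """Estimate the fractal dimension of a grayscale region via box
    counting [5] on its edge map (thresholds here are assumed values)."""
    edges = cv2.Canny(gray, canny_lo, canny_hi) > 0
    sizes = [s for s in (2, 4, 8, 16, 32, 64) if s < min(edges.shape)]
    counts = []
    for s in sizes:
        h, w = (edges.shape[0] // s) * s, (edges.shape[1] // s) * s
        # Split the edge map into s-by-s boxes and count the boxes that
        # contain at least one edge pixel
        boxes = edges[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(boxes.any(axis=(1, 3))))
    # The fractal dimension estimate is the slope of log N(s) vs log(1/s)
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```

Regions whose estimated dimension falls below a chosen cutoff would be treated as artificial and masked out before the color analysis.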
Step 2: Detecting Cherry by Color Analysis
• Created a color histogram of flowering cherry in the HSV color space
– Used 148 regions extracted from various scenes
HSV color space (from http://en.wikipedia.org/wiki/HSL_and_HSV)
• H (Hue), S (Saturation): characterize the color
• V (Value of brightness): significantly varies depending on the lighting condition
Our approach: use only the H-S color space
H-S histogram for flowering cherry (H: 0-179, S: 0-255)
• Created from a total of 148 cherry regions
• The value at each coordinate is normalized between 0 and 1
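A minimal sketch of building such a histogram with OpenCV follows; the function name and input handling are assumptions, while the 180x256 H-S binning matches OpenCV's H (0-179) and S (0-255) ranges shown above.

```python
import cv2
import numpy as np

def build_hs_histogram(region_images):
    """Accumulate a 2-D H-S histogram over sample regions of flowering
    cherry (148 regions in the paper) and normalize it to [0, 1]."""
    hist = np.zeros((180, 256), dtype=np.float32)
    for bgr in region_images:
        hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
        # Histogram over H and S only; V is dropped because it varies
        # strongly with lighting conditions
        hist += cv2.calcHist([hsv], [0, 1], None, [180, 256],
                             [0, 180, 0, 256])
    return hist / hist.max()  # normalize each coordinate to [0, 1]
```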
Step 3: Calculating the cherry intensity of an image
• For each pixel of an input image, retrieve its (H, S) value from the H-S histogram using the backprojection method [6]
– e.g., a pixel with (H, S) = (30, 20) retrieves the histogram value 0.816
• Cherry intensity = the average value over all pixels
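The per-pixel lookup can be written directly as below; this is a minimal sketch assuming the normalized histogram from the previous step (OpenCV's calcBackProject performs the same lookup with a scaled histogram).

```python
def cherry_intensity(bgr, hs_hist):
    """Cherry intensity of a frame: look up each pixel's (H, S) value in
    the normalized H-S histogram and average over all pixels, mirroring
    the backprojection method [6]."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    h = hsv[..., 0].astype(np.intp)  # hue in 0-179
    s = hsv[..., 1].astype(np.intp)  # saturation in 0-255
    # hs_hist[h, s] back-projects every pixel into the histogram
    return float(hs_hist[h, s].mean())
```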
TC2: Load Distribution among Cars
When all cars always conduct image analysis and uploads, the cost (battery consumption, bandwidth, etc.) is too high
Possible approach
• Each car senses at a fixed interval
→ may miss PoIs (cherry locations)
k-stage sensing
Narrows the sensing interval step by step when a new PoI is found
• 1st stage: the preceding car performs sensing at a fixed interval and detects a PoI
• 2nd stage: a following car traveling the same road performs sensing at a shorter interval within a radius around the PoI detected by the preceding car
(Figure: locations where sensing is performed along the road)
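A minimal sketch of the stage-selection logic follows, using the k = 3 intervals from the evaluation (300 m → 150 m → 50 m); the radius value and function shape are assumptions for illustration.

```python
INTERVALS_M = [300.0, 150.0, 50.0]  # k = 3 stages, as in the simulation

def sensing_interval(stage):
    """Distance between consecutive sensing points at a given stage;
    stages beyond the last keep the finest interval."""
    return INTERVALS_M[min(stage, len(INTERVALS_M) - 1)]

def next_stage(dist_to_poi_m, poi_stage, radius_m=300.0):
    """Stage for a following car: inside radius_m of a PoI detected at
    poi_stage, it senses one stage deeper; otherwise it falls back to
    the coarsest stage (radius_m is an assumed parameter)."""
    if dist_to_poi_m <= radius_m:
        return min(poi_stage + 1, len(INTERVALS_M) - 1)
    return 0
```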
Evaluation of SakuraSensor
• Investigate the effectiveness of cherry intensity
– Compare the results of manual classification (used as ground truth) and automatic classification by cherry intensity
– Compute accuracy by comparison
Videos used for experiments
• Recorded videos in 8 different scenes (routes) using the SakuraSensor app for iOS with multiple cars

Scene | Date    | Vehicle | Area        | Length (min.)
S1    | Mar. 31 | V1      | Aichi Pref. | 17
S2    | Apr. 5  | V2      | Nara Pref.  | 12
S3    | Apr. 10 | V2      | Nara Pref.  | 66
S4    | Apr. 10 | V3      | Nara Pref.  | 261
S5    | Apr. 10 | V4      | Nara Pref.  | 186
S6    | Apr. 11 | V1      | Gifu Pref.  | 72
S7    | Apr. 12 | V2      | Osaka Pref. | 137
S8    | Apr. 18 | V1      | Aichi Pref. | 89

• Extracted 1-second videos at random starting times from each scene
1-Second Videos: Manual Classification
• Only classification results with the same decision by two persons were used

Class | Criteria
C1    | cherry ratio (in image) < 5%
C2    | 5% ≤ cherry ratio < 25%
C3    | 25% ≤ cherry ratio

Scene | C1   | C2  | C3
S1    | 79   | 17  | 10
S2    | 93   | 10  | 17
S3    | 372  | 43  | 3
S4    | 1613 | 96  | 45
S5    | 1167 | 6   | 0
S6    | 261  | 47  | 72
S7    | 888  | 1   | 0
S8    | 521  | 10  | 7
Total | 4994 | 230 | 154

Videos of each class
(Figure: example videos for C1 (ratio < 5%), C2 (5% ≤ ratio < 25%), and C3 (25% ≤ ratio))
Evaluation Methodology
• The set of 1-second videos is manually classified by humans into classes C1, C2, and C3
• The videos of each class are divided into halves: a training set and a test set
Evaluation Methodology
• From the training set, compute the median cherry intensity of each class:
– M1 = 0.00033 (C1), M2 = 0.00791 (C2), M3 = 0.03326 (C3)
• A test video V_i with cherry intensity D(V_i) is classified into the class whose median is closest, i.e., the class minimizing |D(V_i) − M_j|
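The classification rule reduces to a nearest-median lookup, as in this minimal sketch using the medians reported on the slide:

```python
# Training-set medians of cherry intensity, from the slide
MEDIANS = {"C1": 0.00033, "C2": 0.00791, "C3": 0.03326}

def classify(d_vi):
    """Assign video V_i to the class whose median cherry intensity is
    closest to its intensity D(V_i)."""
    return min(MEDIANS, key=lambda c: abs(MEDIANS[c] - d_vi))

# e.g. classify(0.03) -> "C3", classify(0.001) -> "C1"
```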
Classification Accuracy (1-second videos)

Class | Precision | Recall
C1    | 0.97      | 0.90
C2    | 0.24      | 0.65
C3    | 0.74      | 0.83

• C1 and C3: good results
• C2: not good enough
Evaluation of k-stage sensing
• Simulation with 600 cars (k = 3; intervals 300 m → 150 m → 50 m)
→ fewer sensing operations with a similar PoI discovery rate
Conclusions
• SakuraSensor
– A participatory video sensing system by cars
– Consists of two key techniques
• Flowering cherry detection by in-vehicle smartphone
– Color histogram analysis for identifying cherry blossoms
– Fractal dimension analysis for removing artificial objects other than flowering cherry
– Cherry detection accuracy (C3): precision 0.74, recall 0.83
• k-stage sensing
– Distributes the sensing load among cars
– Similar PoI discovery rate with about half the sensing operations compared with the fixed-interval sensing method
Thank you, chairperson.
Good afternoon, everyone.
My name is Shigeya Morishita, from the Nara Institute of Science and Technology.
I am very happy to see all of you today.
Today, I would like to present our research, named SakuraSensor.
The latest car navigation systems help drivers drive comfortably and efficiently.
With these systems, we can search for routes by various criteria.
Among these criteria, we focus on scenic beauty.
However, existing scenic route search services have some problems.
First, information is edited manually.
Second, scenery information consists of only texts and images.
To solve these problems, we use participatory sensing by cars and automatically collect and share videos of scenic spots.
There are many existing studies on participatory sensing by cars.
However, as far as we know, no studies use both participatory sensing and real-time video sensing.
(On the novelty and key ideas of the proposed method.)
We propose SakuraSensor, which automatically identifies scenic spot locations and collects videos using participatory sensing.
SakuraSensor targets flowering cherries, called sakura in Japanese, since the best period of flowering cherries is short and uncertain from year to year and from place to place.
So, up-to-date information is mandatory.
We have developed the SakuraSensor application for iOS devices.
I'll show a demo video of SakuraSensor.
We are also demonstrating SakuraSensor at Hall C, number 20.
One possible approach to realizing SakuraSensor is as follows.
Cars with smartphones record videos and upload the whole recorded video to a cloud server for analysis and sharing.
However, this approach costs too much in cellular bandwidth and computation resources at the cloud.
The key idea of SakuraSensor is to analyze videos on smartphones so that only videos with flowering cherries are uploaded to the cloud and shared.
Challenges and key ideas of SakuraSensor.
We have two technical challenges to realize SakuraSensor.
The first challenge is how to realize real-time flowering cherry detection on a smartphone.
The second challenge is how to realize efficient load distribution among cars.
For the first challenge, we employ simple computer vision techniques, since smartphones have less computation power.
So, our basic approach is simply to count cherry-like color pixels in each image and to quantify the amount of flowering cherry in each image, which we call cherry intensity.
Here, the problem to solve is that artificial objects with similar colors must be removed.
To remove artificial objects in each image, we employ fractal dimension analysis.
Here, note that natural objects have a higher fractal dimension.
So, to an input image, we apply an edge detection algorithm and the box-counting method to calculate the fractal dimension of each square region.
This is real-time fractal dimension calculation.
Here, the red regions show natural objects.
Then we detect flowering cherries by color analysis.
We created a color histogram of flowering cherry in the HSV color space.
Here, we used 148 regions extracted from various scenes.
These are some of the regions.
The HSV color space consists of Hue, Saturation, and Value of brightness.
From a preliminary experiment, we found that V varies significantly depending on the lighting condition.
So, we used only the H-S color space.
This is the H-S histogram created from the 148 cherry regions.
The value at each coordinate is normalized between 0 and 1.
Then we calculate the cherry intensity of an image using the backprojection method.
For each pixel, the corresponding value is retrieved from the H-S histogram.
Finally, the cherry intensity of the image is calculated as the average value over all pixels.
This is real-time cherry intensity calculation.
Here, the red boxes show regions of high cherry intensity.
The second technical challenge is load distribution among cars.
When all cars always conduct image analysis and upload videos, the cars incur too much cost.
A possible approach is for each car to sense at a fixed interval.
However, it may miss PoIs.
So we propose k-stage sensing, which narrows the sensing interval step by step when a new PoI is found by preceding cars.
This is an example of k-stage sensing. The preceding car travels, and sensing is performed at an initial fixed interval.
After that, when a following car enters the same road, it narrows its sensing interval and radius, because a PoI was found on the road.
Then, this car performs sensing at the shorter interval while it is within the circle of that radius centered at the PoI.
We conducted experiments to evaluate SakuraSensor.
The first experiment investigates the accuracy of cherry intensity.
We compare the results of manual classification and automatic classification by cherry intensity.
We recorded videos in 8 different scenes using the SakuraSensor application for iOS devices with multiple cars.
We extracted 1-second videos at randomly selected starting times from each scene.
We defined three classes: C1, where the cherry ratio in the image is less than 5%; C2, between 5% and 25%; and C3, more than 25%.
Here, only classification results with the same decision by two persons were used.
These are example videos of class C1.
First, we divided the set of classified videos in each class into a training set and a test set.
Then, from the training set, we calculated the median cherry intensity for each class.
Using the median values, the 1-second videos in the test set are automatically classified.
This figure shows the classification result by SakuraSensor.
We see that good classification results are obtained for class C1 and C3 videos.
On the other hand, for class C2 videos the result is not so good. The main reason is that many videos of class C1 were classified into class C2.
We also evaluated the effectiveness of the 3-stage sensing method.
The evaluation was done by simulation with 600 cars.
These results show that k-stage sensing achieves a good PoI discovery rate with fewer sensing operations.
We just adopted the parameters heuristically, or empirically.
Conclusions.
We proposed SakuraSensor, a participatory video sensing system by cars.
As its two key techniques, we proposed flowering cherry detection by in-vehicle smartphones and k-stage sensing.
(The following is not spoken.)
This system consists of two key techniques.
The first is flowering cherry detection by in-vehicle smartphone:
color histogram analysis for identifying cherry blossoms, and
fractal dimension analysis for removing artificial objects other than flowering cherry.
Cherry detection accuracy: precision of 0.7 and recall of 0.8.
The second is k-stage sensing:
this method distributes the sensing load among cars and achieves
a similar PoI discovery rate with about half the sensing operations compared with the fixed-interval sensing method.
We also demonstrate our system. Please watch our demo.
(Note: add a slide explaining how much of the SakuraSensor computation can be done in real time.)
(Note: implementation details.)