Human Computer 
Interaction 
Laboratory 
makeability lab 
CHARACTERIZING PHYSICAL WORLD ACCESSIBILITY AT SCALE 
USING CROWDSOURCING, COMPUTER VISION, & MACHINE LEARNING
My Group Started in 2012
Human-Computer Interaction Lab
Ben Shneiderman 
Ben Bederson 
Jon Froehlich 
Jen Golbeck 
Leah Findlater 
Marshini Chetty 
Jenny Preece 
Allison Druin 
Mona Leigh Guha 
Tammy Clegg 
June Ahn 
Evan Golub 
Tim Clausner 
Kent Norman 
Ira Chinoy 
Kari Kraus 
Anne Rose 
Catherine Plaisant 
computer science 
hcil 
Jessica Vitak 
Niklas Elmqvist 
Nicholas Diakopoulos
HCIL Hackerspace 
Founded in 2012
HCIL Hackerspace 
Looking North
HCIL Hackerspace 
Looking South
Three Soldering Stations 
HCIL Hackerspace
Craft/Textile Station 
HCIL Hackerspace
Two Mannequins 
HCIL Hackerspace
Wall of Electronic Components 
HCIL Hackerspace
Quadcopters 
HCIL Hackerspace
Two 3D-Printers 
HCIL Hackerspace
One CNC Machine 
HCIL Hackerspace
Physical Making 
HCIL Student Leyla Norooz
Electronics Making 
HCIL student Tansy McBurnie
E-Textile Design 
HCIL Student Michael Gubbels showing SFF
Collaborative Working 
HCIL students Joseph, Cy, Matt, and Jonah
Student Sewing 
HCIL student Matt sewing
Fun! 
HCIL students Kotaro Hara and Allan Fong
More Fun! 
HCIL students Sean, Michael, Alexa, and me
Human-Computer Interaction Lab
Human Computer 
Interaction 
Laboratory 
makeability lab 
CHARACTERIZING PHYSICAL WORLD ACCESSIBILITY AT SCALE 
USING CROWDSOURCING, COMPUTER VISION, & MACHINE LEARNING
30.6 million U.S. adults with a mobility impairment 
15.2 million use an assistive aid
Incomplete Sidewalks 
Physical Obstacles 
Surface Problems 
No Curb Ramps 
Stairs/Businesses
The National Council on Disability noted that there is no comprehensive information on “the degree to which sidewalks are accessible” in cities. 
National Council on Disability, 2007 
The impact of the Americans with Disabilities Act: Assessing the progress toward achieving the goals of the ADA
The lack of street-level accessibility information can have a significant impact on the independence and mobility of citizens 
cf. Nuernberger, 2008; Thapar et al., 2004
I usually don’t go where I don’t know [about accessible routes] 
-P3, congenital polyneuropathy
“Man in Wheelchair Hit By Vehicle Has Died From Injuries” 
-The Aurora, May 9, 2013
http://youtu.be/gWuryTNRFzw
http://accesscore.org 
This is a mockup interface based on walkscore.com and walkshed.com
How might a tool like AccessScore: 
Change the way people think about and understand their neighborhoods 
Influence property values 
Impact where people choose to live 
Change how governments/citizens make decisions about infrastructural investments
AccessScore would not change how people navigate the city; for that, we need a different tool…
NAVIGATION TOOLS ARE NOT ACCESSIBILITY AWARE
Routing for: Manual Wheelchair 
1st of 3 Suggested Routes: 16 minutes, 0.7 miles, 1 obstacle 
[Map mockup: Route 1 and Route 2 from A to B, with obstacle markers] 
Surface Problem 
Avg Severity: 3.6 (Hard to Pass) 
Recent Comments: 
“Obstacle is passable in a manual chair but not in a motorized chair”
ACCESSIBILITY AWARE NAVIGATION SYSTEMS
Where is this data going to come from?
TRADITIONAL WALKABILITY AUDITS 
Safe Routes to School Walkability Audit, Rock Hill, South Carolina 
Walkability Audit, Wake County, North Carolina
http://www1.nyc.gov/311/index.page 
MOBILE REPORTING SOLUTIONS
Similar to physical audits, these tools are built for in situ reporting and do not support remote, virtual inquiry, which limits scalability. 
They are also not designed for accessibility data collection.
MARK & FIND ACCESSIBLE BUSINESSES 
wheelmap.org 
axsmap.com
Focuses on businesses rather than streets & sidewalks 
Model is still to report on places you’ve visited
Our Approach: Use Google Street View (GSV) as a massive data source for scalably finding and characterizing street-level accessibility
HIGH-LEVEL RESEARCH QUESTIONS 
1. Can we use Google Street View (GSV) to find street-level accessibility problems? 
2. Can we create interactive systems to allow minimally trained crowdworkers to quickly and accurately perform remote audit tasks? 
3. Can we use computer vision and machine learning to scale our approach?
TOWARDS SCALABLE ACCESSIBILITY DATA COLLECTION 
ASSETS’12 Poster: Feasibility study + labeling interface evaluation 
HCIC’13 Workshop: Exploring early solutions to computer vision (CV) 
HCOMP’13 Poster: 1st investigation of CV + crowdsourced verification 
CHI’13: Large-scale turk study + label validation with wheelchair users 
ASSETS’13: Applied to new domain: bus stop accessibility for visually impaired 
UIST’14: Crowdsourcing + CV + “smart” work allocation 

TODAY’S TALK: ASSETS’12, CHI’13, and UIST’14
ASSETS’12 GOALS: 
1. Investigate the viability of reappropriating online map imagery to determine sidewalk accessibility via crowd workers 
2. Examine the effect of three different interactive labeling interfaces on task accuracy and duration
WEB-BASED LABELING INTERFACE
WEB-BASED LABELING INTERFACE 
FOUR STEP PROCESS 
1. Find and mark accessibility problem 
2. Select problem category 
3. Rate problem severity 
4. Submit completed image
WEB-BASED LABELING INTERFACE VIDEO 
Video shown to crowd workers before they labeled their first image 
http://youtu.be/aD1bx_SikGo
THREE LABELING INTERFACES 
Point-and-click 
Rectangular Outline 
Polygonal Outline 
Pixel Granularity
DATASET: 100 IMAGES 
Los Angeles, New York, Baltimore, Washington DC
DATASET BREAKDOWN 
[Bar chart of image counts by category] 
No Curb Ramp: 34 
Surface Problem: 29 
Object in Path: 27 
Sidewalk Ending: 11 
No Sidewalk Accessibility Issues: 19 
Manually curated 100 images from urban neighborhoods in LA, Baltimore, Washington DC, and NYC 
The images without accessibility issues were used to evaluate false positive labeling activity 
This breakdown is based on majority vote data from 3 independent researcher labels
Our ground truth process
What accessibility problems exist in this image?
[Researcher Label Table: rows R1, R2, R3; columns Object in Path, Curb Ramp Missing, Sidewalk Ending, Surface Problem, Other] 
Each researcher independently labels the image: in this example, Researcher 1 placed 2 labels, Researcher 2 placed 4, and Researcher 3 placed 8.
There are multiple ways to examine the labels.
Image Level Analysis 
The researcher label table tells us what accessibility problems exist in the image. 
Pixel Level Analysis 
Labeled pixels tell us where the accessibility problems exist in the image.
Why do we care about image level vs. pixel level?
Coarse to precise: Block Level (Image Level), Sub-block Level, Point Location Level (Pixel Level) 
Pixel level labels could be used for training machine learning algorithms for detection and recognition tasks
TWO ACCESSIBILITY PROBLEM SPECTRUMS 
Different ways of thinking about accessibility problem labels in GSV 
Localization Spectrum (coarse to precise): Block Level (Image Level), Sub-block Level, Point Location Level (Pixel Level) 
Class Spectrum: Binary (Problem / No Problem) to Multiclass (Object in Path, Curb Ramp Missing, Prematurely Ending Sidewalk, Surface Problem)
[Researcher Label Table: each mark is both a multiclass label (Object in Path, Curb Ramp Missing, Sidewalk Ending, Surface Problem, Other) and a binary label (Problem)]
To produce a single ground truth dataset, we used majority vote.
[Researcher Label Table with a Maj. Vote column: for each category (Object in Path, Curb Ramp Missing, Sidewalk Ending, Surface Problem, Other), the majority vote over R1, R2, and R3 determines the ground truth label]
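To make the majority-vote step concrete, here is a minimal Python sketch of image-level label aggregation (the data layout and function names are illustrative, not the study’s actual code):

```python
from collections import Counter

CATEGORIES = ["Object in Path", "Curb Ramp Missing",
              "Sidewalk Ending", "Surface Problem", "Other"]

def majority_vote(rater_labels):
    """Image-level majority vote: rater_labels holds one set of category
    names per researcher; a category survives only if a strict majority
    of researchers marked it."""
    counts = Counter(cat for labels in rater_labels for cat in set(labels))
    quorum = len(rater_labels) / 2.0
    return {cat for cat, n in counts.items() if n > quorum}

# Example: R1 and R2 mark "Object in Path"; only R2 marks "Curb Ramp Missing".
r1 = {"Object in Path"}
r2 = {"Object in Path", "Curb Ramp Missing"}
r3 = set()
print(majority_vote([r1, r2, r3]))  # -> {'Object in Path'}
```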
ASSETS’12 MTURK STUDY METHOD 
Independently posted the 3 labeling interfaces to MTurk; crowdworkers could work with only one interface. 
For training, turkers were required to watch the first 1.5 mins of a 3-min instructional video. 
Hired ~7 workers per image to explore average accuracy. 
Turkers were paid ~3-5 cents per HIT; we varied the number of images per HIT from 1-10.
ASSETS’12 MTURK DESCRIPTIVE RESULTS 
Hired 132 unique workers 
Worked on 2,325 assignments 
Provided a total of 4,309 labels (AVG=1.9/image)
MAIN FINDINGS: IMAGE-LEVEL ANALYSIS 
Average accuracy (higher is better): Point-and-click 83.0%, Outline 82.6%, Rectangle 79.2% 
Median task time in seconds (lower is better): Point-and-click 32.9, Outline 41.5, Rectangle 43.3 
All three interfaces performed similarly. This is without quality control. 
Point-and-click is the fastest: 26% faster than Outline and 32% faster than Rectangle.
ASSETS’12 CONTRIBUTIONS: 
1. Demonstrated that minimally trained crowd workers could locate and categorize sidewalk accessibility problems in GSV images with > 80% accuracy 
2. Showed that point-and-click was the fastest labeling interface, and that outline was faster than rectangle
TODAY’S TALK 
ASSETS’12 Poster: Feasibility study + labeling interface evaluation 
HCIC’13 Workshop: Exploring early solutions to computer vision (CV) 
HCOMP’13 Poster: 1st investigation of CV + crowdsourced verification 
CHI’13: Large-scale turk study + label validation with wheelchair users 
ASSETS’13: Applied to new domain: bus stop accessibility for visually impaired 
UIST’14: Crowdsourcing + CV + “smart” work allocation
CHI’13 GOALS: 
1. Expand the ASSETS’12 study with a larger sample. 
• Examine accuracy as a function of turkers/image 
• Evaluate quality control mechanisms 
• Gain qualitative understanding of failures/successes 
2. Validate researcher ground truth with labels from three wheelchair users
DATASET: EXPANDED TO 229 IMAGES 
Los Angeles, New York, Baltimore, Washington DC
GROUND TRUTH: MAJORITY VOTE OF 3 RESEARCHER LABELS 
How “good” is our ground truth?
IN-LAB STUDY METHOD 
Three wheelchair participants 
Independently labeled 75 of 229 GSV images 
Used think-aloud protocol. Sessions were video recorded 
30-min post-study interview 
We used Fleiss’ kappa to measure agreement between wheelchair users and researchers
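For reference, Fleiss’ kappa for a fixed number of raters per item can be computed as in this sketch; this is a standard implementation of the statistic, not the study’s analysis script:

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa. counts: (n_items, n_categories) array where
    counts[i, j] = number of raters assigning item i to category j;
    every row must sum to the same number of raters n."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum(axis=1)[0]                # raters per item
    p_j = counts.sum(axis=0) / counts.sum()  # overall category proportions
    P_i = (np.square(counts).sum(axis=1) - n) / (n * (n - 1))
    P_bar, P_e = P_i.mean(), np.square(p_j).sum()
    return (P_bar - P_e) / (1 - P_e)

# Toy example: 4 images, 3 categories, 3 raters per image.
print(round(fleiss_kappa([[3, 0, 0], [0, 3, 0], [2, 1, 0], [1, 1, 1]]), 3))
```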
Here is an example recording from the study session
IN-LAB STUDY RESULTS 
Strong agreement (multiclass κ = 0.74) between wheelchair participant labels and researcher labels (ground truth) 
In interviews, one participant mentioned using GSV to explore areas prior to travel
CHI’13 MTURK STUDY METHOD 
Similar to ASSETS’12 but more images (229 vs. 100) and more turkers (185 vs. 132) 
Added crowd verification quality control 
Recruited 28+ turkers per image to investigate accuracy as function of workers
[MTurk task page: “University of Maryland: Help make our sidewalks more accessible for wheelchair users with Google Maps”; worker Kotaro Hara; timer 00:07:00 of 3 hours] 
Labeling Interface 
Verification Interface
CHI’13 MTURK LABELING STATS 
Hired 185 unique workers 
Worked on 7,517 labeling tasks (AVG = 40.6/turker) 
Provided a total of 13,379 labels (AVG = 1.8/image) 
CHI’13 MTURK VERIFICATION STATS 
Hired 273 unique workers 
Provided a total of 19,189 verifications 
Median image labeling time vs. verification time: 35.2 s vs. 10.5 s
CHI’13 MTURK KEY FINDINGS 
81% accuracy without quality control 
93% accuracy with quality control
Some turker labeling successes...
TURKER LABELING EXAMPLES 
Curb Ramp Missing 
Object in Path 
Prematurely Ending Sidewalk 
Surface Problems 
Surface Problems + Object in Path
And now some turker failures…
TURKER LABELING ISSUES 
Overlabeling: some turkers are prone to high false positives. 
Examples: an incorrect No Curb Ramp label; an incorrect Object in Path label (the stop sign is in the grass); an incorrect Surface Problems label (the tree is not actually an obstacle); labels placed on an image with no problems.
3 TURKER MAJORITY VOTE LABEL 
[Turker label table: T1, T2, T3, and Maj. Vote over Object in Path, Curb Ramp Missing, Sidewalk Ending, Surface Problem, Other] 
T3 provides a label of low quality; the majority vote filters it out.
To look into the effect of turker majority vote on accuracy, we had 28 turkers label each image
We had 28 turkers label each image: 
28 groups of 1 
9 groups of 3 
5 groups of 5 
4 groups of 7 
3 groups of 9
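Conceptually, this partitions each image’s 28 labels into disjoint groups of size k and majority-votes within each group; a hedged sketch of the binary version with toy data (names and the vote model are illustrative):

```python
import random

def group_accuracy(worker_votes, ground_truth, k, seed=0):
    """worker_votes[i]: 28 booleans ("image i contains a problem");
    ground_truth[i]: boolean. Shuffle workers, split into len//k
    disjoint groups of size k, majority-vote within each group,
    and return the average accuracy over all groups and images."""
    rng = random.Random(seed)
    correct = total = 0
    for votes, truth in zip(worker_votes, ground_truth):
        votes = votes[:]
        rng.shuffle(votes)
        for g in range(len(votes) // k):
            group = votes[g * k:(g + 1) * k]
            verdict = sum(group) > k / 2.0
            correct += (verdict == truth)
            total += 1
    return correct / total

# Toy data: 3 images, 28 noisy votes each (80% per-worker accuracy).
rng = random.Random(1)
truth = [True, False, True]
votes = [[(rng.random() < 0.8) == t for _ in range(28)] for t in truth]
for k in (1, 3, 5, 7, 9):
    print(k, round(group_accuracy(votes, truth, k), 3))
```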
IMAGE-LEVEL ACCURACY BY GROUP SIZE 
[Bar chart, Average Image-level Accuracy (%); error bars: standard error] 
Multiclass accuracy (4 labels: Object in Path, Curb Ramp Missing, Sidewalk Ending, Surface Problem): 
1 turker (N=28): 78.3% 
3 turkers (N=9): 83.8% 
5 turkers (N=5): 86.8% 
7 turkers (N=4): 86.6% 
9 turkers (N=3): 87.9% 
Binary accuracy (1 label: Problem): 
1 turker: 80.6% 
3 turkers: 86.9% 
5 turkers: 89.7% 
7 turkers: 90.6% 
9 turkers: 90.2% 
Accuracy saturates after 5 turkers (stderr = 0.2%).
EVALUATING QUALITY CONTROL MECHANISMS 
Image-Level, Binary Classification 
1 labeler: 81.2% 
1 labeler, 3 verifiers (majority vote): 85.8% 
1 labeler, 3 verifiers (zero tolerance): 88.1% 
3 labelers (majority vote): 89.3% 
3 labelers (majority vote), 3 verifiers (majority vote): 91.8% 
3 labelers (majority vote), 3 verifiers (zero tolerance): 92.7% 
5 labelers (majority vote): 90.7% 
3 labelers + 3 verifiers = 93%
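The two verification policies compared above differ only in how verifier votes are combined; a minimal sketch (names illustrative):

```python
def keep_label(verifier_votes, policy="majority"):
    """verifier_votes: booleans, True = verifier judged the label correct.
    'majority' keeps a label endorsed by a strict majority of verifiers;
    'zero tolerance' discards it if any single verifier rejects it."""
    if policy == "majority":
        return sum(verifier_votes) > len(verifier_votes) / 2.0
    if policy == "zero tolerance":
        return all(verifier_votes)
    raise ValueError(policy)

print(keep_label([True, True, False], "majority"))        # True: 2 of 3 endorse
print(keep_label([True, True, False], "zero tolerance"))  # False: one rejection
```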
CHI’13 CONTRIBUTIONS: 
1. Extended and reaffirmed findings from ASSETS’12 about the viability of GSV and crowd work for locating and categorizing accessibility problems 
2. Validated our ground truth labeling approach 
3. Assessed simple quality control approaches
TODAY’S TALK 
ASSETS’12 Poster: Feasibility study + labeling interface evaluation 
HCIC’13 Workshop: Exploring early solutions to computer vision (CV) 
HCOMP’13 Poster: 1st investigation of CV + crowdsourced verification 
CHI’13: Large-scale turk study + label validation with wheelchair users 
ASSETS’13: Applied to new domain: bus stop accessibility for visually impaired 
UIST’14: Crowdsourcing + CV + “smart” work allocation
All of the approaches so far relied purely on manual labor, which limits scalability 
Manual Labor & Computation
Automatic Workflow Adaptation for Crowdsourcing 
Lin et al. 2012; Dai et al. 2011; Kamar et al. 2012 
Computer Vision & Street View 
Goodfellow et al., 2014; Chu et al., 2014; Naik et al., 2014
Tohme (遠目, “remote eye”) 
Pipeline: svCrawl (Web Scraper) → Dataset → svDetect (Automatic Curb Ramp Detection) → svControl (Automatic Task Allocation) → svVerify (Manual Label Verification) / svLabel (Manual Labeling) 
DESIGN PRINCIPLES 
1. Computer vision is cheap (zero cost) 
2. Manual verification is far cheaper than manual labeling 
3. Automatic curb ramp detection is hard and error prone 
4. Fixing a false positive is easy; fixing a false negative is hard (requires manual labeling)
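Principles 2 and 4 suggest a simple cost argument behind task allocation: verification is cheap but cannot recover misses, so a scene should go to verification only when the chance that CV missed a curb ramp is low. A toy expected-cost sketch (the per-scene times are the ones reported later in the evaluation; the threshold rule itself is illustrative, not Tohme’s actual allocator, which uses learned predictors):

```python
C_VERIFY, C_LABEL = 42.0, 94.0  # seconds/scene, from the evaluation section

def route(p_miss):
    """If CV likely missed a curb ramp (probability p_miss), verification
    alone cannot fix it and the scene must be manually labeled anyway."""
    expected_verify = C_VERIFY + p_miss * C_LABEL
    return "svVerify" if expected_verify < C_LABEL else "svLabel"

for p in (0.1, 0.5, 0.9):
    print(p, route(p))  # low miss risk -> verify; high miss risk -> label
```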
The “lack of curb cuts is a primary obstacle to the smooth integration of those with disabilities into the commerce of daily life.” 
Kinney et al. vs. Yerusalim & Hoskins, 1993 
3rd Circuit Court of Appeals
“Without curb cuts, people with ambulatory disabilities simply cannot navigate the city” 
Kinney et al. vs. Yerusalim & Hoskins, 1993 
3rd Circuit Court of Appeals
Walking through the pipeline: 
svCrawl (Web Scraper) → Dataset → svDetect (Automatic Curb Ramp Detection) 
Curb ramp detection on a Street View image produces false positives and false negatives (missed curb ramps). 
svControl (Automatic Task Allocation) routes each scene to either svVerify (Manual Label Verification) or svLabel (Manual Labeling). 
Note: svVerify can only fix false positives, not false negatives. That is, there is no way for a worker to add new labels at this stage!
svControl scene features: Complexity, Cardinality, Depth, CV 
Example scene A: Complexity 0.14, Cardinality 0.33, Depth 0.21, CV 0.22 
Example scene B: Complexity 0.82, Cardinality 0.25, Depth 0.96, CV 0.54 
Predict the presence of false negatives with a linear SVM and Lasso regression
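A hedged sketch of how such scene features could feed a linear SVM that predicts whether the CV output contains false negatives; the two test scenes use the feature values from the slide, while the training data and scikit-learn usage are illustrative, not the paper’s code:

```python
import numpy as np
from sklearn.svm import LinearSVC

# One row per scene: [complexity, cardinality, depth, CV detection output]
X_train = np.array([
    [0.1, 0.3, 0.2, 0.2],   # toy training scenes, not real data
    [0.2, 0.4, 0.3, 0.3],
    [0.8, 0.2, 0.9, 0.5],
    [0.9, 0.3, 0.8, 0.6],
])
y_train = np.array([0, 0, 1, 1])  # 1 = CV output likely has false negatives

clf = LinearSVC(C=1.0).fit(X_train, y_train)

scene_a = [0.14, 0.33, 0.21, 0.22]  # example scene A from the slide
scene_b = [0.82, 0.25, 0.96, 0.54]  # example scene B from the slide
for scene in (scene_a, scene_b):
    needs_labeling = bool(clf.predict([scene])[0])
    print("route to", "svLabel" if needs_labeling else "svVerify")
```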
Scraper & Dataset 
Google Street View intersection panoramas and GIS metadata 
3D point-cloud data 
Top-down Google Maps imagery
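As a rough illustration of the scraping step: the public Street View Static API serves rendered views of a panorama by location and heading (svCrawl additionally harvested raw panoramas, depth data, GIS metadata, and top-down imagery). A minimal sketch; the API key and output path are placeholders:

```python
import requests

API = "https://maps.googleapis.com/maps/api/streetview"

def fetch_view(lat, lng, heading, key, out_path):
    """Download one rendered Street View image near an intersection."""
    params = {"size": "640x640", "location": f"{lat},{lng}",
              "heading": heading, "pitch": -10, "fov": 90, "key": key}
    r = requests.get(API, params=params, timeout=30)
    r.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(r.content)

# e.g., four headings per intersection, one view toward each corner:
# for h in (0, 90, 180, 270):
#     fetch_view(38.8977, -77.0365, h, key="YOUR_KEY", out_path=f"view_{h}.jpg")
```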
Washington D.C., Baltimore, Los Angeles, Saskatoon
Scraper & Dataset | Washington D.C.: dense urban area and semi-urban residential areas
Scraper & Dataset (Washington D.C., Baltimore, Los Angeles, Saskatoon) 
Total Area: 11.3 km² 
Intersections: 1,086 
Curb Ramps: 2,877 
Missing Curb Ramps: 647 
Avg. GSV Data Age: 2.2 yrs* 
* At the time of downloading data in summer 2013
How well does GSV data reflect the current state of the physical world?
Dataset | Validating the Dataset 
Physical audit areas: Washington D.C. and Baltimore (273 intersections) 
GSV and physical world: > 97.7% agreement; the small disagreement was due to construction. 
Key Takeaway: Google Street View is a viable source of curb ramp data
AUTOMATIC CURB RAMP DETECTION 
1. Curb Ramp Detection 
2. Post-Processing Output 
3. SVM-Based Classification
Stage 1: Deformable Part Models (Felzenszwalb et al. 2008) 
http://www.cs.berkeley.edu/~rbg/latent/ 
Root filter, parts filter, displacement cost
Automatic Curb Ramp Detection | Example scene 1, detected labels by stage 
Stage 1 (Deformable Part Model): multiple redundant detection boxes. Correct: 1, False Positives: 12, Misses: 0. Curb ramps shouldn’t be in the sky or on roofs. 
Stages 2 and 3 (post-processing, then SVM-based refinement that filters labels by size, color, and position) cut the false positives from 12 to 5 to 3 while keeping the correct detection: final counts Correct: 1, False Positives: 3, Misses: 0.
Automatic Curb Ramp Detection | Example scene 2, detected labels by stage 
Stage 1 (Deformable Part Model): Correct: 6, False Positives: 11, Misses: 1 
Stage 2 (Post-processing): Correct: 6, False Positives: 6, Misses: 1 
Stage 3 (SVM-based Refinement): Correct: 6, False Positives: 4, Misses: 1
Automatic Curb Ramp Detection 
Some curb ramps never get detected, and some false positive detections remain (Correct: 6, False Positives: 4, Misses: 1). 
These false negatives are expensive to correct!
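A schematic sketch of the three-stage flow: stage 1 stands in for the external DPM detector, and the stage 2/3 rules below are simplified illustrations of “no detections in the sky or on roofs” plus redundant-box suppression and size/color/position filtering, not the actual trained models:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    x: int
    y: int
    w: int
    h: int
    score: float  # detector confidence

def stage1_dpm(image):
    """Placeholder for the Deformable Part Model detector (external)."""
    raise NotImplementedError

def _iou(a, b):
    ix = max(0, min(a.x + a.w, b.x + b.w) - max(a.x, b.x))
    iy = max(0, min(a.y + a.h, b.y + b.h) - max(a.y, b.y))
    inter = ix * iy
    union = a.w * a.h + b.w * b.h - inter
    return inter / union if union else 0.0

def stage2_postprocess(dets, horizon_y, overlap=0.5):
    """Drop boxes above the horizon (curb ramps aren't in the sky) and
    suppress redundant overlapping boxes, keeping the highest scoring."""
    dets = [d for d in dets if d.y + d.h > horizon_y]
    kept = []
    for d in sorted(dets, key=lambda d: -d.score):
        if all(_iou(d, k) < overlap for k in kept):
            kept.append(d)
    return kept

def stage3_refine(dets, min_area=400, max_area=40000):
    """Stand-in for the SVM that filters boxes by size, color, and
    position; here, just a plausibility filter on box area."""
    return [d for d in dets if min_area <= d.w * d.h <= max_area]
```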
Used two-fold cross validation to evaluate CV sub-system
Automatic Curb Ramp Detection | COMPUTER VISION SUB-SYSTEM RESULTS 
[Precision-recall curves, 0-100% on both axes: Stage 1: DPM, Stage 2: Post-Processing, Stage 3: SVM] 
Higher precision means fewer false positives; higher recall means fewer false negatives. 
Goal: maximize the area under the curve. 
More than 20% of curb ramps were missed. 
A confidence threshold of -0.99 results in 26% precision and 67% recall.
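Precision and recall here come from matching detections against ground-truth curb ramps; the greedy overlap matching below is a conventional choice, not necessarily the paper’s exact criterion:

```python
def precision_recall(detections, ground_truth, match_fn, thresh=0.5):
    """Greedy one-to-one matching of detections (assumed sorted by
    descending confidence) to ground-truth boxes; match_fn returns an
    overlap score in [0, 1]."""
    matched, tp = set(), 0
    for det in detections:
        best, best_score = None, thresh
        for i, gt in enumerate(ground_truth):
            score = match_fn(det, gt)
            if i not in matched and score >= best_score:
                best, best_score = i, score
        if best is not None:
            matched.add(best)
            tp += 1
    fp = len(detections) - tp
    fn = len(ground_truth) - tp
    precision = tp / (tp + fp) if detections else 1.0
    recall = tp / (tp + fn) if ground_truth else 1.0
    return precision, recall

# Toy usage with 1-D "boxes" (intervals) and overlap fraction as match_fn:
overlap = lambda a, b: max(0, min(a[1], b[1]) - max(a[0], b[0])) / (
    max(a[1], b[1]) - min(a[0], b[0]))
print(precision_recall([(0, 10), (50, 60)], [(1, 9)], overlap))  # (0.5, 1.0)
```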
Automatic Curb Ramp Detection | CURB RAMP DETECTION IS A HARD PROBLEM 
Occlusion, illumination, scale, viewpoint variation, structures similar to curb ramps, and curb ramp design variation
Can we predict difficult intersections & CV performance?
Automatic Task Allocation | Features to Assess Scene Difficulty for CV 
Number of connected streets from metadata 
Depth information for intersection complexity analysis 
Top-down images to assess complexity of an intersection 
Number of detections and confidence values
Manual Labeling | Labeling Interface
Manual Label Verification | Verification Interface
Automatic Detection and Manual Verification 
Automatic Task Allocation 
Can Tohme achieve equivalent or better accuracy at a lower time cost compared to a completely manual approach?
Evaluation | STUDY METHOD: CONDITIONS 
1. Manual labeling (without smart task allocation) 
2. CV + verification (without smart task allocation) 
3. Tohme (遠目)
Evaluation | STUDY METHOD: MEASURES 
Accuracy 
Task completion time
Evaluation | STUDY METHOD: APPROACH 
Recruited workers from MTurk 
Used 1,046 GSV images (40 used for golden insertion)
Evaluation | RESULTS 
Labeling tasks: 242 distinct turkers, 1,270 HITs completed, 6,350 tasks completed, 769 tasks allocated 
Verification tasks: 161 distinct turkers, 582 HITs completed, 4,820 tasks completed, 277 tasks allocated 
We used Monte Carlo simulations for evaluation
Evaluation | Labeling Accuracy and Time Cost (error bars are standard deviations) 
ACCURACY 
Manual labeling: Precision 84%, Recall 88%, F-measure 86% 
CV + manual verification: Precision 68%, Recall 58%, F-measure 63% 
Tohme: Precision 83%, Recall 86%, F-measure 84% 
COST (TIME) 
Task completion time per scene: Manual labeling 94 s, CV + manual verification 42 s, Tohme 81 s 
Tohme matches the accuracy of manual labeling at a 13% reduction in cost
Evaluation | Smart Task Allocator (svControl) 
~80% of svVerify tasks were correctly routed 
~50% of svLabel tasks were correctly routed
Evaluation | Smart Task Allocator (svControl) 
If svControl worked perfectly, Tohme’s cost would drop by 28% relative to a manual labeling approach alone.
Evaluation | Example Labels from Manual Labeling 
[Example labeled scenes; one error: a driveway labeled as a curb ramp. This is a driveway, not a curb ramp.]
Evaluation | Example Labels from CV + Verification 
Raw Street View image → automatic detection (including false detections) → automatic detection + human verification (occasionally, a false verification remains)
UIST’14 CONTRIBUTIONS: 
1. First CV system for automatically detecting curb ramps in images 
2. Showed that automated methods could be used to improve labeling efficiency for curb ramps 
3. Validated GSV as a viable curb ramp dataset
TOWARDS SCALABLE ACCESSIBILITY DATA COLLECTION 
ASSETS’12 Poster: Feasibility study + labeling interface evaluation 
HCIC’13 Workshop: Exploring early solutions to computer vision (CV) 
HCOMP’13 Poster: 1st investigation of CV + crowdsourced verification 
CHI’13: Large-scale turk study + label validation with wheelchair users 
ASSETS’13: Applied to new domain: bus stop accessibility for visually impaired 
UIST’14: Crowdsourcing + CV + “smart” work allocation 
The Future
8,209 intersections in DC 
BACK OF THE ENVELOPE CALCULATIONS 
Manually labeling GSV with our custom interfaces would take 214 hours. 
With Tohme, this drops to 184 hours. 
We think we can do better. 
Unclear how long a physical audit would take.
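These figures follow directly from the per-scene task completion times reported in the evaluation (94 s manual, 81 s Tohme); a quick arithmetic check:

```python
intersections = 8209
for name, secs in [("manual", 94), ("tohme", 81)]:
    print(name, round(intersections * secs / 3600), "hours")
# -> manual 214 hours; tohme 185 hours (the slide reports 184,
#    presumably from unrounded per-scene times)
```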
FUTURE WORK: COMPUTER VISION 
Context integration & scene understanding 
3D-data integration 
Improve training & sample size 
Mensuration
FUTURE WORK: FASTER LABELING & VERIFICATION INTERFACES
FUTURE WORK: TRACK PHYSICAL ACCESSIBILITY CHANGES OVER TIME
FUTURE WORK: ADDITIONAL SURVEYING TECHNIQUES 
Transmits real-time imagery of physical space along with measurements
THE CROWD-POWERED STREETVIEW ACCESSIBILITY TEAM! 
Kotaro Hara 
Jin Sun 
Victoria Le 
Robert Moore 
Sean Pannella 
Jonah Chazan 
David Jacobs 
Jon Froehlich 
Zachary Lawrence 
Graduate Student 
Undergraduate 
High School 
Professor
PHOTO CREDITS 
Flickr user Pedro Rocha: https://www.flickr.com/photos/pedrorocha/3627562740/ 
Flickr user Brooke Hoyer: https://www.flickr.com/photos/brookehoyer/14816521847/ 
Flickr user Jen Rossey: https://www.flickr.com/photos/jenrossey/3185264564/ 
Flickr user Steven Vance: https://www.flickr.com/photos/jamesbondsv/8642938765 
Flickr user Jorge Gonzalez: https://www.flickr.com/photos/macabrephotographer/6225178809/ 
Flickr user Mike Fraser: https://www.flickr.com/photos/67588280@N00/10800029263// 
Flickr user Susan Sermoneta: https://www.flickr.com/photos/en321/344387583/
This work is supported by: 
Faculty Research Award 
Human Computer 
Interaction 
Laboratory 
makeability lab
Human Computer 
Interaction 
Laboratory 
makeability lab 
CHARACTERIZING PHYSICAL WORLD ACCESSIBILITY AT SCALE 
USING CROWDSOURCING, COMPUTER VISION, & MACHINE LEARNING

More Related Content

Similar to Characterizing Physical World Accessibility at Scale Using Crowdsourcing, Computer Vision, & Machine Learning

Using Crowdsourcing, Automated Methods and Google Street View to Collect Side...
Using Crowdsourcing, Automated Methods and Google Street View to Collect Side...Using Crowdsourcing, Automated Methods and Google Street View to Collect Side...
Using Crowdsourcing, Automated Methods and Google Street View to Collect Side...Kotaro Hara
 
06-Crowdsourcing-Sidewalk-Accessibility_compressed.pptx
06-Crowdsourcing-Sidewalk-Accessibility_compressed.pptx06-Crowdsourcing-Sidewalk-Accessibility_compressed.pptx
06-Crowdsourcing-Sidewalk-Accessibility_compressed.pptxalmasilva17
 
Knowledge Graphs and Milestone
Knowledge Graphs and MilestoneKnowledge Graphs and Milestone
Knowledge Graphs and MilestoneBarry Norton
 
HILDA 2023 Keynote Bill Howe
HILDA 2023 Keynote Bill HoweHILDA 2023 Keynote Bill Howe
HILDA 2023 Keynote Bill Howedomoritz
 
The Purdue IronHacks
The Purdue IronHacksThe Purdue IronHacks
The Purdue IronHacksPurdue RCODI
 
New wayfinding system for City of Toronto's underground walkway
New wayfinding system for City of Toronto's underground walkwayNew wayfinding system for City of Toronto's underground walkway
New wayfinding system for City of Toronto's underground walkwayAmy Chong
 
Yinhai Wang - Smart Transportation: Research under Smart Cities Context - GCS16
Yinhai Wang - Smart Transportation: Research under Smart Cities Context - GCS16Yinhai Wang - Smart Transportation: Research under Smart Cities Context - GCS16
Yinhai Wang - Smart Transportation: Research under Smart Cities Context - GCS16KC Digital Drive
 
2013 Talk on Informatics tools for public transport re cities and health
2013 Talk on Informatics tools for public transport re cities and health2013 Talk on Informatics tools for public transport re cities and health
2013 Talk on Informatics tools for public transport re cities and healthPatrick Sunter
 
2010 ITE Annual Meeting and Exhibit
2010 ITE Annual Meeting and Exhibit2010 ITE Annual Meeting and Exhibit
2010 ITE Annual Meeting and ExhibitFranz Loewenherz
 
URBAN TRAFFIC DATA HACK - ROLAND MAJOR
URBAN TRAFFIC DATA HACK - ROLAND MAJORURBAN TRAFFIC DATA HACK - ROLAND MAJOR
URBAN TRAFFIC DATA HACK - ROLAND MAJORBig Data Week
 
Intro to accessibility workshop slides
Intro to accessibility workshop slidesIntro to accessibility workshop slides
Intro to accessibility workshop slidesbillcorrigan
 
DEVELOPMENT OF CONTROL SOFTWARE FOR STAIR DETECTION IN A MOBILE ROBOT USING A...
DEVELOPMENT OF CONTROL SOFTWARE FOR STAIR DETECTION IN A MOBILE ROBOT USING A...DEVELOPMENT OF CONTROL SOFTWARE FOR STAIR DETECTION IN A MOBILE ROBOT USING A...
DEVELOPMENT OF CONTROL SOFTWARE FOR STAIR DETECTION IN A MOBILE ROBOT USING A...IAEME Publication
 
Data Science for Online Services: Problems & Frontiers (Changbal Conference 2...
Data Science for Online Services: Problems & Frontiers (Changbal Conference 2...Data Science for Online Services: Problems & Frontiers (Changbal Conference 2...
Data Science for Online Services: Problems & Frontiers (Changbal Conference 2...Jin Young Kim
 
Apps for Local Government - Esri Dev Summit
Apps for Local Government - Esri Dev SummitApps for Local Government - Esri Dev Summit
Apps for Local Government - Esri Dev SummitAzavea
 
Andrea Mocci: Beautiful Design, Beautiful Coding at I T.A.K.E. Unconference 2015
Andrea Mocci: Beautiful Design, Beautiful Coding at I T.A.K.E. Unconference 2015Andrea Mocci: Beautiful Design, Beautiful Coding at I T.A.K.E. Unconference 2015
Andrea Mocci: Beautiful Design, Beautiful Coding at I T.A.K.E. Unconference 2015Mozaic Works
 
Designing Augmented Reality Experiences for Mobile
Designing Augmented Reality Experiences for MobileDesigning Augmented Reality Experiences for Mobile
Designing Augmented Reality Experiences for MobileTryMyUI
 
2010 WA State ITE Conference
2010 WA State ITE Conference2010 WA State ITE Conference
2010 WA State ITE ConferenceFranz Loewenherz
 
Data Visualization.pptx
Data Visualization.pptxData Visualization.pptx
Data Visualization.pptxSriniRao31
 
WDE08 Visualizing Web of Data
WDE08 Visualizing Web of DataWDE08 Visualizing Web of Data
WDE08 Visualizing Web of DataSatoshi Kikuchi
 

Similar to Characterizing Physical World Accessibility at Scale Using Crowdsourcing, Computer Vision, & Machine Learning (20)

Using Crowdsourcing, Automated Methods and Google Street View to Collect Side...
Using Crowdsourcing, Automated Methods and Google Street View to Collect Side...Using Crowdsourcing, Automated Methods and Google Street View to Collect Side...
Using Crowdsourcing, Automated Methods and Google Street View to Collect Side...
 
06-Crowdsourcing-Sidewalk-Accessibility_compressed.pptx
06-Crowdsourcing-Sidewalk-Accessibility_compressed.pptx06-Crowdsourcing-Sidewalk-Accessibility_compressed.pptx
06-Crowdsourcing-Sidewalk-Accessibility_compressed.pptx
 
Knowledge Graphs and Milestone
Knowledge Graphs and MilestoneKnowledge Graphs and Milestone
Knowledge Graphs and Milestone
 
HILDA 2023 Keynote Bill Howe
HILDA 2023 Keynote Bill HoweHILDA 2023 Keynote Bill Howe
HILDA 2023 Keynote Bill Howe
 
The Purdue IronHacks
The Purdue IronHacksThe Purdue IronHacks
The Purdue IronHacks
 
New wayfinding system for City of Toronto's underground walkway
New wayfinding system for City of Toronto's underground walkwayNew wayfinding system for City of Toronto's underground walkway
New wayfinding system for City of Toronto's underground walkway
 
Yinhai Wang - Smart Transportation: Research under Smart Cities Context - GCS16
Yinhai Wang - Smart Transportation: Research under Smart Cities Context - GCS16Yinhai Wang - Smart Transportation: Research under Smart Cities Context - GCS16
Yinhai Wang - Smart Transportation: Research under Smart Cities Context - GCS16
 
Purdue IronHacks
Purdue IronHacksPurdue IronHacks
Purdue IronHacks
 
2013 Talk on Informatics tools for public transport re cities and health
2013 Talk on Informatics tools for public transport re cities and health2013 Talk on Informatics tools for public transport re cities and health
2013 Talk on Informatics tools for public transport re cities and health
 
2010 ITE Annual Meeting and Exhibit
2010 ITE Annual Meeting and Exhibit2010 ITE Annual Meeting and Exhibit
2010 ITE Annual Meeting and Exhibit
 
URBAN TRAFFIC DATA HACK - ROLAND MAJOR
URBAN TRAFFIC DATA HACK - ROLAND MAJORURBAN TRAFFIC DATA HACK - ROLAND MAJOR
URBAN TRAFFIC DATA HACK - ROLAND MAJOR
 
Intro to accessibility workshop slides
Intro to accessibility workshop slidesIntro to accessibility workshop slides
Intro to accessibility workshop slides
 
DEVELOPMENT OF CONTROL SOFTWARE FOR STAIR DETECTION IN A MOBILE ROBOT USING A...
DEVELOPMENT OF CONTROL SOFTWARE FOR STAIR DETECTION IN A MOBILE ROBOT USING A...DEVELOPMENT OF CONTROL SOFTWARE FOR STAIR DETECTION IN A MOBILE ROBOT USING A...
DEVELOPMENT OF CONTROL SOFTWARE FOR STAIR DETECTION IN A MOBILE ROBOT USING A...
 
Data Science for Online Services: Problems & Frontiers (Changbal Conference 2...
Data Science for Online Services: Problems & Frontiers (Changbal Conference 2...Data Science for Online Services: Problems & Frontiers (Changbal Conference 2...
Data Science for Online Services: Problems & Frontiers (Changbal Conference 2...
 
Apps for Local Government - Esri Dev Summit
Apps for Local Government - Esri Dev SummitApps for Local Government - Esri Dev Summit
Apps for Local Government - Esri Dev Summit
 
Andrea Mocci: Beautiful Design, Beautiful Coding at I T.A.K.E. Unconference 2015
Andrea Mocci: Beautiful Design, Beautiful Coding at I T.A.K.E. Unconference 2015Andrea Mocci: Beautiful Design, Beautiful Coding at I T.A.K.E. Unconference 2015
Andrea Mocci: Beautiful Design, Beautiful Coding at I T.A.K.E. Unconference 2015
 
Designing Augmented Reality Experiences for Mobile
Designing Augmented Reality Experiences for MobileDesigning Augmented Reality Experiences for Mobile
Designing Augmented Reality Experiences for Mobile
 
2010 WA State ITE Conference
2010 WA State ITE Conference2010 WA State ITE Conference
2010 WA State ITE Conference
 
Data Visualization.pptx
Data Visualization.pptxData Visualization.pptx
Data Visualization.pptx
 
WDE08 Visualizing Web of Data
WDE08 Visualizing Web of DataWDE08 Visualizing Web of Data
WDE08 Visualizing Web of Data
 

More from Jon Froehlich

Making in the Human-Computer Interaction Lab (HCIL)
Making in the Human-Computer Interaction Lab (HCIL)Making in the Human-Computer Interaction Lab (HCIL)
Making in the Human-Computer Interaction Lab (HCIL)Jon Froehlich
 
A Brief Overview of the HCIL Hackerspace at UMD
A Brief Overview of the HCIL Hackerspace at UMDA Brief Overview of the HCIL Hackerspace at UMD
A Brief Overview of the HCIL Hackerspace at UMDJon Froehlich
 
The Design and Evaluation of Prototype Eco-Feedback Displays for Fixture-Leve...
The Design and Evaluation of Prototype Eco-Feedback Displays for Fixture-Leve...The Design and Evaluation of Prototype Eco-Feedback Displays for Fixture-Leve...
The Design and Evaluation of Prototype Eco-Feedback Displays for Fixture-Leve...Jon Froehlich
 
Moving Beyond Line Graphs: A (Brief) History and Future of Eco-Feedback Design
Moving Beyond Line Graphs: A (Brief) History and Future of Eco-Feedback DesignMoving Beyond Line Graphs: A (Brief) History and Future of Eco-Feedback Design
Moving Beyond Line Graphs: A (Brief) History and Future of Eco-Feedback DesignJon Froehlich
 
Sensing Opportunities and Zero Effort Applications for Mobile Health Persuasion
Sensing Opportunities and Zero Effort Applications for Mobile Health PersuasionSensing Opportunities and Zero Effort Applications for Mobile Health Persuasion
Sensing Opportunities and Zero Effort Applications for Mobile Health PersuasionJon Froehlich
 
The Design of Eco-Feedback Technology
The Design of Eco-Feedback TechnologyThe Design of Eco-Feedback Technology
The Design of Eco-Feedback TechnologyJon Froehlich
 

More from Jon Froehlich (7)

Making in the Human-Computer Interaction Lab (HCIL)
Making in the Human-Computer Interaction Lab (HCIL)Making in the Human-Computer Interaction Lab (HCIL)
Making in the Human-Computer Interaction Lab (HCIL)
 
A Brief Overview of the HCIL Hackerspace at UMD
A Brief Overview of the HCIL Hackerspace at UMDA Brief Overview of the HCIL Hackerspace at UMD
A Brief Overview of the HCIL Hackerspace at UMD
 
Gamifying Green
Gamifying GreenGamifying Green
Gamifying Green
 
The Design and Evaluation of Prototype Eco-Feedback Displays for Fixture-Leve...
The Design and Evaluation of Prototype Eco-Feedback Displays for Fixture-Leve...The Design and Evaluation of Prototype Eco-Feedback Displays for Fixture-Leve...
The Design and Evaluation of Prototype Eco-Feedback Displays for Fixture-Leve...
 
Moving Beyond Line Graphs: A (Brief) History and Future of Eco-Feedback Design
Moving Beyond Line Graphs: A (Brief) History and Future of Eco-Feedback DesignMoving Beyond Line Graphs: A (Brief) History and Future of Eco-Feedback Design
Moving Beyond Line Graphs: A (Brief) History and Future of Eco-Feedback Design
 
Sensing Opportunities and Zero Effort Applications for Mobile Health Persuasion
Sensing Opportunities and Zero Effort Applications for Mobile Health PersuasionSensing Opportunities and Zero Effort Applications for Mobile Health Persuasion
Sensing Opportunities and Zero Effort Applications for Mobile Health Persuasion
 
The Design of Eco-Feedback Technology
The Design of Eco-Feedback TechnologyThe Design of Eco-Feedback Technology
The Design of Eco-Feedback Technology
 

Recently uploaded

Corporate and higher education May webinar.pptx
Corporate and higher education May webinar.pptxCorporate and higher education May webinar.pptx
Corporate and higher education May webinar.pptxRustici Software
 
Rising Above_ Dubai Floods and the Fortitude of Dubai International Airport.pdf
Rising Above_ Dubai Floods and the Fortitude of Dubai International Airport.pdfRising Above_ Dubai Floods and the Fortitude of Dubai International Airport.pdf
Rising Above_ Dubai Floods and the Fortitude of Dubai International Airport.pdfOrbitshub
 
Exploring Multimodal Embeddings with Milvus
Exploring Multimodal Embeddings with MilvusExploring Multimodal Embeddings with Milvus
Exploring Multimodal Embeddings with MilvusZilliz
 
Modernizing Legacy Systems Using Ballerina
Modernizing Legacy Systems Using BallerinaModernizing Legacy Systems Using Ballerina
Modernizing Legacy Systems Using BallerinaWSO2
 
AI+A11Y 11MAY2024 HYDERBAD GAAD 2024 - HelloA11Y (11 May 2024)
AI+A11Y 11MAY2024 HYDERBAD GAAD 2024 - HelloA11Y (11 May 2024)AI+A11Y 11MAY2024 HYDERBAD GAAD 2024 - HelloA11Y (11 May 2024)
AI+A11Y 11MAY2024 HYDERBAD GAAD 2024 - HelloA11Y (11 May 2024)Samir Dash
 
Six Myths about Ontologies: The Basics of Formal Ontology
Six Myths about Ontologies: The Basics of Formal OntologySix Myths about Ontologies: The Basics of Formal Ontology
Six Myths about Ontologies: The Basics of Formal Ontologyjohnbeverley2021
 
JohnPollard-hybrid-app-RailsConf2024.pptx
JohnPollard-hybrid-app-RailsConf2024.pptxJohnPollard-hybrid-app-RailsConf2024.pptx
JohnPollard-hybrid-app-RailsConf2024.pptxJohnPollard37
 
Artificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : UncertaintyArtificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : UncertaintyKhushali Kathiriya
 
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost SavingRepurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost SavingEdi Saputra
 
TEST BANK For Principles of Anatomy and Physiology, 16th Edition by Gerard J....
TEST BANK For Principles of Anatomy and Physiology, 16th Edition by Gerard J....TEST BANK For Principles of Anatomy and Physiology, 16th Edition by Gerard J....
TEST BANK For Principles of Anatomy and Physiology, 16th Edition by Gerard J....rightmanforbloodline
 
Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...
Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...
Characterizing Physical World Accessibility at Scale Using Crowdsourcing, Computer Vision, & Machine Learning

  • 50. Similar to physical audits, these tools are built for in situ reporting and do not support remote, virtual inquiry, which limits scalability. They are also not designed for accessibility data collection.
  • 51. MARK & FIND ACCESSIBLE BUSINESSES wheelmap.org axsmap.com
  • 52. MARK & FIND ACCESSIBLE BUSINESSES wheelmap.org axsmap.com These tools focus on businesses rather than streets & sidewalks, and their model still requires reporting on places you have visited.
  • 53. Our Approach: Use Google Street View (GSV) as a massive data source for scalably finding and characterizing street-level accessibility
  • 54. HIGH-LEVEL RESEARCH QUESTIONS 1. Can we use Google Street View (GSV) to find street-level accessibility problems? 2. Can we create interactive systems to allow minimally trained crowdworkers to quickly and accurately perform remote audit tasks? 3. Can we use computer vision and machine learning to scale our approach?
  • 55. ASSETS’12 Poster Feasibility study + labeling interface evaluation HCIC’13 Workshop Exploring early solutions to computer vision (CV) HCOMP’13 Poster 1st investigation of CV + crowdsourced verification CHI’13 Large-scale turk study + label validation with wheelchair users ASSETS’13 Applied to new domain: bus stop accessibility for visually impaired UIST’14 Crowdsourcing + CV + “smart” work allocation TOWARDS SCALABLE ACCESSIBILITY DATA COLLECTION
  • 56. ASSETS’12 Poster Feasibility study + labeling interface evaluation HCIC’13 Workshop Exploring early solutions to computer vision (CV) HCOMP’13 Poster 1st investigation of CV + crowdsourced verification CHI’13 Large-scale turk study + label validation with wheelchair users ASSETS’13 Applied to new domain: bus stop accessibility for visually impaired UIST’14 Crowdsourcing + CV + “smart” work allocation TODAY’S TALK
  • 58. ASSETS’12 GOALS: 1. Investigate the viability of reappropriating online map imagery to determine sidewalk accessibility via crowd workers 2. Examine the effect of three different interactive labeling interfaces on task accuracy and duration
  • 60. WEB-BASED LABELING INTERFACE FOUR STEP PROCESS 1. Find and mark accessibility problem 2. Select problem category 3. Rate problem severity 4. Submit completed image
  • 61. WEB-BASED LABELING INTERFACE VIDEO Video shown to crowd workers before they labeled their first image http://youtu.be/aD1bx_SikGo
  • 62. WEB-BASED LABELING INTERFACE VIDEO http://youtu.be/aD1bx_SikGo
  • 63. THREE LABELING INTERFACES Point-and-click Rectangular Outline Polygonal Outline Pixel Granularity
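To make the pixel-granularity differences concrete, here is a minimal sketch of how the three label types might be represented (the dataclass names, fields, and severity range are illustrative assumptions, not the published implementation):

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class PointLabel:                 # point-and-click: one pixel location
    x: int
    y: int
    category: str                 # e.g., "Object in Path"
    severity: int                 # problem severity rating (range assumed)

@dataclass
class RectangleLabel:             # rectangular outline: a bounding box
    x: int                        # top-left corner
    y: int
    width: int
    height: int
    category: str
    severity: int

@dataclass
class PolygonLabel:               # polygonal outline: pixel-precise region
    vertices: List[Tuple[int, int]]
    category: str
    severity: int
```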
  • 64. DATASET: 100 IMAGES from Los Angeles, New York, Baltimore, and Washington DC.
  • 65. DATASET BREAKDOWN: manually curated 100 images from urban neighborhoods in LA, Baltimore, Washington DC, and NYC. Accessibility issues: No Curb Ramp 34, Surface Problem 29, Object in Path 27, Sidewalk Ending 11, No Sidewalk 19. A subset of images was used to evaluate false positive labeling activity. This breakdown is based on majority vote data from 3 independent researcher labels.
  • 68. Our ground truth process
  • 69. What accessibility problems exist in this image?
  • 70. Researcher Label Table: rows for label categories (Object in Path, Curb Ramp Missing, Sidewalk Ending, Surface Problem, Other), columns for researchers R1, R2, R3. For the example image, Researcher 1 contributed 2 labels, Researcher 2 contributed 4, and Researcher 3 contributed 8; their labels are then shown side by side for comparison.
  • 76. There are multiple ways to examine the labels.
  • 77. Image Level Analysis: the researcher label table tells us what accessibility problems exist in the image.
  • 78. Pixel Level Analysis Labeled pixels tell us where the accessibility problems exist in the image.
  • 79. Why do we care about image level vs. pixel level?
  • 80. Localization spectrum, from coarse to precise: Block Level (Image Level), Sub-block Level, Point Location Level (Pixel Level). Pixel-level labels could be used for training machine learning algorithms for detection and recognition tasks.
  • 83. TWO ACCESSIBILITY PROBLEM SPECTRUMS, i.e., different ways of thinking about accessibility problem labels in GSV: the localization spectrum above, and a class spectrum ranging from binary (Problem vs. No Problem) to multiclass (Object in Path, Curb Ramp Missing, Prematurely Ending Sidewalk, Surface Problem).
  • 84. Researcher Label Table: each multiclass label (Object in Path, Curb Ramp Missing, Sidewalk Ending, Surface Problem, Other) collapses to the single binary label, Problem.
  • 85. To produce a single ground truth dataset, we used majority vote.
  • 86. Researcher Label Table with a Maj. Vote column: a category counts toward an image's ground truth when at least two of the three researchers (R1, R2, R3) labeled it.
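A minimal sketch of this image-level majority vote, including the multiclass-to-binary collapse described on the surrounding slides (the data layout is illustrative):

```python
from collections import Counter

def majority_vote(researcher_labels, n_raters=3):
    """researcher_labels: one category set per researcher for one image.
    A category enters ground truth when a strict majority marked it."""
    counts = Counter(c for labels in researcher_labels for c in labels)
    return {c for c, n in counts.items() if n > n_raters / 2}

def to_binary(multiclass_labels):
    """Collapse any problem category into the single binary label."""
    return "Problem" if multiclass_labels else "No Problem"

# Example: R1 and R2 marked an Object in Path; only R3 marked a
# Surface Problem, so it is voted out of the ground truth.
gt = majority_vote([{"Object in Path"}, {"Object in Path"},
                    {"Surface Problem"}])
print(gt, "->", to_binary(gt))   # {'Object in Path'} -> Problem
```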
  • 89. ASSETS’12 MTURK STUDY METHOD Independently posted 3 labeling interfaces to MTurk; crowdworkers could work with only one interface. For training, turkers were required to watch the first 1.5 minutes of the 3-minute instructional video. We hired ~7 workers per image to explore average accuracy. Turkers were paid ~3-5 cents per HIT, and we varied the number of images per HIT from 1 to 10.
  • 90. ASSETS’12 MTURK DESCRIPTIVE RESULTS Hired 132 unique workers Worked on 2,325 assignments Provided a total of 4,309 labels (AVG=1.9/image)
  • 91. MAIN FINDINGS: IMAGE-LEVEL ANALYSIS. Average accuracy (higher is better): Point-and-click 83.0%, Outline 82.6%, Rectangle 79.2%; all three interfaces performed similarly, and this is without quality control. Median task time (lower is better): Point-and-click 32.9 s, Outline 41.5 s, Rectangle 43.3 s; point-and-click is the fastest, 26% faster than Outline and 32% faster than Rectangle.
  • 94. ASSETS’12 CONTRIBUTIONS: 1. Demonstrated that minimally trained crowd workers could locate and categorize sidewalk accessibility problems in GSV images with > 80% accuracy 2. Showed that point-and-click was the fastest labeling interface, though outline was faster than rectangle
  • 95. ASSETS’12 Poster Feasibility study + labeling interface evaluation HCIC’13 Workshop Exploring early solutions to computer vision (CV) HCOMP’13 Poster 1st investigation of CV + crowdsourced verification CHI’13 Large-scale turk study + label validation with wheelchair users ASSETS’13 Applied to new domain: bus stop accessibility for visually impaired UIST’14 Crowdsourcing + CV + “smart” work allocation TODAY’S TALK
  • 97. CHI’13 GOALS: 1. Expand the ASSETS’12 study with a larger sample: • Examine accuracy as a function of turkers/image • Evaluate quality control mechanisms • Gain a qualitative understanding of failures/successes 2. Validate researcher ground truth with labels from three wheelchair users
  • 98. DATASET: EXPANDED TO 229 IMAGES from Los Angeles, New York, Baltimore, and Washington DC.
  • 99. CHI’13 GOALS: 1. Expand the ASSETS’12 study with a larger sample: • Examine accuracy as a function of turkers/image • Evaluate quality control mechanisms • Gain a qualitative understanding of failures/successes 2. Validate researcher ground truth with labels from three wheelchair users
  • 100. GROUND TRUTH: MAJORITY VOTE OF 3 RESEARCHER LABELS. How “good” is our ground truth?
  • 102. IN-LAB STUDY METHOD Three wheelchair participants independently labeled 75 of the 229 GSV images. We used a think-aloud protocol; sessions were video recorded, followed by a 30-min post-study interview. We used Fleiss’ kappa to measure agreement between wheelchair users and researchers.
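Fleiss' kappa generalizes inter-rater agreement to more than two raters (the study reports kappa = 0.74 for the multiclass case below). A small self-contained implementation of the standard formula; the example rating matrix is invented for illustration:

```python
import numpy as np

def fleiss_kappa(ratings):
    """Fleiss' kappa for an (n_items x n_categories) matrix of counts,
    where each row sums to the (fixed) number of raters."""
    ratings = np.asarray(ratings, dtype=float)
    n_items, _ = ratings.shape
    n_raters = ratings[0].sum()
    p_j = ratings.sum(axis=0) / (n_items * n_raters)       # category shares
    P_i = ((ratings ** 2).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    P_bar, P_e = P_i.mean(), (p_j ** 2).sum()
    return (P_bar - P_e) / (1 - P_e)

# Invented example: 4 images rated Problem / No Problem by 6 raters each.
print(round(fleiss_kappa([[6, 0], [4, 2], [2, 4], [0, 6]]), 2))  # 0.47
```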
  • 103. Here is an example recording from the study session
  • 105. IN-LAB STUDY RESULTS Strong agreement (κ_multiclass = 0.74) between wheelchair participants and researcher labels (ground truth). In interviews, one participant mentioned using GSV to explore areas prior to travel.
  • 106. CHI’13 GOALS: 1. Expand the ASSETS’12 study with a larger sample: • Examine accuracy as a function of turkers/image • Evaluate quality control mechanisms • Gain a qualitative understanding of failures/successes 2. Validate researcher ground truth with labels from three wheelchair users
  • 108. CHI’13 MTURK STUDY METHOD Similar to ASSETS’12 but more images (229 vs. 100) and more turkers (185 vs. 132) Added crowd verification quality control Recruited 28+ turkers per image to investigate accuracy as function of workers
  • 109. Labeling Interface, as deployed on MTurk: “University of Maryland: Help make our sidewalks more accessible for wheelchair users with Google Maps” (with a 3-hour task timer).
  • 110. Verification Interface, deployed with the same framing.
  • 112. CHI’13 MTURK LABELING STATS: Hired 185 unique workers, who worked on 7,517 labeling tasks (AVG=40.6/turker) and provided a total of 13,379 labels (AVG=1.8/image). CHI’13 MTURK VERIFICATION STATS: Hired 273 unique workers, who provided a total of 19,189 verifications. Median image labeling time vs. verification time: 35.2 s vs. 10.5 s.
  • 114. CHI’13 MTURK KEY FINDINGS 81% accuracy without quality control 93% accuracy with quality control
  • 115. Some turker labeling successes...
  • 116. TURKER LABELING EXAMPLES: Curb Ramp Missing
  • 118. TURKER LABELING EXAMPLES: Object in Path
  • 120. TURKER LABELING EXAMPLES: Prematurely Ending Sidewalk
  • 122. TURKER LABELING EXAMPLES: Surface Problems
  • 124. TURKER LABELING EXAMPLES: Surface Problems and Object in Path
  • 126. And now some turker failures…
  • 127. TURKER LABELING ISSUES: Overlabeling (some turkers are prone to high false positives). Examples: an incorrect Object in Path label where a stop sign is in the grass, not on the path; a tree labeled as an obstacle although it is not actually one; and Surface Problem labels placed in an image with no problems at all.
  • 133. 3-Turker Majority Vote Label: with turkers T1, T2, and T3 voting across Object in Path, Curb Ramp Missing, Sidewalk Ending, Surface Problem, and Other, a low-quality label from T3 is outvoted by the other two.
  • 134. To look into the effect of turker majority vote on accuracy, we had 28 turkers label each image
  • 135. We had 28 turkers label each image and then formed disjoint voting groups: 28 groups of 1, 9 groups of 3, 5 groups of 5, 4 groups of 7, and 3 groups of 9.
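A sketch of how majority-vote accuracy per group size can be estimated from the 28 collected labels (binary labels and random disjoint grouping are assumed here; the study's exact partitioning may differ):

```python
import random
from collections import Counter

def group_accuracy(labels, ground_truth, group_size, trials=1000):
    """Estimate majority-vote accuracy for disjoint groups of turkers.
    `labels` is the list of 28 binary labels ('Problem'/'No Problem')
    collected for one image; odd group sizes (1, 3, 5, 7, 9) avoid ties."""
    correct = total = 0
    for _ in range(trials):
        pool = random.sample(labels, len(labels))        # shuffled copy
        for g in range(len(pool) // group_size):         # e.g., 9 groups of 3
            group = pool[g * group_size:(g + 1) * group_size]
            vote = Counter(group).most_common(1)[0][0]   # majority answer
            correct += (vote == ground_truth)
            total += 1
    return correct / total
```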
  • 140. Image-Level Accuracy (error bars: standard error). Multiclass accuracy, 4 labels (Object in Path, Curb Ramp Missing, Sidewalk Ending, Surface Problem): 78.3% for 1 turker (N=28), 83.8% for 3 turkers (N=9), 86.8% for 5 turkers (N=5), 86.6% for 7 turkers (N=4), 87.9% for 9 turkers (N=3); accuracy saturates after 5 turkers (standard errors as small as 0.2%). Binary accuracy, 1 label (Problem): 80.6%, 86.9%, 89.7%, 90.6%, and 90.2% for 1, 3, 5, 7, and 9 turkers, respectively.
  • 147. EVALUATING QUALITY CONTROL MECHANISMS (image-level, binary classification): 1 labeler: 81.2%; 1 labeler + 3 verifiers (majority vote): 85.8%; 1 labeler + 3 verifiers (zero tolerance): 88.1%; 3 labelers (majority vote): 89.3%; 3 labelers (majority vote) + 3 verifiers (majority vote): 91.8%; 3 labelers (majority vote) + 3 verifiers (zero tolerance): 92.7%; 5 labelers (majority vote): 90.7%. In short, 3 labelers + 3 verifiers ≈ 93%.
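A sketch of the two verification policies being compared, majority vote versus zero tolerance (function names and the vote encoding are assumptions):

```python
from collections import Counter

def majority(votes):
    """Majority answer among labelers (use an odd count to avoid ties)."""
    return Counter(votes).most_common(1)[0][0]

def verified(label_present, verifier_votes, policy="majority"):
    """Apply crowd verification to one binary label.
    verifier_votes: 1 = verifier agrees the label is correct, 0 = disagrees.
    'majority' keeps the label if most verifiers agree; 'zero' (the zero
    tolerance condition) keeps it only if every verifier agrees."""
    agree = sum(verifier_votes)
    keep = (agree > len(verifier_votes) / 2 if policy == "majority"
            else agree == len(verifier_votes))
    return label_present and keep

# The best combination in the study: 3 labelers (majority vote)
# followed by 3 verifiers (zero tolerance), ~93% accurate.
label = majority([True, True, False])        # -> True
print(verified(label, [1, 1, 1], "zero"))    # True
```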
  • 149. CHI’13 CONTRIBUTIONS: 1. Extended and reaffirmed findings from ASSETS’12 about the viability of GSV and crowd work for locating and categorizing accessibility problems 2. Validated our ground truth labeling approach 3. Assessed simple quality control approaches
  • 150. ASSETS’12 Poster Feasibility study + labeling interface evaluation HCIC’13 Workshop Exploring early solutions to computer vision (CV) HCOMP’13 Poster 1st investigation of CV + crowdsourced verification CHI’13 Large-scale turk study + label validation with wheelchair users ASSETS’13 Applied to new domain: bus stop accessibility for visually impaired UIST’14 Crowdsourcing + CV + “smart” work allocation TODAY’S TALK
  • 152. All of the approaches so far relied purely on manual labor, which limits scalability
  • 153. & Manual Labor Computation
  • 154. Automatic Workflow Adaptation for Crowdsourcing Lin et al. 2012; Dai et al. 2011 ; Kamar et al. 2012
  • 155. Computer Vision & Streetview Goodfellow et al., 2014; Chu et al., 2014; Naik et al., 2014
  • 157. Tohme (遠目, Remote Eye) pipeline: svCrawl (web scraper) → dataset → svDetect (automatic curb ramp detection) → svControl (automatic task allocation) → svVerify (manual label verification) or svLabel (manual labeling). Design Principles: 1. Computer vision is cheap (zero cost). 2. Manual verification is far cheaper than manual labeling. 3. Automatic curb ramp detection is hard and error prone. 4. Fixing a false positive is easy; fixing a false negative is hard (it requires manual labeling).
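Read together, the design principles reduce Tohme's control flow to a few lines. A minimal sketch (all function names are illustrative; the per-scene timings are the medians reported later in the deck):

```python
def tohme(scene, detect, predicts_missed_ramps, crowd):
    """Control-flow sketch of Tohme. svDetect proposes curb ramp labels;
    svControl predicts whether those labels likely miss curb ramps (false
    negatives). Because fixing a false negative requires full manual
    labeling, hard scenes skip straight to svLabel, while easy scenes
    get the much cheaper svVerify."""
    labels = detect(scene)                       # svDetect (zero cost)
    if predicts_missed_ramps(scene, labels):     # svControl
        return crowd.label(scene)                # svLabel: ~94 s/scene
    return crowd.verify(scene, labels)           # svVerify: ~42 s/scene
```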
  • 159. The “lack of curb cuts is a primary obstacle to the smooth integration of those with disabilities into the commerce of daily life.” Kinney et al. vs. Yerusalim & Hoskins, 1993 3rd Circuit Court of Appeals
  • 160. “Without curb cuts, people with ambulatory disabilities simply cannot navigate the city” Kinney et al. vs. Yerusalim & Hoskins, 1993 3rd Circuit Court of Appeals
  • 161. Dataset svDetect Automatic Curb Ramp Detection svCrawl Web Scraper Tohme 遠目 Remote Eye ・
  • 162. svCrawl Web Scraper svDetect Automatic Curb Ramp Detection Tohme 遠目 Remote Eye ・ Curb Ramp Detection on Street View image False positives False negatives = missed curb ramps
  • 163. svCrawl Web Scraper Dataset svDetect Automatic Curb Ramp Detection svControl Automatic Task Allocation Tohme 遠目 Remote Eye ・
  • 164. svCrawl Web Scraper Dataset svDetect Automatic Curb Ramp Detection svControl Automatic Task Allocation svVerify Manual Label Verification Tohme 遠目 Remote Eye ・
  • 165. svCrawl Web Scraper Dataset svDetect Automatic Curb Ramp Detection svControl Automatic Task Allocation svVerify Manual Label Verification Tohme 遠目 Remote Eye ・ svVerify can only fix false positives, not false negatives! That is, there is no way for a worker to add new labels at this stage!
  • 166. svCrawl Web Scraper Dataset svDetect Automatic Curb Ramp Detection svControl Automatic Task Allocation svVerify Manual Label Verification svLabel Manual Labeling Tohme 遠目 Remote Eye ・
  • 169. svControl predicts the presence of false negatives in the CV output with a linear SVM and Lasso regression over scene features. For example, one intersection scored Complexity 0.14, Cardinality 0.33, Depth 0.21, CV 0.22, while a more complex one scored Complexity 0.82, Cardinality 0.25, Depth 0.96, CV 0.54.
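A toy illustration of the svControl idea, assuming scikit-learn is available. The real system combined a linear SVM with Lasso regression over many labeled scenes; this sketch fits only the two example feature vectors from the slide, with invented routing labels:

```python
import numpy as np
from sklearn.svm import LinearSVC

# The two feature vectors shown on the slides: [complexity, cardinality,
# depth, cv-detection score]. Labels are invented for illustration
# (1 = CV output likely misses curb ramps, so send to svLabel).
X = np.array([[0.14, 0.33, 0.21, 0.22],
              [0.82, 0.25, 0.96, 0.54]])
y = np.array([0, 1])

clf = LinearSVC().fit(X, y)
scene = [[0.70, 0.30, 0.90, 0.50]]
route = "svLabel" if clf.predict(scene)[0] == 1 else "svVerify"
print(route)
```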
  • 172. svCrawl Web Scraper Dataset svDetect Automatic Curb Ramp Detection svControl Automatic Task Allocation svVerify Manual Label Verification svLabel Manual Labeling Tohme 遠目 Remote Eye ・
  • 174. Scraper & Dataset: Google Street View intersection panoramas and GIS metadata, 3D point-cloud data, and top-down Google Maps imagery.
  • 175. Four cities: Washington D.C., Baltimore, Los Angeles, and Saskatoon.
  • 176. Scraper & Dataset: Washington D.C. covers both a dense urban area and semi-urban residential areas.
  • 177. Scraper & Dataset totals across Washington D.C., Baltimore, Los Angeles, and Saskatoon (at the time of downloading data in summer 2013): Total Area: 11.3 km², Intersections: 1,086, Curb Ramps: 2,877, Missing Curb Ramps: 647, Avg. GSV Data Age: 2.2 yrs.
  • 178. How well does GSV data reflect the current state of the physical world?
  • 180. Side-by-side: GSV imagery vs. the physical world today.
  • 181. Dataset | Validating the Dataset: we physically audited 273 intersections in Washington D.C. and Baltimore; GSV and the physical world showed > 97.7% agreement, with the small disagreement due to construction.
  • 182. Key Takeaway: Google Street View is a viable source of curb ramp data.
  • 183. svCrawl Web Scraper Dataset svDetect Automatic Curb Ramp Detection svControl Automatic Task Allocation svVerify Manual Label Verification svLabel Manual Labeling Tohme 遠目 Remote Eye ・
  • 185. AUTOMATIC CURB RAMP DETECTION: 1. Curb Ramp Detection 2. Post-Processing Output 3. SVM-Based Classification
  • 186. Automatic Curb Ramp Detection: Deformable Part Models (Felzenszwalb et al. 2008), http://www.cs.berkeley.edu/~rbg/latent/ ; the model combines a root filter, part filters, and displacement costs.
  • 188. Automatic Curb Ramp Detection Multiple redundant detection boxes Detected Labels Stage 1: Deformable Part Model Correct 1 False Positive 12 Miss 0
  • 189. Automatic Curb Ramp Detection Curb ramps shouldn’t be in the sky or on roofs Correct 1 False Positive 12 Miss 0 Detected Labels Stage 1: Deformable Part Model
  • 190. Automatic Curb Ramp Detection Detected Labels Stage 2: Post-processing
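The deck does not spell out the post-processing step, but the two callouts (multiple redundant boxes, curb ramps in the sky or on roofs) suggest non-maximum suppression plus a simple geometric prior. A hedged sketch, not the paper's exact procedure:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / float(area(a) + area(b) - inter)

def post_process(detections, horizon_y, iou_thresh=0.5):
    """detections: list of ((x1, y1, x2, y2), score). Keep the highest-
    scoring box among overlapping ones (non-maximum suppression) and
    reject boxes entirely above the horizon line, since curb ramps
    shouldn't be in the sky or on roofs (image y grows downward)."""
    kept = []
    for box, score in sorted(detections, key=lambda d: -d[1]):
        if box[3] < horizon_y:          # bottom edge above the horizon
            continue
        if all(iou(box, k) < iou_thresh for k, _ in kept):
            kept.append((box, score))
    return kept
```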
  • 191. Automatic Curb Ramp Detection Detected Labels Stage 3: SVM-based Refinement Filter out labels based on their size, color, and position. Correct 1 False Positive 5 Miss 0
  • 192. Automatic Curb Ramp Detection Correct 1 False Positive 3 Miss 0 Detected Labels Stage 3: SVM-based Refinement
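Stage 3 filters detections "based on their size, color, and position". One plausible feature extraction for that SVM (the exact features are assumptions; only the size/color/position criteria come from the slide):

```python
import numpy as np

def box_features(image, box):
    """Candidate features for the stage-3 SVM: normalized size, position,
    and mean color of the detected patch. `image` is an H x W x 3 RGB
    array; `box` is (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    h, w = image.shape[:2]
    mean_rgb = image[y1:y2, x1:x2].reshape(-1, 3).mean(axis=0) / 255.0
    return np.array([(x2 - x1) / w, (y2 - y1) / h,             # size
                     (x1 + x2) / (2 * w), (y1 + y2) / (2 * h),  # position
                     *mean_rgb])                                # color

# A binary SVM trained on such features of true vs. false detections
# would then discard boxes classified as non-curb-ramps, e.g.:
#   clf = sklearn.svm.SVC().fit(train_X, train_y)
#   refined = [b for b in boxes if clf.predict([box_features(img, b)])[0]]
```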
  • 193. Automatic Curb Ramp Detection Correct 6 False Positive 11 Miss 1 Detected Labels Stage 1: Deformable Part Model
  • 194. Automatic Curb Ramp Detection Correct 6 False Positive 6 Miss 1 Detected Labels Stage 2: Post-processing
  • 195. Automatic Curb Ramp Detection Correct 6 False Positive 4 Miss 1 Detected Labels Stage 3: SVM-based Refinement
  • 196. Some curb ramps never get detected False positive detections Automatic Curb Ramp Detection Correct 6 False Positive 4 Miss 1
  • 197. Some curb ramps never get detected False positive detections Automatic Curb Ramp Detection Correct 6 False Positive 4 Miss 1 These false negatives are expensive to correct!
  • 198. Used two-fold cross validation to evaluate CV sub-system
  • 199. COMPUTER VISION SUB-SYSTEM RESULTS: precision-recall curves for Stage 1 (DPM), Stage 2 (Post-Processing), and Stage 3 (SVM). Higher precision means fewer false positives; higher recall means fewer false negatives; the goal is to maximize the area under the curve. Each stage improves the curve, but more than 20% of curb ramps were still missed. We chose a confidence threshold of -0.99, which results in 26% precision and 67% recall.
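For reference, precision and recall here compare detected curb ramps against ground truth, for example with a sketch like this (the match predicate, such as IoU > 0.5, is an assumption; the paper defines its own matching criterion):

```python
def precision_recall(detections, ground_truth, match):
    """Greedy one-to-one matching: each ground-truth curb ramp can be
    claimed by at most one detection. `match(d, g)` is the overlap test,
    e.g. a lambda testing IoU > 0.5."""
    unmatched = list(ground_truth)
    tp = 0
    for d in detections:
        hit = next((g for g in unmatched if match(d, g)), None)
        if hit is not None:
            unmatched.remove(hit)
            tp += 1
    fp = len(detections) - tp        # detections matching nothing
    fn = len(unmatched)              # curb ramps never detected
    precision = tp / (tp + fp) if detections else 1.0
    recall = tp / (tp + fn) if ground_truth else 1.0
    return precision, recall
```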
  • 205. Automatic Curb Ramp Detection: CURB RAMP DETECTION IS A HARD PROBLEM: occlusion, illumination, scale, viewpoint variation, structures similar to curb ramps, and curb ramp design variation.
  • 206. Can we predict difficult intersections & CV performance?
  • 207. svCrawl Web Scraper Dataset svDetect Automatic Curb Ramp Detection svControl Automatic Task Allocation svVerify Manual Label Verification svLabel Manual Labeling Tohme 遠目 Remote Eye ・
  • 209. Automatic Task Allocation | Features to Assess Scene Difficulty for CV: the number of connected streets from metadata; depth information for intersection complexity analysis; top-down images to assess the complexity of an intersection; and the number of detections and their confidence values.
  • 210. svCrawl Web Scraper Dataset svDetect Automatic Curb Ramp Detection svControl Automatic Task Allocation svVerify Manual Label Verification svLabel Manual Labeling Tohme 遠目 Remote Eye ・
  • 212. 3x Manual Labeling | Labeling Interface
  • 213. svCrawl Web Scraper Dataset svDetect Automatic Curb Ramp Detection svControl Automatic Task Allocation svVerify Manual Label Verification svLabel Manual Labeling Tohme 遠目 Remote Eye ・
  • 215. 2x Manual Label Verification
  • 217. Automatic Detection and Manual Verification Automatic Task Allocation Can Tohme achieve equivalent or better accuracy at a lower time cost compared to a completely manual approach?
  • 218. Evaluation | STUDY METHOD: CONDITIONS: Tohme vs. manual labeling without smart task allocation vs. CV + verification without smart task allocation.
  • 219. Evaluation | STUDY METHOD: MEASURES: accuracy and task completion time.
  • 220. Evaluation | STUDY METHOD: APPROACH: Recruited workers from MTurk. Used 1,046 GSV images (40 used for golden insertion).
  • 221. Evaluation | RESULTS (Labeling Tasks vs. Verification Tasks): # of distinct turkers: 242 vs. 161; # of HITs completed: 1,270 vs. 582; # of tasks completed: 6,350 vs. 4,820; # of tasks allocated: 769 vs. 277. We used Monte Carlo simulations for evaluation.
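The Monte Carlo evaluation resamples which of the redundant crowd answers are used, so accuracy estimates do not hinge on one lucky assignment of workers. A minimal sketch (the data layout and scoring function are illustrative):

```python
import random

def monte_carlo_accuracy(answers_per_scene, ground_truth, score, trials=1000):
    """Average workflow accuracy over random draws of crowd answers.
    answers_per_scene: {scene_id: [all crowd answers collected]};
    score(sampled, ground_truth) returns an accuracy measure for one draw."""
    totals = []
    for _ in range(trials):
        sampled = {s: random.choice(a) for s, a in answers_per_scene.items()}
        totals.append(score(sampled, ground_truth))
    return sum(totals) / len(totals)
```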
  • 222. Evaluation | Labeling Accuracy and Time Cost (error bars are standard deviations). ACCURACY: Manual Labeling: precision 84%, recall 88%, F-measure 86%. CV + Manual Verification: precision 68%, recall 58%, F-measure 63%. Tohme: precision 83%, recall 86%, F-measure 84%. COST (TIME) per scene: Manual Labeling 94 s, CV + Manual Verification 42 s, Tohme 81 s, a 13% reduction in cost for Tohme at comparable accuracy.
  • 226. Evaluation | Smart Task Allocator (svControl routing to svVerify or svLabel): ~80% of svVerify tasks were correctly routed; ~50% of svLabel tasks were correctly routed.
  • 227. Evaluation | Smart Task Allocator: if svControl worked perfectly, Tohme’s cost would drop to 28% of a manual labeling approach alone.
  • 228. Evaluation | Example Labels from Manual Labeling: a series of turker-labeled scenes, including one error: this is a driveway, not a curb ramp.
  • 237. Evaluation | Example Labels from CV + Verification: a raw Street View image, the automatic detections (including false detections), and the result after human verification, which removes most false detections although false verifications also occur.
  • 244. UIST’14 CONTRIBUTIONS: 1. First CV system for automatically detecting curb ramps in images 2. Showed that automated methods could be used to improve labeling efficiency for curb ramps 3. Validated GSV as a viable curb ramp dataset
  • 245. TOWARDS SCALABLE ACCESSIBILITY DATA COLLECTION ASSETS’12 Poster Feasibility study + labeling interface evaluation HCIC’13 Workshop Exploring early solutions to computer vision (CV) HCOMP’13 Poster 1st investigation of CV + crowdsourced verification CHI’13 Large-scale turk study + label validation with wheelchair users ASSETS’13 Applied to new domain: bus stop accessibility for visually impaired UIST’14 Crowdsourcing + CV + “smart” work allocation The Future
  • 248. BACK OF THE ENVELOPE CALCULATIONS: 8,209 intersections in DC. Manually labeling GSV with our custom interfaces would take 214 hours; with Tohme, this drops to 184 hours. We think we can do better. It is unclear how long a physical audit would take.
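The arithmetic behind these estimates checks out using the per-scene times reported in the evaluation (94 s manual, 81 s Tohme):

```python
intersections = 8209                # intersections in DC
manual_s, tohme_s = 94, 81          # measured seconds per scene (see above)

manual_hours = intersections * manual_s / 3600   # ~214 hours
tohme_hours = intersections * tohme_s / 3600     # ~185 hours (deck: 184)
print(f"manual: {manual_hours:.0f} h, Tohme: {tohme_hours:.0f} h")
```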
  • 249. FUTURE WORK: COMPUTER VISION Context integration & scene understanding 3D-data integration Improve training & sample size Mensuration
  • 252. FUTURE WORK: FASTER LABELING & VERIFICATION INTERFACES
  • 253. FUTURE WORK: TRACK PHYSICAL ACCESSIBILITY CHANGES OVER TIME
  • 254. FUTURE WORK: ADDITIONAL SURVEYING TECHNIQUES Transmits real-time imagery of physical space along with measurements
  • 255. THE CROWD-POWERED STREETVIEW ACCESSIBILITY TEAM! Kotaro Hara, Jin Sun, Victoria Le, Robert Moore, Sean Pannella, Jonah Chazan, David Jacobs, Jon Froehlich, Zachary Lawrence (graduate students, undergraduates, a high school student, and professors).
  • 256. Flickr User: Pedro Rocha https://www.flickr.com/photos/pedrorocha/3627562740/ Flickr User: Brooke Hoyer https://www.flickr.com/photos/brookehoyer/14816521847/ Flickr User: Jen Rossey https://www.flickr.com/photos/jenrossey/3185264564/ Flickr User: Steven Vance https://www.flickr.com/photos/jamesbondsv/8642938765 Flickr User: Jorge Gonzalez https://www.flickr.com/photos/macabrephotographer/6225178809/ Flickr User: Mike Fraser https://www.flickr.com/photos/67588280@N00/10800029263// PHOTO CREDITS Flickr User: Susan Sermoneta https://www.flickr.com/photos/en321/344387583/
  • 257. This work is supported by: Faculty Research Award Human Computer Interaction Laboratory makeability lab
  • 258. Human Computer Interaction Laboratory makeability lab CHARACTERIZING PHYSICAL WORLD ACCESSIBILITY AT SCALE USING CROWDSOURCING, COMPUTER VISION, & MACHINE LEARNING