This talk was given as part of the Human-Computer Interaction Institute seminar series at Carnegie Mellon University. My host was Professor Jeffrey Bigham. More info here: https://www.hcii.cmu.edu/news/seminar/event/2014/10/characterizing-physical-world-accessibility-scale-using-crowdsourcing-computer-vision-machine-learning
You can download the original PowerPoint deck with videos here:
http://www.cs.umd.edu/~jonf/talks.html
Abstract: Roughly 30.6 million individuals in the US have physical disabilities that affect their ambulatory activities; nearly half of those individuals report using an assistive aid such as a wheelchair, cane, crutches, or walker. Despite comprehensive civil rights legislation, many city streets, sidewalks, and businesses remain inaccessible. The problem is not just that street-level accessibility affects where and how people travel in cities but also that there are few, if any, mechanisms to determine accessible areas of a city a priori.
In this talk, I will describe our research developing novel, scalable data-collection methods for acquiring accessibility information about the built environment using a combination of crowdsourcing, computer vision, and online map imagery (e.g., Google Street View). Our overarching goal is to transform the ways in which accessibility information is collected and visualized for every sidewalk, street, and building façade in the world. This work is in collaboration with University of Maryland Professor David Jacobs and graduate students Kotaro Hara and Jin Sun along with a number of undergraduate students and high school interns.
Hakim Robinson Resume, post-undergrad revision, by Hakim Robinson
Hakim Robinson has over 10 years of experience in film production. He has worked as a director, producer, director of photography, camera operator, sound department, and production roles on over 20 projects. His experience includes working on feature films, television shows, music videos, and shorts. He has a Bachelor's degree in Television Producing from The Savannah College of Art and Design and certifications in audio production and camera operation.
This document provides lessons and tips for teaching online journalism skills to students. It discusses creating mobile-first news sites that are constantly updated with social integration and multimedia. It provides ideas for mobile, 24-hour news, multimedia, experimental, and social media lessons. Some examples include having students report remotely using apps, editing short news updates, shooting and editing brief videos, finding new platforms to cover news, and writing mock tweets about events. The document also offers tips for shooting and editing video and audio interviews, such as planning ahead, testing equipment, capturing b-roll, and editing tightly. Overall, it aims to teach students the skills needed to engage in digital and mobile journalism.
SmartWheels | Mapping for Accessibility by Susan Oldham
Every day, wheelchair users navigating city streets, campuses, and pathways face obstacles due to inaccessible routes and changing terrain. Now, with SmartWheels, they can rely on real-time, personalized route directions.
The power of the SmartWheels system is that the wheelchair itself becomes an internet-connected sensor suite for the user. The wheelchair utilizes terrain-sensing technology, which determines the type of surface underneath the wheels as well as grade, elevation, and surrounding curbs and obstacles. Users can enter additional information about the environment to provide context to the collected data.
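SmartWheels' actual classification pipeline is not detailed here, but the terrain-sensing idea can be illustrated with a toy sketch: rough surfaces shake the chair more, so the variance of vertical acceleration separates surface types. Everything below (function name, threshold, signal values) is a hypothetical illustration, not the real system:

```python
import numpy as np

def classify_surface(accel_z, smooth_var=0.05):
    # Toy heuristic: rough terrain produces larger vertical-acceleration
    # variance than smooth pavement. The threshold is invented.
    return "smooth" if np.var(accel_z) < smooth_var else "rough"

rng = np.random.default_rng(0)
pavement = 9.8 + 0.01 * rng.standard_normal(200)  # barely any vibration
gravel = 9.8 + 1.0 * rng.standard_normal(200)     # strong vibration
print(classify_surface(pavement), classify_surface(gravel))  # smooth rough
```

A real system would combine many more signals (wheel encoders, grade, GPS), but the same idea applies: extract features from the sensor stream, then map them to surface labels.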
SmartWheels is an integrated system composed of a SmartWheels Sensor Kit, a smartphone app, and crowd-sourced accessibility maps viewable in any browser. The Connector Kit can be installed on any wheelchair, adding Internet and sensor capabilities directly to the chair. Once the kit is installed, users can track their route data and get or give recommendations for improved routes.
SmartWheels was designed by a collective of Human Centered Design & Engineering (HCDE) students at the University of Washington who seek to solve real-life accessibility problems. Luke Easterwood, Susan Oldham, Annuska Perkins, and Tristan Plank created SmartWheels as part of an Interaction Design class taught by Kelly Franznick of Blink Interaction at UW.
Applying Iterative Design to the Eco-Feedback Design Process by Jon Froehlich
Although randomized controlled trials are the gold standard in evaluating the effectiveness of eco-feedback systems on reducing consumption behaviors, such trials are resource intensive and costly. As such, it is crucial that the intervention—the eco-feedback artifact—is well designed before effort is invested in a longitudinal study.
In this talk, I will discuss the application of iterative design to eco-feedback systems. Iterative design is a design methodology based on a cyclic process of prototyping, user testing, and analysis, the results of which are then used to inform a new round of prototyping (and the cycle continues). Through an 18-month design process of a prototype eco-feedback display (Froehlich, 2011), I will describe how iterative design was used to evaluate and refine the aesthetic, usability, understandability, and educational potential of an eco-feedback system before a field deployment. I will highlight the role of massive online surveys in evaluating early eco-feedback design ideas and the role of in-home interviews in evaluating higher-fidelity (more refined) designs. Finally, I will close the talk with a discussion of low-cost methods to deploy and test eco-feedback designs in the field even when underlying resource sensing systems (e.g., smart meters) are unavailable. These methods can be used to evaluate how the eco-feedback system may fit into domestic space, explore differences in perspective and preference across household members, and evaluate how the system affects household dynamics (e.g., if the design provokes privacy concerns) before behavioral trials are conducted in earnest.
Froehlich, J. (2011). Sensing and Feedback of Everyday Activities to Promote Environmental Behaviors. University of Washington Doctoral Dissertation 2011. http://www.cs.umd.edu/~jonf/publications.html
Social Fabrics: Designing Wearable E-Textiles for Interaction, Introspection,... by Jon Froehlich
You can see a video recording of this talk online: https://www.youtube.com/watch?v=DwnZmJUybY4.
You can download the original PowerPoint slide deck with videos here: http://www.cs.umd.edu/~jonf/talks.html
Talk Abstract: Advances in electronic textiles (e-textiles), embedded computing, and biometric sensing enable new types of wearable interactive experiences. In this talk, I will introduce three e-textile projects from my research group: BodyVis, Social Fabric Fitness, and ILikeThisShirt.com, which explore how computational clothing can be used to facilitate group interaction, provoke self-inquiry, and stimulate introspection.
Background: I gave this talk at the National Academy of Sciences' "DC Art Science Evening Rendezvous" (DASER) at the Keck Center. The evening's theme was "technology and creativity," highlighted by the opening of University of Maryland Computer Science Professor Ben Shneiderman's "Every AlgoRiThm has ART in it: Treemap Art Project." In addition to Shneiderman and myself, the other speakers included Manuel Lima, a designer, author, researcher, and lecturer, and Jonah Brucker-Cohen, an assistant professor of digital media and networked culture at Lehman College, City University of New York.
For more information, see:
* http://www.cpnas.org/press/announcements/treemapfinalrelease.pdf
* https://www.eventbrite.com/e/dc-art-science-evening-rendezvous-daser-tickets-11950067975
* http://www.cpnas.org/events/experience-future-events-daser.html
This document contains a list of 200 embedded system projects from various domains like general embedded systems, gesture based systems, vehicular technology, ARM controller based, CAN bus based, Android based, and greenhouse monitoring systems. Each project is assigned a unique identification code and title. The document also provides contact information for Hades InfoTech, the organization providing these project options.
Using Crowdsourcing, Automated Methods and Google Street View to Collect Side... by Kotaro Hara
In this presentation, I describe a system that uses crowdsourcing, computer vision, machine learning, and Google Street View to collect sidewalk accessibility data.
Crowdsourcing can be used to effectively identify street-level accessibility problems in Google Street View images. Researchers conducted three studies: (1) researchers labeled images themselves and achieved moderate to substantial inter-rater agreement, showing the task can be performed consistently; (2) wheelchair users largely agreed with the researchers' labels; (3) Mechanical Turk workers achieved 81% accuracy without quality control and 93% accuracy with validation, showing that crowds can perform this task. Using multiple Turk judgments increased accuracy over single judgments. This method allows accessibility problems to be identified and mapped at scale.
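The aggregation step mentioned above (combining multiple Turk judgments) can be sketched as a simple majority vote. This is only an illustrative reduction of the study's quality-control approach; the label strings below are hypothetical placeholders, not the study's actual taxonomy:

```python
from collections import Counter

def aggregate_labels(judgments):
    # Majority vote over independent worker labels for one image location;
    # returns the winning label and the fraction of workers who chose it.
    label, votes = Counter(judgments).most_common(1)[0]
    return label, votes / len(judgments)

# Three workers label the same Street View location; two agree.
label, agreement = aggregate_labels(
    ["curb_ramp_missing", "curb_ramp_missing", "surface_problem"]
)
print(label, round(agreement, 2))  # curb_ramp_missing 0.67
```

The agreement ratio doubles as a cheap confidence score: low-agreement locations can be routed to additional workers or to validation, which is the general shape of the multi-judgment improvement reported above.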
This document summarizes a presentation given by Dr. Barry Norton on knowledge graphs for data fusion. Some key points discussed include:
- Knowledge graphs can integrate data from various sources like video analytics, access control, sensors and background information to analyze related events.
- Milestone's video management software has the capability to recognize individuals across camera streams and correlate suspicious access control events with later cybersecurity incidents using a knowledge graph approach.
- The presentation discusses the history and applications of knowledge graphs, highlighting how they can provide benefits for security, transportation and other use cases when combined with video and sensor data from an Internet of Things environment.
OpenAI and other large AI companies are lobbying for regulation in the US to create barriers that maintain their competitive advantage. However, open source models are becoming increasingly competitive through techniques like training on smaller specialized datasets, low-rank parameterization, and quantization. Progress in AI will be driven more by the curation and management of specialized, minimal, modular datasets for training and evaluation, which provides an opportunity for the data management community. Curation, rather than model size, will determine success by enabling specialized models trained on trusted data to produce correct, verifiable results.
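As one minimal, concrete illustration of the quantization technique mentioned above: symmetric post-training int8 quantization stores weights as 8-bit integers plus a single floating-point scale, cutting memory roughly 4x versus float32. This NumPy sketch is illustrative only, not any particular library's implementation:

```python
import numpy as np

def quantize_int8(w):
    # Symmetric per-tensor quantization: map [-max|w|, max|w|] onto [-127, 127].
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.01], dtype=np.float32)
q, scale = quantize_int8(w)
# Round-trip error is bounded by half a quantization step (scale / 2).
assert np.max(np.abs(w - dequantize(q, scale))) <= scale / 2 + 1e-6
```

Production schemes add refinements (per-channel scales, zero points, calibration data), but the core trade of precision for memory and bandwidth is exactly this.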
The Purdue IronHacks are the world's first virtual Open Data Hacks. Read more about our work in turning open data into novel and useful applications for the public!
New wayfinding system for City of Toronto's underground walkway by Amy Chong
The team was tasked with redesigning the wayfinding system for the PATH, a 30km underground pedestrian walkway network in Toronto. The current system was poorly designed and led users to frequently get lost. Through user research, the team found navigation was difficult due to poorly placed signs and a lack of connections to street-level locations. The new design features consistent, location-based signs with route, street intersection, and accessibility information to help users intuitively navigate between buildings and access their destinations without getting lost. A virtual simulation confirmed participants could reach destinations without wrong turns using the new system.
Yinhai Wang - Smart Transportation: Research under Smart Cities Context - GCS16 by KC Digital Drive
This document discusses challenges and opportunities in smart transportation research under smart cities. It outlines how transportation is a major issue impacting environment, safety, and public health. Smart cities and transportation big data can help address key issues of efficiency, sustainability and safety through data analytics. The document presents examples of extracting transportation data from mobile networks and DRIVE Net, a system for data sharing, visualization, and analysis to support e-science investigations in transportation. Research needs include developing methods to utilize spatial and temporal big data to support analysis and decision making in transportation.
Read about the first ever virtual open data hack where developers turn open data into novel and useful citizen applications, and how you can get involved!
2013 Talk on Informatics tools for public transport re cities and health by Patrick Sunter
A presentation at the 2013 meeting of the UniMelb-based "Transport, Health & Chronic Diseases Research Network" on 13 Nov 2013 (see http://cwhgs.unimelb.edu.au/knowledge/knowledge). Talk title: 'Some Remarks on Issues around Data and Tools for Understanding Public Transport Networks from My PhD Work'.
Roland is currently working with TfL on the Surface Intelligent Transport System, which aims to improve the insight available from existing and new data sources. He has worked on event-driven architectures for many years and across many sectors, with a primary focus on transport.
Intro to accessibility workshop slides by billcorrigan
Slides from a workshop in April 2014 focused on tools and techniques for making sites and content accessible to everyone, regardless of disability. The deck starts with an introduction, moves into a description of accessibility, and concludes with a look at what developers can do to avoid problems.
DEVELOPMENT OF CONTROL SOFTWARE FOR STAIR DETECTION IN A MOBILE ROBOT USING A... by IAEME Publication
In this paper, our main aim is to design and develop the control software for the detection of, and alignment with, stairs by a manually operated stair-climbing robot. The robot platform is a differential drive with a skid-steering system, mounted on a rugged chassis. Vision sensors (cameras) mounted on the robot provide motion images of its surroundings. The application software applies image processing and artificial intelligence techniques to detect stairs in real time and align the robot at an appropriate distance from the stair. Canny edge detection is used to find the edges of the stairs after smoothing the image and removing noise. A neural network is used to detect stairs and faults, and machine learning is applied to overcome faults in stairs by acting on saved experience. The result will be a Linux-based application built on the OpenCV API.
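The paper builds on OpenCV's Canny detector; as a dependency-light sketch of the underlying edge-detection idea, the gradient-magnitude (Sobel) stage below marks pixels where brightness changes sharply, as it would along a stair edge. This is a simplified stand-in for Canny, not the authors' code:

```python
import numpy as np

def sobel_edges(img, thresh=1.0):
    # Convolve with Sobel kernels and threshold the gradient magnitude.
    # (Canny adds smoothing, non-maximum suppression, and hysteresis.)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    mag = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            mag[i, j] = np.hypot(np.sum(patch * kx), np.sum(patch * ky))
    return mag > thresh

# A vertical step edge: left half dark, right half bright.
img = np.zeros((5, 6))
img[:, 3:] = 1.0
edges = sobel_edges(img)
assert edges[2, 2] and edges[2, 3]          # responses along the step
assert not edges[2, 1] and not edges[2, 0]  # flat region and border stay quiet
```

In a stair image, the tread edges produce a stack of roughly parallel horizontal edge responses, which is what the subsequent detection and alignment stages would look for.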
Andrea Mocci: Beautiful Design, Beautiful Coding at I T.A.K.E. Unconference 2015 by Mozaic Works
The document describes a software visualization approach called CodeCity that represents software systems as 3D cities. It presents results from a controlled experiment evaluating CodeCity on program comprehension tasks. The experiment involved 41 participants from academia and industry and found that CodeCity led to a statistically significant increase in task correctness and decrease in completion time compared to traditional tools. The document discusses lessons learned from designing and conducting the experiment and concludes that the city metaphor visualization provided benefits for program comprehension.
Designing Augmented Reality Experiences for Mobile by TryMyUI
Professor Ed Johnston discusses designing augmented reality experiences for mobile. He provides an overview of marker-based and geolocational augmented reality. Johnston summarizes an augmented reality tour he created of Asbury Park. He also discusses challenges with augmented reality experiences including balancing user experience and details, project timelines, and long-term support. Johnston shares compelling statistics on augmented reality adoption and believes higher education will adopt augmented and virtual reality within 2-3 years.
The Americans with Disabilities Act of 1990 (ADA) is a landmark civil rights law that prohibits discrimination based on disability. Title II of the ADA requires state and local governments to make their programs and services accessible to persons with disabilities. This requirement extends not only to physical access at government facilities, programs, and events, but also to pedestrian facilities in public rights-of-way. To comply with the ADA, every state and local government is required to prepare a self-evaluation report to identify program access issues. From this, a transition plan is required, with a schedule identifying corrective measures to achieve a barrier-free environment. In 2008, Bellevue undertook an ADA sidewalk and curb ramp self-evaluation update to assess its program responsibilities for existing pedestrian facilities in the public rights-of-way. The City employed innovative technologies to document barriers and prioritize improvements where most needed. Implementation of this technology development and compliance effort involved a coordinated staffing and funding commitment from the City of Bellevue, the Federal Highway Administration, and King County, with technical support from Starodub Inc., an engineering services firm. The technical precision offered by Bellevue's approach is identified as a best practice in ADA Compliance at Transportation Agencies: A Review of Practices (NCHRP 20-07 Task 249), a Texas Transportation Institute study. The report notes that "[e]fforts such as those at the City of Bellevue, Washington, that rely on the collection of large datasets at extremely fine spatial and temporal disaggregation levels have the potential to significantly automate the identification of non-compliant locations in the field."
This document discusses data visualization. It defines data visualization as turning information into a visual landscape that is easier for the human brain to process than text. Good data visualization communicates clearly, meets audience needs, and tells the truth. It provides examples of common visualization types like charts and graphs and best practices like using the right colors. The document also discusses popular visualization tools and provides real-world examples of data visualization.
This document discusses various projects undertaken by Stamen Design to visualize web data. It describes projects like MoveOn.org's Virtual Town Hall that mapped participants in online political discussions. It also discusses projects with Digg, Trulia and SFMOMA that visualized social media activity, real estate data and art collections in novel digital formats. Throughout, it emphasizes concepts like live, vast and deep data and explores ways to represent complex information through interactive maps and other visualizations.
Making in the Human-Computer Interaction Lab (HCIL) by Jon Froehlich
You can download the PowerPoint file with embedded movies here: http://www.cs.umd.edu/~jonf/talks.html
----------------
In the HCIL's Makeability Lab at the University of Maryland, we design interactive experiences that cross between bits and atoms—the virtual and the physical—and back again to confront some of the world's greatest challenges: environmental sustainability, health and wellness, and universal accessibility.
In my talk, I'll begin with an overview of the "Maker ethos" and the rise of Maker/DIY culture. I'll then discuss "Making" at the University of Maryland before shifting to how the HCIL (Human-Computer Interaction Lab) has begun introducing Maker tools and projects in research (e.g., [1–4]) and in the classroom, including an introduction to our new(ish) HCIL Hackerspace. The talk closes with an overview of my Tangible Interactive Computing classes and how I've attempted to imbue them with a "Maker" and design-studio spirit. At the end, I hope to prompt discussion about the future of physical computing and making, and where university education fits in.
REFERENCES
[1] Hara, K., Le, V. and Froehlich, J. 2013. Combining Crowdsourcing and Google Street View to Identify Street-level Accessibility Problems. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’13) (New York, NY, USA, May 2013).
[2] Hara, K., Azenkot, S., Campbell, M., Bennett, C., Le, V., Pannella, S., Moore, R., Minckler, K., Ng, R. and Froehlich, J. 2013. Improving Public Transit Accessibility for Blind Riders by Crowdsourcing Bus Stop Landmark Locations with Google Street View. Proceedings of the 15th International ACM SIGACCESS Conference on Computers and Accessibility (2013), 16:1–16:8.
[3] Mauriello, M., Gubbels, M. and Froehlich, J. 2014. Social Fabric Fitness: The Design and Evaluation of Wearable E-Textile Displays to Support Group Running. SIGCHI Conference on Human Factors in Computing Systems (CHI ’14) (2014).
[4] Norooz, L. and Froehlich, J. 2013. Exploring early designs for teaching anatomy and physiology to children using wearable e-textiles. Proceedings of the 12th International Conference on Interaction Design and Children - IDC ’13 (New York, New York, USA, Jun. 2013), 577–580.
A Brief Overview of the HCIL Hackerspace at UMD by Jon Froehlich
A brief overview of the HCIL Hackerspace at the University of Maryland, started by Computer Science Assistant Professor Jon Froehlich in 2012. The Hackerspace is within the Human-Computer Interaction Lab (HCIL), one of the oldest HCI research labs in the world, founded by Professor Shneiderman in 1983. This slide deck also includes a few pictures of other HCIL spaces, including the main lab, hallways, office space, and the usability lab. The design of the main lab and hallway space (e.g., the tangible timeline on the wall) was led by Professor Allison Druin.
Andrea Mocci: Beautiful Design, Beautiful Coding at I T.A.K.E. Unconference 2015Mozaic Works
The document describes a software visualization approach called CodeCity that represents software systems as 3D cities. It presents results from a controlled experiment evaluating CodeCity on program comprehension tasks. The experiment involved 41 participants from academia and industry and found that CodeCity led to a statistically significant increase in task correctness and decrease in completion time compared to traditional tools. The document discusses lessons learned from designing and conducting the experiment and concludes that the city metaphor visualization provided benefits for program comprehension.
Designing Augmented Reality Experiences for MobileTryMyUI
Professor Ed Johnston discusses designing augmented reality experiences for mobile. He provides an overview of marker-based and geolocational augmented reality. Johnston summarizes an augmented reality tour he created of Asbury Park. He also discusses challenges with augmented reality experiences including balancing user experience and details, project timelines, and long-term support. Johnston shares compelling statistics on augmented reality adoption and believes higher education will adopt augmented and virtual reality within 2-3 years.
The Americans with Disabilities Act 1990 (ADA), is a landmark civil rights law that prohibits discrimination based on disability. Title II of the ADA requires state and local governments to make their programs and services accessible to persons with disabilities. This requirement extends not only to physical access at government facilities, programs and events — but also to pedestrian facilities in public rights-of-way. To comply with the ADA, every state and local government is required to prepare a self-evaluation report to identify program access issues. From this, a transition plan is required, with a schedule identifying corrective measures to achieve a barrier-free environment. In 2008, Bellevue undertook an ADA sidewalk and curb ramp self-evaluation update to assess its program responsibilities for existing pedestrian facilities in the public rights-of-way. The City employed innovative technologies to document barriers and prioritize improvements where most needed. Implementation of this technology development and compliance effort involved a coordinated staffing and funding commitment from the City of Bellevue, Federal Highway Administration and King County, with technical support from Starodub Inc., an engineering services firm. The technical precision offered by Bellevue’s approach is identified as a best practice in ADA Compliance at Transportation Agencies: A Review of Practices (NCHRP 20-07 Task 249), a Texas Transportation Institute study. The report notes that “[e]fforts such as those at the City of Bellevue, Washington, that rely on the collection of large datasets at extremely fine spatial and temporal disaggregation levels have the potential to significantly automate the identification of non-compliant locations in the field.”
This document discusses data visualization. It defines data visualization as turning information into a visual landscape that is easier for the human brain to process than text. Good data visualization communicates clearly, meets audience needs, and tells the truth. It provides examples of common visualization types like charts and graphs and best practices like using the right colors. The document also discusses popular visualization tools and provides real-world examples of data visualization.
This document discusses various projects undertaken by Stamen Design to visualize web data. It describes projects like MoveOn.org's Virtual Town Hall that mapped participants in online political discussions. It also discusses projects with Digg, Trulia and SFMOMA that visualized social media activity, real estate data and art collections in novel digital formats. Throughout, it emphasizes concepts like live, vast and deep data and explores ways to represent complex information through interactive maps and other visualizations.
Similar to Characterizing Physical World Accessibility at Scale Using Crowdsourcing, Computer Vision, & Machine Learning (20)
Characterizing Physical World Accessibility at Scale Using Crowdsourcing, Computer Vision, & Machine Learning
1. Human Computer Interaction Laboratory | makeability lab
CHARACTERIZING PHYSICAL WORLD ACCESSIBILITY AT SCALE USING CROWDSOURCING, COMPUTER VISION, & MACHINE LEARNING
30. The National Council on Disability noted that there is no comprehensive information on “the degree to which sidewalks are accessible” in cities.
National Council on Disability, 2007
The impact of the Americans with Disabilities Act: Assessing the progress toward achieving the goals of the ADA
31. The lack of street-level accessibility information can have a significant impact on the independence and mobility of citizens
cf. Nuernberger, 2008; Thapar et al., 2004
“I usually don’t go where I don’t know [about accessible routes]”
-P3, congenital polyneuropathy
33. “Man in Wheelchair Hit By Vehicle Has Died From Injuries”
-The Aurora, May 9, 2013
39. How might a tool like AccessScore:
Change the way people think about and understand their neighborhoods?
Influence property values?
Impact where people choose to live?
Change how governments/citizens make decisions about infrastructural investments?
40. AccessScore would not change how people navigate the city; for that, we need a different tool…
42. ACCESSIBILITY-AWARE NAVIGATION SYSTEMS (interface mockup)
Routing for: Manual Wheelchair
1st of 3 Suggested Routes: 16 minutes, 0.7 miles, 1 obstacle (Route 1 shown from A to B, with Route 2 as an alternative)
Flagged Surface Problem, Avg Severity: 3.6 (Hard to Pass)
Recent Comments: “Obstacle is passable in a manual chair but not in a motorized chair”
45. TRADITIONAL WALKABILITY AUDITS
Safe Routes to School Walkability Audit, Rock Hill, South Carolina
Walkability Audit, Wake County, North Carolina
50. Similar to physical audits, these tools are built for in situ reporting and do not support remote, virtual inquiry—which limits scalability
Not designed for accessibility data collection
51. MARK & FIND ACCESSIBLE BUSINESSES
wheelmap.org
axsmap.com
52. Limitations: these tools focus on businesses rather than streets & sidewalks, and the model is still to report on places you’ve visited
53. Our Approach: Use Google Street View (GSV) as a massive data source for scalably finding and characterizing street-level accessibility
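As a rough sketch of what drawing on GSV as a data source involves, the public Street View Static API returns an image for a given location and camera heading. The helper below only builds the request URL; the coordinates and API key are placeholders, and the pitch default is an assumption chosen to frame sidewalks rather than the horizon:

```python
from urllib.parse import urlencode

# Base endpoint of the Google Street View Static API.
STREETVIEW_ENDPOINT = "https://maps.googleapis.com/maps/api/streetview"

def streetview_url(lat, lng, heading=0, pitch=-10, size="640x640", key="YOUR_API_KEY"):
    """Build a request URL for a single GSV image.

    A slight downward pitch (assumed here) helps capture sidewalks and
    curb ramps rather than the skyline.
    """
    params = {
        "location": f"{lat},{lng}",
        "heading": heading,   # compass direction of the camera, 0-360
        "pitch": pitch,       # up/down camera angle; negative looks toward the street
        "size": size,         # image dimensions in pixels
        "key": key,
    }
    return f"{STREETVIEW_ENDPOINT}?{urlencode(params)}"

url = streetview_url(38.9897, -76.9378, heading=180)
```

Fetching the URL with any HTTP client then yields one labeled-image candidate per (location, heading) pair, which is what makes sweeping whole neighborhoods tractable.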
54. HIGH-LEVEL RESEARCH QUESTIONS
1. Can we use Google Street View (GSV) to find street-level accessibility problems?
2. Can we create interactive systems to allow minimally trained crowdworkers to quickly and accurately perform remote audit tasks?
3. Can we use computer vision and machine learning to scale our approach?
55. TOWARDS SCALABLE ACCESSIBILITY DATA COLLECTION
ASSETS’12 Poster: Feasibility study + labeling interface evaluation
HCIC’13 Workshop: Exploring early solutions to computer vision (CV)
HCOMP’13 Poster: 1st investigation of CV + crowdsourced verification
CHI’13: Large-scale turk study + label validation with wheelchair users
ASSETS’13: Applied to new domain: bus stop accessibility for visually impaired
UIST’14: Crowdsourcing + CV + “smart” work allocation
56. TODAY’S TALK (timeline slide repeated)
57. TODAY’S TALK (timeline slide repeated)
58. ASSETS’12 GOALS:
1. Investigate the viability of reappropriating online map imagery to determine sidewalk accessibility via crowd workers
2. Examine the effect of three different interactive labeling interfaces on task accuracy and duration
60. WEB-BASED LABELING INTERFACE
FOUR STEP PROCESS
1. Find and mark accessibility problem
2. Select problem category
3. Rate problem severity
4. Submit completed image
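The four-step flow implies that each submitted label carries a marked position, a problem category, and a severity rating. A minimal sketch of such a record (the class, field names, and severity scale are illustrative, not from the talk):

```python
from dataclasses import dataclass

# Problem categories taken from the deck's dataset breakdown.
CATEGORIES = {"No Curb Ramp", "Surface Problem", "Object in Path", "Sidewalk Ending"}

@dataclass
class AccessibilityLabel:
    """One crowdworker label on a GSV image (steps 1-3 of the interface)."""
    image_id: str
    x: int            # pixel location of the marked problem
    y: int
    category: str     # one of CATEGORIES
    severity: int     # assumed scale, e.g., 1 (minor) to 5 (not passable)

    def __post_init__(self):
        if self.category not in CATEGORIES:
            raise ValueError(f"unknown category: {self.category}")
        if not 1 <= self.severity <= 5:
            raise ValueError("severity must be in 1..5")

label = AccessibilityLabel("gsv_0001", x=312, y=480,
                           category="Surface Problem", severity=4)
```

Validating the category and severity at construction time keeps the crowdsourced dataset clean before any downstream aggregation.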
61. WEB-BASED LABELING INTERFACE VIDEO
Video shown to crowd workers before they labeled their first image
http://youtu.be/aD1bx_SikGo
65. DATASET BREAKDOWN
Manually curated 100 images from urban neighborhoods in LA, Baltimore, Washington DC, and NYC. Label counts:
No Curb Ramp: 34
Surface Problem: 29
Object in Path: 27
Sidewalk Ending: 11
No Sidewalk Accessibility Issues: 19
66. DATASET BREAKDOWN
34
29
27
11
19
0
10
20
30
40
No Curb Ramp
Surface Problem
Object in Path
Sidewalk Ending
No Sidewalk Accessibility Issues
Manually curated 100 images from urban neighborhoods in LA, Baltimore, Washington DC, and NYC
Used to evaluate false positive labeling activity
67. DATASET BREAKDOWN
34
29
27
11
19
0
10
20
30
40
No Curb Ramp
Surface Problem
Object in Path
Sidewalk Ending
No Sidewalk Accessibility Issues
Manually curated 100 images from urban neighborhoods in LA, Baltimore, Washington DC, and NYC
Used to evaluate false positive labeling activity
This breakdown based on majority vote data from 3 independent researcher labels
77. Object in Path
Curb Ramp Missing
R1
R2
R3
Researcher Label Table
Image Level Analysis
This table tells us what accessibility problems exist in the image
78. Pixel Level Analysis
Labeled pixels tell us where the accessibility problems exist in the image.
79. Why do we care about image level vs. pixel level?
82. Localization Spectrum (precise → coarse):
Point Location Level (Pixel Level) → Sub-block Level → Block Level (Image Level)
Pixel level labels could be used for training machine learning algorithms for detection and recognition tasks
83. TWO ACCESSIBILITY PROBLEM SPECTRUMS
Different ways of thinking about accessibility problem labels in GSV
Localization Spectrum (precise → coarse): Point Location Level (Pixel Level) → Sub-block Level → Block Level (Image Level)
Class Spectrum (precise → coarse): Multiclass (Object in Path, Curb Ramp Missing, Prematurely Ending Sidewalk, Surface Problem) → Binary (Problem / No Problem)
84. Object in Path
Curb Ramp Missing
R1
R2
R3
Researcher Label Table
Problem
Multiclass label
Binary Label
Sidewalk Ending
Surface Problem
Other
85. To produce a single ground truth dataset, we used majority vote.
86. R1
R2
R3
Maj. Vote
Researcher Label Table
Object in Path
Curb Ramp Missing
Sidewalk Ending
Surface Problem
Other
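The majority-vote step described above can be sketched in a few lines of Python (a minimal illustration; tie-breaking in the real analysis may differ):

```python
from collections import Counter

def majority_vote(labels):
    """Return the label chosen by the most raters (R1, R2, R3, ...).

    Ties are broken arbitrarily by Counter's ordering.
    """
    return Counter(labels).most_common(1)[0][0]

# Hypothetical researcher labels for one image region:
print(majority_vote(["Object in Path", "Object in Path", "Curb Ramp Missing"]))
# → Object in Path
```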
89. ASSETS’12 MTURK STUDY METHOD
Independently posted 3 labeling interfaces to MTurk. Crowdworkers could work with only one interface.
For training, turkers were required to watch the first 1.5 mins of a 3-min instructional video.
Hired ~7 workers per image to explore avg accuracy
Turkers paid ~3-5 cents per HIT. We varied number of images/HIT from 1-10.
90. ASSETS’12 MTURK DESCRIPTIVE RESULTS
Hired 132 unique workers
Worked on 2,325 assignments
Provided a total of 4,309 labels (AVG=1.9/image)
91. MAIN FINDINGS: IMAGE-LEVEL ANALYSIS
AVERAGE ACCURACY (higher is better): Point-and-click 83.0%, Outline 82.6%, Rectangle 79.2%
MEDIAN TASK TIME (lower is better): Point-and-click 32.9 s, Outline 41.5 s, Rectangle 43.3 s
All three interfaces performed similarly. This is without quality control.
Point-and-click is the fastest; 26% faster than Outline & 32% faster than Rectangle
94. ASSETS’12 CONTRIBUTIONS:
1.Demonstrated that minimally trained crowd workers could locate and categorize sidewalk accessibility problems in GSV images with > 80% accuracy
2.Showed that point-and-click was the fastest labeling interface and that outline was faster than rectangle
95. ASSETS’12 Poster
Feasibility study + labeling interface evaluation
HCIC’13 Workshop
Exploring early solutions to computer vision (CV)
HCOMP’13 Poster
1st investigation of CV + crowdsourced verification
CHI’13
Large-scale turk study + label validation with wheelchair users
ASSETS’13
Applied to new domain: bus stop accessibility for visually impaired
UIST’14
Crowdsourcing + CV + “smart” work allocation
TODAY’S TALK
97. CHI’13 GOALS:
1.Expand ASSETS’12 study with larger sample.
•Examine accuracy as function of turkers/image
•Evaluate quality control mechanisms
•Gain qualitative understanding of failures/successes
2.Validate researcher ground truth with labels from three wheelchair users
102. IN-LAB STUDY METHOD
Three wheelchair participants
Independently labeled 75 of 229 GSV images
Used think-aloud protocol. Sessions were video recorded
30-min post-study interview
We used Fleiss’ kappa to measure agreement between wheelchair users and researchers
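Fleiss' kappa generalizes agreement measures to more than two raters. A generic implementation (not the study's actual code) works from an items × categories count matrix:

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for a list of rows, one per rated item.

    ratings[i][j] = number of raters who put item i into category j;
    every item must receive the same total number of ratings.
    """
    N = len(ratings)            # number of items
    n = sum(ratings[0])         # raters per item
    k = len(ratings[0])         # number of categories

    # Overall proportion of assignments to each category
    p = [sum(row[j] for row in ratings) / (N * n) for j in range(k)]

    # Observed agreement, averaged over items
    P_bar = sum((sum(c * c for c in row) - n) / (n * (n - 1))
                for row in ratings) / N

    # Expected agreement by chance
    P_e = sum(pj * pj for pj in p)
    return (P_bar - P_e) / (1 - P_e)

# Perfect agreement between 3 raters on 2 items:
print(fleiss_kappa([[3, 0], [0, 3]]))  # → 1.0
```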
103. Here is an example recording from the study session
105. IN-LAB STUDY RESULTS
Strong agreement (multiclass κ = 0.74) between wheelchair participants and researcher labels (ground truth)
In interviews, one participant mentioned using GSV to explore areas prior to travel
106. CHI’13 GOALS:
1.Expand ASSETS’12 study with larger sample.
•Examine accuracy as function of turkers/image
•Evaluate quality control mechanisms
•Gain qualitative understanding of failures/successes
2.Validate researcher ground truth with labels from three wheelchair users
108. CHI’13 MTURK STUDY METHOD
Similar to ASSETS’12 but more images (229 vs. 100) and more turkers (185 vs. 132)
Added crowd verification quality control
Recruited 28+ turkers per image to investigate accuracy as function of workers
109. Labeling Interface
(MTurk HIT screenshot: “University of Maryland: Help make our sidewalks more accessible for wheelchair users with Google Maps,” posted by Kotaro Hara; timer shows 00:07:00 of 3 hours)
110. Verification Interface
(Screenshot of the corresponding verification HIT)
113. CHI’13 MTURK LABELING STATS
Hired 185 unique workers
Worked on 7,517 labeling tasks (AVG=40.6/turker)
Provided a total of 13,379 labels (AVG=1.8/image)
CHI’13 MTURK VERIFICATION STATS
Hired 273 unique workers
Provided a total of 19,189 verifications
Median image labeling time vs. verification time: 35.2s vs. 10.5s
114. CHI’13 MTURK KEY FINDINGS
81% accuracy without quality control
93% accuracy with quality control
127. TURKER LABELING ISSUES
Overlabeling: some turkers were prone to high false positives
Example: overlabeled No Curb Ramp
128. Incorrect Object in Path label. Stop sign is in grass.
129. Overlabeled Surface Problems
130. Tree not actually an obstacle
131. No problems in this image, yet labels were applied
133. T1
T2
T3
Maj. Vote
3 Turker Majority Vote Label
Object in Path
Curb Ramp Missing
Sidewalk Ending
Surface Problem
Other
T3 provides a label of low quality
134. To look into the effect of turker majority vote on accuracy, we had 28 turkers label each image
135. We had 28 turkers label each image and formed majority-vote groups:
28 groups of 1
9 groups of 3
5 groups of 5
4 groups of 7
3 groups of 9
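The grouping scheme above can be sketched as follows (a minimal version that forms non-overlapping groups and drops leftovers; the group counts match the slide: 28÷3 gives 9 groups, 28÷5 gives 5, 28÷7 gives 4, 28÷9 gives 3):

```python
import random
from collections import Counter

def group_majority_votes(labels, k, rng=random):
    """Shuffle the worker labels, split them into non-overlapping
    groups of size k, and return one majority-vote label per full group.
    """
    shuffled = labels[:]
    rng.shuffle(shuffled)
    return [Counter(shuffled[i:i + k]).most_common(1)[0][0]
            for i in range(0, len(shuffled) - k + 1, k)]

# 28 hypothetical worker labels for one image:
workers = ["Problem"] * 20 + ["No Problem"] * 8
print(len(group_majority_votes(workers, 3)))  # → 9
```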
149. CHI’13 CONTRIBUTIONS:
1.Extended and reaffirmed findings from ASSETS’12 about viability of GSV and crowd work for locating and categorizing accessibility problems
2.Validated our ground truth labeling approach
3.Assessed simple quality control approaches
150. ASSETS’12 Poster
Feasibility study + labeling interface evaluation
HCIC’13 Workshop
Exploring early solutions to computer vision (CV)
HCOMP’13 Poster
1st investigation of CV + crowdsourced verification
CHI’13
Large-scale turk study + label validation with wheelchair users
ASSETS’13
Applied to new domain: bus stop accessibility for visually impaired
UIST’14
Crowdsourcing + CV + “smart” work allocation
TODAY’S TALK
151. ASSETS’12 Poster Feasibility study + labeling interface evaluation
HCIC’13 Workshop
Exploring early solutions to computer vision (CV)
HCOMP’13 Poster
1st investigation of CV + crowdsourced verification
CHI’13
Large-scale turk study + label validation with wheelchair users
ASSETS’13
Applied to new domain: bus stop accessibility for visually impaired
UIST’14
Crowdsourcing + CV + “smart” work allocation
TODAY’S TALK
152. All of the approaches so far relied purely on manual labor, which limits scalability
157. svCrawl
Web Scraper
Dataset
svDetect
Automatic Curb Ramp Detection
svControl
Automatic Task Allocation
svVerify
Manual Label Verification
svLabel
Manual Labeling
Tohme (遠目, “Remote Eye”)
Design Principles
1.Computer vision is cheap (near-zero marginal cost)
2.Manual verification is far cheaper than manual labeling
3.Automatic curb ramp detection is hard and error prone
4.Fixing a false positive is easy, fixing a false negative is hard (requires manual labeling).
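Principles 2 and 4 can be made concrete with a toy expected-cost model (all numbers here are invented for illustration, loosely in the range reported later in the talk; this is not the authors' model):

```python
def expected_cost_per_scene(p_fn, t_verify, t_label):
    """Expected worker time per scene under a CV-first pipeline.

    With probability p_fn the CV stage misses a curb ramp (a false
    negative), forcing a full manual labeling pass; otherwise a cheap
    verification pass (which can delete false positives) suffices.
    """
    return p_fn * t_label + (1 - p_fn) * t_verify

# Hypothetical times in seconds:
manual = expected_cost_per_scene(p_fn=1.0, t_verify=10, t_label=94)  # always label
mixed = expected_cost_per_scene(p_fn=0.3, t_verify=10, t_label=94)
print(manual, mixed)
```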
159. The “lack of curb cuts is a primary obstacle to the smooth integration of those with disabilities into the commerce of daily life.”
Kinney et al. vs. Yerusalim & Hoskins, 1993
3rd Circuit Court of Appeals
160. “Without curb cuts, people with ambulatory disabilities simply cannot navigate the city”
Kinney et al. vs. Yerusalim & Hoskins, 1993
3rd Circuit Court of Appeals
165. svCrawl
Web Scraper
Dataset
svDetect
Automatic Curb Ramp Detection
svControl
Automatic Task Allocation
svVerify
Manual Label Verification
Tohme (遠目, “Remote Eye”)
svVerify can only fix false positives, not false negatives! That is, there is no way for a worker to add new labels at this stage!
177. Washington D.C.
Baltimore
Los Angeles
Saskatoon
* At the time of downloading data in summer 2013
Scraper & Dataset
Total Area: 11.3 km²
Intersections: 1,086
Curb Ramps: 2,877
Missing Curb Ramps: 647
Avg. GSV Data Age: 2.2 yrs
178. How well does GSV data reflect the current state of the physical world?
181. Washington D.C.
Baltimore
Physical Audit Areas
GSV and Physical World > 97.7% agreement
273 Intersections
Dataset | Validating Dataset
Small disagreement due to construction.
182. Washington D.C.
Baltimore
Physical Audit Areas
273 Intersections
> 97.7% agreement
Dataset
Key Takeaway
Google Street View is a viable source of curb ramp data
186. Deformable Part Models
Felzenszwalb et al. 2008
Automatic Curb Ramp Detection
http://www.cs.berkeley.edu/~rbg/latent/
187. Deformable Part Models
Felzenszwalb et al. 2008
Automatic Curb Ramp Detection
http://www.cs.berkeley.edu/~rbg/latent/
Root filter
Parts filter
Displacement cost
188. Automatic Curb Ramp Detection
Detected Labels Stage 1: Deformable Part Model
Multiple redundant detection boxes
Correct: 1 | False Positive: 12 | Miss: 0
189. Automatic Curb Ramp Detection
Detected Labels Stage 1: Deformable Part Model
Curb ramps shouldn’t be in the sky or on roofs
Correct: 1 | False Positive: 12 | Miss: 0
191. Automatic Curb Ramp Detection
Detected Labels Stage 3: SVM-based Refinement
Filter out labels based on their size, color, and position.
Correct: 1 | False Positive: 5 | Miss: 0
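The spirit of the geometric and refinement stages can be sketched with simple hand-written rules (the thresholds, the detection tuple format, and the rules themselves are invented for illustration; the actual system learns an SVM over size, color, and position features):

```python
def filter_detections(detections, img_h, img_w,
                      min_area_frac=0.0005, horizon_frac=0.4):
    """Prune implausible curb-ramp detections.

    Each detection is (x, y, w, h, score) with (x, y) the top-left corner.
    Curb ramps sit on the ground plane, so boxes whose bottom edge lies
    above an assumed horizon line are dropped, as are implausibly tiny boxes.
    """
    kept = []
    for (x, y, w, h, score) in detections:
        if y + h < horizon_frac * img_h:                # in the sky / on a roof
            continue
        if (w * h) / (img_h * img_w) < min_area_frac:   # too small to be real
            continue
        kept.append((x, y, w, h, score))
    return kept

# One plausible box near the ground, one floating in the sky:
dets = [(100, 700, 80, 40, 0.9), (500, 50, 80, 40, 0.8)]
print(len(filter_detections(dets, img_h=1024, img_w=2048)))  # → 1
```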
196. Automatic Curb Ramp Detection
Some curb ramps never get detected; some false positive detections remain
Correct: 6 | False Positive: 4 | Miss: 1
These false negatives are expensive to correct!
205. Occlusion
Illumination
Scale
Viewpoint Variation
Structures Similar to Curb Ramps
Curb Ramp Design Variation
Automatic Curb Ramp Detection
CURB RAMP DETECTION IS A HARD PROBLEM
206. Can we predict difficult intersections & CV performance?
209. Automatic Task Allocation | Features to Assess Scene Difficulty for CV
Number of connected streets from metadata
Depth information for intersection complexity analysis
Top-down images to assess complexity of an intersection
Number of detections and confidence values
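A minimal sketch of how such features could feed a routing decision (the feature values, weights, and linear scoring rule are assumptions for illustration; the actual svControl allocator is learned from data):

```python
def route_task(features, weights, bias, threshold=0.0):
    """Score a scene and route it: high scores mean the CV output is
    probably trustworthy (cheap svVerify); low scores mean a hard scene
    that needs full manual labeling (svLabel).
    """
    score = sum(w * f for w, f in zip(weights, features)) + bias
    return "svVerify" if score >= threshold else "svLabel"

# Features: [connected streets, depth variance, CV detections, mean confidence]
w = [-0.5, -1.0, 0.2, 2.0]          # made-up weights
easy_scene = [4, 0.1, 4, 0.9]
hard_scene = [6, 1.5, 1, 0.2]
print(route_task(easy_scene, w, bias=0.5),
      route_task(hard_scene, w, bias=0.5))  # → svVerify svLabel
```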
217. Automatic Detection and Manual Verification
Automatic Task Allocation
Can Tohme achieve equivalent or better accuracy at a lower time cost compared to a completely manual approach?
218. STUDY METHOD: CONDITIONS (Evaluation)
Manual labeling without smart task allocation
vs. CV + Verification without smart task allocation
vs. Tohme (遠目, “Remote Eye”)
220. STUDY METHOD: APPROACH (Evaluation)
Recruited workers from MTurk
Used 1,046 GSV images (40 used for gold-standard insertion)
221. RESULTS (Evaluation)
                         Labeling Tasks    Verification Tasks
# of distinct turkers:   242               161
# of HITs completed:     1,270             582
# of tasks completed:    6,350             4,820
# of tasks allocated:    769               277
We used Monte Carlo simulations for evaluation
222. Evaluation | Labeling Accuracy and Time Cost

ACCURACY (higher is better)         Precision   Recall   F-measure
Manual Labeling                     84%         88%      86%
CV + Manual Verification            68%         58%      63%
Tohme (遠目, “Remote Eye”)          83%         86%      84%

COST: Task Completion Time / Scene (lower is better)
Manual Labeling: 94 s | CV + Manual Verification: 42 s | Tohme: 81 s

Error bars are standard deviations.
Tohme matches the accuracy of manual labeling at a 13% reduction in time cost
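The accuracy measures above follow the standard detection-metric definitions, sketched here (the TP/FP/FN counts are invented so that the output roughly mirrors the Manual Labeling row):

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall, and F-measure from detection counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f = 2 * precision * recall / (precision + recall)
    return precision, recall, f

p, r, f = precision_recall_f1(tp=84, fp=16, fn=12)
print(round(p, 2), round(r, 2), round(f, 2))  # → 0.84 0.88 0.86
```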
226. svControl
Automatic Task Allocation
svVerify
Manual Label Verification
svLabel
Manual Labeling
Evaluation | Smart Task Allocator
~80% of svVerify tasks were correctly routed
~50% of svLabel tasks were correctly routed
227. svControl
Automatic Task Allocation
svVerify
Manual Label Verification
svLabel
Manual Labeling
Evaluation | Smart Task Allocator
If svControl worked perfectly, Tohme’s cost would drop to 28% of a manual labeling approach alone.
244. UIST’14 CONTRIBUTIONS:
1.First CV system for automatically detecting curb ramps in images
2.Showed that automated methods could be used to improve labeling efficiency for curb ramps
3.Validated GSV as a viable curb ramp dataset
245. TOWARDS SCALABLE ACCESSIBILITY DATA COLLECTION
ASSETS’12 Poster
Feasibility study + labeling interface evaluation
HCIC’13 Workshop
Exploring early solutions to computer vision (CV)
HCOMP’13 Poster
1st investigation of CV + crowdsourced verification
CHI’13
Large-scale turk study + label validation with wheelchair users
ASSETS’13
Applied to new domain: bus stop accessibility for visually impaired
UIST’14
Crowdsourcing + CV + “smart” work allocation
The Future
248. BACK OF THE ENVELOPE CALCULATIONS
8,209 intersections in DC
Manually labeling GSV with our custom interfaces would take 214 hours
With Tohme, this drops to 184 hours
We think we can do better
Unclear how long a physical audit would take
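The slide's numbers follow directly from the per-scene completion times reported in the evaluation (94 s for manual labeling, 81 s for Tohme):

```python
# 8,209 intersections in DC; per-scene times from the evaluation slides.
intersections = 8209
manual_hours = intersections * 94 / 3600   # ≈ 214 hours
tohme_hours = intersections * 81 / 3600    # ≈ 184.7 hours (slide reports 184)
print(round(manual_hours, 1), round(tohme_hours, 1))  # → 214.3 184.7
```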
249. FUTURE WORK: COMPUTER VISION
Context integration & scene understanding
3D-data integration
Improve training & sample size
Mensuration
254. FUTURE WORK: ADDITIONAL SURVEYING TECHNIQUES
Transmits real-time imagery of physical space along with measurements
255. THE CROWD-POWERED STREETVIEW ACCESSIBILITY TEAM!
Kotaro Hara
Jin Sun
Victoria Le
Robert Moore
Sean Pannella
Jonah Chazan
David Jacobs
Jon Froehlich
Zachary Lawrence
Graduate Student
Undergraduate
High School
Professor
256. PHOTO CREDITS
Flickr User: Pedro Rocha
https://www.flickr.com/photos/pedrorocha/3627562740/
Flickr User: Brooke Hoyer
https://www.flickr.com/photos/brookehoyer/14816521847/
Flickr User: Jen Rossey
https://www.flickr.com/photos/jenrossey/3185264564/
Flickr User: Steven Vance
https://www.flickr.com/photos/jamesbondsv/8642938765
Flickr User: Jorge Gonzalez
https://www.flickr.com/photos/macabrephotographer/6225178809/
Flickr User: Mike Fraser
https://www.flickr.com/photos/67588280@N00/10800029263//
Flickr User: Susan Sermoneta
https://www.flickr.com/photos/en321/344387583/
257. This work is supported by:
Faculty Research Award
Human Computer
Interaction
Laboratory
makeability lab
258. Human Computer
Interaction
Laboratory
makeability lab
CHARACTERIZING PHYSICAL WORLD ACCESSIBILITY AT SCALE
USING CROWDSOURCING, COMPUTER VISION, & MACHINE LEARNING