2016 D-STOP Symposium ("Smart Cities") session by CTR's Jen Duthie. Get symposium details: http://ctr.utexas.edu/research/d-stop/education/annual-symposium/
BDE SC6 hang-out - technology part - SWC - Martin (BigData_Europe)
The document discusses a pilot project within the Big Data Europe initiative that aims to integrate citizen budget data from multiple municipalities. The pilot will develop a platform to aggregate budget and spending data from different sources and formats to allow for analysis and visualization. Technical components like Apache Flume, Kafka, Spark and HDFS will be used to ingest, store and analyze the data. A semantic layer will consolidate the data and link it. The pilot aims to evaluate the platform with municipalities and receive feedback on analyzing a growing amount of integrated budget data over time.
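As a rough illustration of the ingestion step this abstract describes, a minimal PySpark Structured Streaming job might read budget records from Kafka and land them in HDFS. This is a sketch only: the topic name, broker address, schema, and paths are hypothetical, and the Kafka source assumes the spark-sql-kafka package is available.

```python
# Minimal sketch of a Kafka -> Spark -> HDFS ingestion step.
# Topic, broker, schema, and paths are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("budget-ingest").getOrCreate()

# Hypothetical schema for one budget line item.
schema = StructType([
    StructField("municipality", StringType()),
    StructField("category", StringType()),
    StructField("amount", DoubleType()),
])

# Read raw JSON records from Kafka and parse them against the schema.
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "localhost:9092")
       .option("subscribe", "municipal-budgets")  # hypothetical topic
       .load())

records = (raw
           .select(from_json(col("value").cast("string"), schema).alias("r"))
           .select("r.*"))

# Persist to HDFS as Parquet for later analysis and visualization.
query = (records.writeStream
         .format("parquet")
         .option("path", "hdfs:///data/budgets")
         .option("checkpointLocation", "hdfs:///checkpoints/budgets")
         .start())
```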
The document summarizes two open data projects - ADEQUATe and CommuniData. ADEQUATe aims to improve the quality of open data through quality assessment, monitoring, improvement algorithms, and data linkage. It has developed a data monitoring portal and tools for quality assessment, improvements, and semantic search. CommuniData aims to make open data more accessible to non-experts and strengthen e-participation at a local level through open data. It has created search tools and chatbots to find relevant local data and allows simple publishing of data to support discussions on a participatory platform for a Vienna neighborhood. Both projects were funded by the Austrian government and involve multiple academic partners.
Big Data Europe SC6 WS #3: PILOT SC6: CITIZEN BUDGET ON MUNICIPAL LEVEL, Mart... (BigData_Europe)
This document describes a pilot project that aims to create an online dashboard of municipal budget and spending data from multiple European cities. The project will harvest, link, analyze and visualize open budget data to make it more useful for citizens, researchers and decision makers. Data will be integrated from Athens, Thessaloniki and Kalamaria in Greece initially, and potentially Vienna, Linz and Barcelona. The technical architecture uses Apache tools like Flume, Kafka and Spark to ingest, store and analyze the heterogeneous data sources in real-time. The goal is to evaluate how a big data approach can provide new insights into public finances from an integrated, multilingual and longitudinal perspective.
The document proposes an open government data system for Jordan with the following key points:
- It would make more government data available to the public in open formats like CSV and JSON to enable academic and commercial uses.
- Data on the system would include both raw datasets and summarized data and insights from government agencies. Formats would need to follow open standards.
- Each dataset would include the raw data files, metadata files describing the data, and checksum files to ensure correctness (a minimal verification sketch follows this list). Metadata would also provide descriptions, collection methods, and potential uses.
- The system would have a centralized agency to manage it, government agencies to upload data, and public users to access and analyze the data through a web interface or API.
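As one concrete illustration of the checksum idea above, a small Python routine could verify a downloaded dataset against its published digest; the file names are hypothetical.

```python
# Minimal sketch of per-dataset integrity checking via checksum files.
# File names are hypothetical placeholders.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_dataset(data_file: Path, checksum_file: Path) -> bool:
    """Compare the recorded digest against the actual file contents."""
    expected = checksum_file.read_text().split()[0].strip()
    return sha256_of(data_file) == expected

# Example: verify_dataset(Path("population.csv"), Path("population.csv.sha256"))
```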
This document describes YAMCAT, an open source catalog for managing spatial metadata and facilitating collaboration on projects. YAMCAT allows users to search for and share geospatial metadata, download geospatial data, and preview data layers using WMS. It includes forms for adding and editing metadata according to ISO 19139 standards and can export metadata to the GeoNetwork catalog. YAMCAT requires a web server with PHP and MySQL, and optionally MapServer, and aims to support collaboration and data sharing for scientific projects.
SC7 Hangout 3: The BDE Secure Societies Pilot (BigData_Europe)
This document summarizes a pilot project using big data to support secure societies. The pilot aims to integrate satellite imagery data from Sentinel-1 with social media and news data using an open-source platform. It includes two workflows: a change detection workflow that analyzes Sentinel-1 imagery to detect changes over time and an event detection workflow that monitors news and social media to detect events. The pilot demonstrates cross-validating events detected in social media with changes detected in satellite imagery. Plans for the second phase include optimizing the workflows to improve scalability and adding security mechanisms to the platform.
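For intuition about the change detection workflow, a common SAR technique is to threshold the log-ratio of two co-registered backscatter images. The toy sketch below uses random arrays and an illustrative threshold, not the pilot's actual pipeline.

```python
# Toy pixel-level change detection between two co-registered
# backscatter images; arrays and threshold are illustrative only.
import numpy as np

def detect_changes(before: np.ndarray, after: np.ndarray,
                   threshold: float = 3.0) -> np.ndarray:
    """Return a boolean mask of pixels whose log-ratio exceeds the threshold."""
    eps = 1e-6  # avoid division by zero on empty pixels
    log_ratio = np.abs(np.log10((after + eps) / (before + eps)))
    return log_ratio > np.log10(threshold)

before = np.random.rand(512, 512)  # stand-ins for calibrated backscatter
after = np.random.rand(512, 512)
changed = detect_changes(before, after)
print(f"{changed.mean():.1%} of pixels flagged as changed")
```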
FSC innovation tools for strengthening integrity and risk-adjusted certification (FSC Ukraine)
This document discusses FSC innovation tools for strengthening certification integrity and risk management, including opportunities for cooperation. It outlines the FSC GIS portal for mapping certified forests, the open knowledge repository, and structured data templates for country risk profiles covering areas like online reputation, stakeholder insights, and spatial analysis. It proposes active support for the forest map by various stakeholders, systematic independent investigation data sharing between stakeholders using templates, integrating GIS tools and research studies, and using communication platforms to enable cooperation. It asks if readers are ready to support these proposals.
European Open Data Portal and Policy Compass: from national Open Data reposit... (OW2)
In November 2015 the European Commission officially launched the European Data Portal (http://www.europeandataportal.eu). The portal's mission is to become the catalogue of all European public data, provided in all official languages of the European Union. The portal harvests metadata from heterogeneous open data portals of the 28 EU member states and 11 other European countries. It lists over 580 000 datasets, making it the biggest open data portal worldwide. From the technical perspective, it is the first official open data portal to implement the new DCAT Application Profile specification.
The portal is the place to find European public data, and it is a basis for other innovative services. One of them is Policy Compass (https://policycompass.eu), which brings together open public data, social media, e-participation platforms, causal models, and argumentation technology for constructing, sharing, visualizing, and debating progress metrics and the impacts of policies.
Both portals are Open Source. They provide rich APIs and may become a data source for other applications.
This document outlines plans for the Clariah Structured Data Hub project. Clariah aims to provide humanities scholars access to large digital resources and tools to enable ground-breaking research. The Structured Data Hub will curate and link structured datasets on various levels from micro to macro. It will also create tools to facilitate the research process, such as data evaluation, linking, analysis, and visualization. The project will involve a design phase with two pilot studies, followed by preparation, execution, and close phases to develop a research infrastructure with linked data and tools.
This document discusses a pilot project to create an online dashboard of municipal budget and economic data from Athens, Thessaloniki, and Kalamaria, Greece. The goals are to make budget information more useful for citizens, researchers, and decision makers by harvesting, normalizing, linking, analyzing, and visualizing heterogeneous budget execution data. A common semantic model called LinkedEconomy will be used to integrate the data in a structured, reusable format. The pilot will produce weekly budget spending reports and financial ratio comparisons of revenues per capita for the three cities using over 7MB of new open budget and census data.
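The revenue-per-capita comparison mentioned here reduces to a simple ratio per city; a sketch with invented figures follows (real values would come from the harvested budget and census data).

```python
# Illustrative "revenue per capita" ratio computation.
# All figures below are made up; real values come from the open data.
import pandas as pd

df = pd.DataFrame({
    "city": ["Athens", "Thessaloniki", "Kalamaria"],
    "revenue_eur": [1_200_000_000, 310_000_000, 45_000_000],  # hypothetical
    "population": [660_000, 325_000, 91_000],                 # hypothetical
})
df["revenue_per_capita"] = df["revenue_eur"] / df["population"]
print(df.sort_values("revenue_per_capita", ascending=False))
```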
During the tranSMART Annual Meeting 2015, Kees van Bochove, chair of the tranSMART Foundation Architecture Working Group, presented on current tranSMART development highlights, which illustrate how the tranSMART core database layer and APIs enable a range of varying translational research applications.
138b-Daraio Sapientia and ontology-based data management as key enabling tec... (innovationoecd)
This document discusses an ontology-based data management (OBDM) approach for integrating data for research and innovation policy analysis. The main advantages of the OBDM approach over traditional "silos" approaches are openness, interoperability, and improved data quality. The OBDM approach uses an ontology to provide a common conceptual framework that can be mapped to different data sources to allow integrated access and analysis of the data without merging the sources.
BDE SC6-ws-05/12/2016 technology part - SWC (BigData_Europe)
The document discusses a pilot project within the Big Data Europe program to create an online dashboard of economic data from municipal budgets. The project aims to aggregate budget and spending data from multiple sources and formats, normalize it using RDF, and analyze and visualize the data to provide insights for citizens, researchers, and decision makers. Technical components used include Apache Flume, Kafka, Spark, HDFS, Virtuoso triplestore, and D3 for visualization. An initial version has been implemented and will be evaluated with municipalities and other stakeholders.
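Once the data is normalized as RDF in a triplestore like Virtuoso, aggregations such as spending per category become a SPARQL query. A minimal sketch with SPARQLWrapper; the endpoint URL, prefix, and property names are hypothetical placeholders, not the pilot's actual vocabulary.

```python
# Sketch of querying a Virtuoso triplestore for aggregated spending.
# Endpoint and vocabulary terms are hypothetical placeholders.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://localhost:8890/sparql")  # default Virtuoso endpoint
sparql.setQuery("""
    PREFIX ex: <http://example.org/budget#>
    SELECT ?category (SUM(?amount) AS ?total)
    WHERE {
        ?item ex:category ?category ;
              ex:amount ?amount .
    }
    GROUP BY ?category
    ORDER BY DESC(?total)
""")
sparql.setReturnFormat(JSON)

# Print each spending category with its aggregated total.
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["category"]["value"], row["total"]["value"])
```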
Introduction to the OpenDataCommunities service, which includes around 170 DCLG datasets. There is a mixture of statistics on housing, planning and Local Government finance; detailed data on services provided by individual councils; and information on registered providers of social housing. Presented by Linda O'Halloran, Head of Products for the Local Digital Programme, at the Flood Resilience Discovery Day in Bristol on 27 February 2015.
The document provides an overview of the SC1 Health Workshop technical platform. The platform goals are low cost of ownership, ease of use with big data, flexibility for different use cases, embracing emerging big data technologies, and simple integration. The platform architecture uses Docker containers and Compose files to define the pipeline topology. Components are developed as Docker images and the platform can be installed manually or using Docker Machine on various environments.
This document describes QB'er, a tool for converting statistical datasets into linked open data on the semantic web. It aims to address problems with today's workflow for working with multiple disconnected datasets, including a lack of comparability and repeating cleaning efforts. QB'er allows researchers to standardize individual datasets according to community best practices, share code lists with colleagues, and publish standardized, interlinked datasets on a structured data hub. This grows a graph of interconnected datasets and makes the cleaning and mapping efforts reusable rather than disposable. A demonstration shows uploading a historical census dataset and mapping its variable values to codes while preserving the original values.
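Standardized statistical data of this kind is typically modeled with the W3C RDF Data Cube vocabulary. The sketch below builds one illustrative qb:Observation with rdflib; the dataset URI, dimension, and values are invented for demonstration, not QB'er's actual output.

```python
# Sketch of an RDF Data Cube observation like those a tool such as
# QB'er might emit. Dataset URI, dimension, and values are invented.
from rdflib import Graph, Namespace, Literal, URIRef
from rdflib.namespace import RDF, XSD

QB = Namespace("http://purl.org/linked-data/cube#")
EX = Namespace("http://example.org/census#")

g = Graph()
obs = URIRef("http://example.org/census/obs/1")
g.add((obs, RDF.type, QB.Observation))
g.add((obs, QB.dataSet, EX.census1889))      # hypothetical dataset
g.add((obs, EX.municipality, EX.Amsterdam))  # dimension value
g.add((obs, EX.population, Literal(408061, datatype=XSD.integer)))  # made-up measure
print(g.serialize(format="turtle"))
```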
The document discusses using fuzzy cognitive maps (FCMs) as decision support tools for smart cities, specifically for smart mobility applications. It aims to simulate urban mobility decision-making processes based on an ongoing research project involving several pilot cities. Key aspects discussed include identifying smart city concepts, exploiting social media and open data to inform policy scenarios, and creating theory-driven and data-driven decision support tools like FCMs. The research outputs will evaluate the potential and barriers of using social media, open data, and FCMs to support evidence-based decision making in smart cities.
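The core of an FCM is a weighted concept graph iterated to a fixed point: each concept's activation is updated from the weighted influences of the other concepts through a squashing function. A toy sketch with invented concepts and weights:

```python
# Toy fuzzy cognitive map update for an urban-mobility scenario.
# Concepts and the weight matrix are invented for demonstration.
import numpy as np

def sigmoid(x, lam=1.0):
    return 1.0 / (1.0 + np.exp(-lam * x))

concepts = ["bike lanes", "car traffic", "air quality"]
# W[i, j]: causal influence of concept i on concept j.
W = np.array([
    [0.0, -0.6,  0.4],
    [0.0,  0.0, -0.7],
    [0.0,  0.0,  0.0],
])

state = np.array([0.8, 0.5, 0.5])  # initial activation of each concept
for _ in range(20):  # iterate until the map settles
    state = sigmoid(state @ W + state)
print(dict(zip(concepts, state.round(3))))
```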
Controlled Vocabularies and Text Mining - Use Cases at the Goettingen State a... (Ralf Stockmann)
The document discusses several use cases for text mining and controlled vocabularies at the Goettingen State and University Library. It describes a project called eAqua that compares semantic graphs between journal article headings and full texts. It also discusses a project called Europeana 4D that visualizes data from multiple sources on an interactive timeline and map to show connections and relationships. Guidelines are provided for how to build datasets in KML format and contribute them to the Europeana 4D prototype visualization tool.
Team 5: Open Land Use Metadata Harvesting on NextGEOSS (plan4all)
This document discusses the creation of metadata for open land use datasets. It aims to:
1) Create metadata that complies with the ISO/TS 19139 standard and insert it into the MICKA metadata catalogue.
2) Generate a separate webpage for each administrative unit in the EU from the country level down to the local municipality level.
The process involves (a sketch of step 2 follows this list):
1) Processing input data on EU administrative units and land use.
2) Using a template to automatically generate XML metadata records.
3) Aggregating attribute data for higher-level administrative units.
4) Importing records into the NextGeoss CKAN catalogue.
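A minimal sketch of step 2, filling a metadata template per administrative unit; the element names below are simplified stand-ins for a real ISO/TS 19139 record, which would use the gmd:* element hierarchy.

```python
# Simplified XML metadata generation per administrative unit.
# Element names are stand-ins for real ISO/TS 19139 (gmd:*) structure.
import xml.etree.ElementTree as ET

def make_metadata_record(unit_name: str, unit_code: str, land_use_url: str) -> str:
    root = ET.Element("MD_Metadata")
    ET.SubElement(root, "title").text = f"Open Land Use - {unit_name}"
    ET.SubElement(root, "identifier").text = unit_code
    ET.SubElement(root, "onlineResource").text = land_use_url
    return ET.tostring(root, encoding="unicode")

# Hypothetical unit and URL for illustration.
print(make_metadata_record("Plzen", "CZ0323", "https://example.org/olu/CZ0323"))
```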
The goals are full EU coverage, standards compliance, and
This document discusses how Frankston City linked metadata to mapping layers to provide metadata search capabilities. It describes linking metadata from a main table to individual mapping layers through mapfiles. The metadata is then presented to users through a search interface, allowing them to discover information about each mapping layer such as the custodian, editor, and constraints.
Profileboost Industry Ecosystem External (Alan Edgett)
The document outlines the various companies and organizations involved in data sharing, network optimization, and the digital advertising ecosystem. It lists advertising technology firms, data exchanges, analytics companies, algorithm and demand side platforms, publisher ad networks, and other groups that compile, share, and optimize user data and digital advertising.
SC1 Workshop 2: General Introduction to BDE (BigData_Europe)
The Big Data Europe project is developing a flexible platform called the Big Data Integrator to support big data applications across multiple societal domains. The platform incorporates existing big data technologies in a modular Docker-based architecture. The project is testing the platform through 7 pilot use cases in domains like health, agriculture, energy, transport, climate, social sciences, and security. The pilots demonstrate how the platform can integrate and analyze diverse data sources to address challenges in each domain. The project is engaging stakeholders through workshops and other activities to gather requirements and showcase results.
Advanced Topics in OpenAPI: Added Value Services and Protection in the OpenTr... (Manuel Coppotelli)
The objectives of this work were to study a series of advanced aspects that an organization can consider when exposing data through an OpenService.
I studied the problems related to implementing Added Value Services using the information exposed through an OpenAPI, in particular a complex route planner that combines both timetables and real-time data on public transport.
The exposed information can also be used by a Byzantine user to infer whether a service provider is respecting the terms of its SLA.
Naturally, an organization does not want to expose data that would allow this kind of inference; this raises the problem of finding the right tradeoff between providing some degree of protection and maintaining the openness of the data.
The solutions studied in this work have been applied to the real case of OpenTrasporti (a project by the Italian Ministry for Transportation and Infrastructures).
Open Data Portals: 9 Solutions and How they Compare (Safe Software)
Get a comparison of CKAN, Socrata, ArcGIS Open Data and other top open data solutions. Plus get answers to best practice questions such as: Which datasets are important to share? What are the approximate costs? Which file formats should the data be shared in? How often should the data get updated? And overall, how can we ensure success with our open data portal?
The document provides an overview of the Dublinked Technology Workshop held on December 15th, 2011. It includes presentations on transportation data, spatial web services, linked data, and semantic data description. Breakout sessions covered topics like data publishing, discovery, web services, and advanced functions. The workshop aimed to address challenges around sharing digital data between organizations and discussed technical requirements and tools to support open government data platforms.
Open trip planner status update May 2011 (bibianamchugh)
This document discusses the benefits of open data and open source software in transportation planning. It summarizes the launch and adoption of Google Transit in cities worldwide using the General Transit Feed Specification. It also describes how open data from TriMet in Portland led to the development of third-party apps and how the city resolved to open its data. It discusses open source trip planners like Open Trip Planner that were developed for Portland using open data and an open development process.
The Digital Transformation Team in Italy developed a Data & Analytics Framework (DAF) to help public sector organizations make more data-driven decisions. DAF includes a big data platform to centralize, store, and analyze data. It also includes a data team of data scientists and engineers. The goals of DAF are to break down data silos, make more data open and accessible via APIs, and build data products and applications. DAF aims to support public policymaking and monitoring through centralized data infrastructure and analytics capabilities shared across organizations.
[Srijan Wednesday Webinar] Leveraging the OGD Platform and Visualization Engine (Srijan Technologies)
Data is one of the most valuable resources of modern digitized governance, but raw data alone cannot provide decision makers with valuable insights. To turn the numbers into knowledge, we need to separate the noise from the data. We also need to choose the right way to present the data so that it is easily interpreted.
The Open Government Data Platform India makes many valuable open government datasets available from different sectors, and you are free to leverage these for creating visualizations and infographics using the in-house visualization engine.
Watch the webinar and learn how the OGD platform can help you create data-driven products and solutions.
Key Takeaways:
- Understand the importance of visualizations to showcase valuable insights
- Learn how to use open government data in decision making
- Know how to choose your format of visualization according to the data
- Learn how to create and share maps and charts using the Data Visualization Engine of the OGD Platform
2014 ABP Dialogue talk: "Examples of Collaborative Data, and Free and Open So... (Patrick Sunter)
My presentation to the August 2014 ABP RhD "Dialogue" peer-talks series, "Examples of Collaborative Data, and Free and Open Source Software, of interest to Urban Researchers”.
The document discusses big data and how it differs from traditional IT approaches. It defines big data using the four V's - volume, velocity, variety, and variability. Technologies used for big data like Hadoop, MapReduce, and NoSQL databases are outlined. Differences between big data infrastructure and traditional IT infrastructure and BI are explored. Examples of how Orbitz and the DoD use big data are provided. The business value of big data analytics is discussed as enabling new types of analysis and insights not previously possible.
This document discusses open data initiatives and provides guidance on starting an open data program. It defines open data as government data that is freely available in a machine-readable format for public use and reuse. Making data open means it is technically accessible in standard formats and legally open, with an open license allowing commercial and non-commercial use. Open data programs aim for transparency, citizen engagement, and business support. The document lists examples of open data from different cities and recommends choosing an initial dataset, applying an open license, formatting it for its audiences, and creating a central portal for discovery. It also discusses using hackathons to support open data and the need for open data policies, resources, and sustainability plans.
Dotted Eyes - Open Software, Standards and Data (Dotted Eyes)
Dotted Eyes is a UK-based spatial solutions provider with over 20 years of experience. They take a solution-led approach, focusing on open software, standards and data to provide tailored solutions that best meet customers' requirements. Case studies show how open solutions can help keep transport maps up to date for events and provide a cost-effective hosted application for contractor data analysis.
Deployment strategies of Open Data Node focused mainly on pilots (2015-May) (Comsode - FP7 project)
This document discusses enabling open data management in public institutions through a pilot deployment of the COMSODE Open Data Node. It describes the benefits of open data for public administrations, private sectors, and civil society. Public institutions are invited to participate in the pilot program and will receive assistance from COMSODE in categorizing their data, setting up processes to publish select data through the Open Data Node tools, and getting trained on using the platform.
Geospatial Intelligence Middle East 2013 - Big Data (Steven Ramage)
Some initial considerations and discussion points around geospatial big data. Location adds context and relevance. Need to consider a number of V factors including Value.
https://www.youtube.com/watch?v=nvlHJgRE3pU
Won ITAC Graduation Projects Competition, ITAC ID: GP2015.R10.75
A web application that analyzes large volumes of product reviews, social network posts, and tweets related to a given product, then presents the results of this big data analytical job in a user-friendly, understandable, and easily interpreted manner that different customers can use for different purposes.
Technologies used (a minimal Hadoop Streaming example follows this list):
1- Hadoop
2- Hadoop Streaming
3- R Statistical
4- PHP
5- Google Charts API
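Hadoop Streaming runs any executable as a mapper or reducer over stdin/stdout, which is presumably how the analysis above was wired together. A minimal Python mapper/reducer pair in that style; the tab-separated input format and the sentiment word list are invented for illustration.

```python
# mapper.py: emit (product, 1) for each review line containing a positive term.
# Hadoop Streaming feeds lines on stdin and collects tab-separated pairs on stdout.
# Assumes a hypothetical "product\treview text" input format.
import sys

POSITIVE = {"great", "excellent", "love"}  # illustrative word list

for line in sys.stdin:
    product, _, review = line.rstrip("\n").partition("\t")
    if any(word in review.lower() for word in POSITIVE):
        print(f"{product}\t1")
```

```python
# reducer.py: sum counts per product; Hadoop sorts mapper output by key first.
import sys

current, total = None, 0
for line in sys.stdin:
    key, _, value = line.rstrip("\n").partition("\t")
    if key != current:
        if current is not None:
            print(f"{current}\t{total}")
        current, total = key, 0
    total += int(value)
if current is not None:
    print(f"{current}\t{total}")
```

A run would look roughly like `hadoop jar hadoop-streaming.jar -files mapper.py,reducer.py -mapper mapper.py -reducer reducer.py -input reviews -output review_counts`, relying on Hadoop's shuffle-and-sort between the two stages.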
The document outlines an agenda for a presentation on big data. It discusses key topics like the state of big data adoption, a holistic approach to big data, five high value use cases, technical components, and the future of big data and cloud. The presentation aims to provide an overview of big data and how organizations can take a comprehensive approach to leveraging their data assets.
Boost your data analytics with open data and public news content (Ontotext)
Get guidance through the gigantic sea of freely available open data and learn how it can empower your analysis of all kinds of sources.
This webinar is a live demo of news and data analytics, based on rich links within big knowledge graphs. It will show you how to:
- Build ranking reports (e.g. for people and organisations)
- View topics linked implicitly (e.g. daughter companies, key personnel, products …)
- Draw trend lines
- Extend your analytics with additional data sources
Advanced Analytics and Machine Learning with Data Virtualization (Denodo)
Watch full webinar here: https://bit.ly/32c6TnG
Advanced data science techniques, like machine learning, have proven an extremely useful tool for deriving valuable insights from existing data. Platforms like Spark, and rich libraries for R, Python and Scala, put advanced techniques at the fingertips of data scientists. However, these data scientists spend most of their time looking for the right data and massaging it into a usable format. Data virtualization offers a new alternative to address these issues in a more efficient and agile way.
Attend this webinar and learn:
- How data virtualization can accelerate data acquisition and massaging, providing the data scientist with a powerful tool to complement their practice
- How popular tools from the data science ecosystem: Spark, Python, Zeppelin, Jupyter, etc. integrate with Denodo
- How you can use the Denodo Platform with large data volumes in an efficient way
- About the success McCormick has had as a result of seasoning the Machine Learning and Blockchain Landscape with data virtualization
Similar to Data Rodeo: A Data Analytics Environment for the Central Texas Region (20)
Updates provided to the D-STOP Business Advisory Council at the 2017 Symposium and Board Meeting: https://ctr.utexas.edu/2018/04/12/d-stop-2017-symposium-archive/
This document discusses ongoing research projects related to collaborative sensing and heterogeneous networking leveraging vehicular fleets. Specifically, it discusses:
1) How increased cluster density of vehicles improves overall data rates and reduces variability in individual user rates.
2) Modeling what collaborative sensing systems can "see" or be aware of in obstructed environments and how coverage benefits scale with increased penetration of collaborative vehicles.
3) Developing optimal information sharing policies to maximize situational awareness for autonomous nodes in resource-constrained network environments.
Updates provided to the D-STOP Business Advisory Council at the 2017 Symposium and Board Meeting: https://ctr.utexas.edu/2018/04/12/d-stop-2017-symposium-archive/
Online platforms are emerging as a powerful mechanism for matching resources to requests. In the setting of freight, the requests arrive from shippers, who have a diverse collection of goods. The resources are supplied by carriers (trucks) and have various physical constraints (drivers' route preferences, carrying capacity, geographic preferences, etc.). Online platforms are emerging that (a) learn the characteristics of shippers and carriers, and (b) efficiently match goods to trucks based on such learning.
Our project will develop algorithms for such online resource allocation. This is a challenging problem, due to the complexity of the learning tasks. Such algorithms can have considerable impact on efficiently using trucking resources.
Through this project, the research team will leverage the computing resources and expertise at UT to develop a "data discovery environment" (DDE) for transportation data to aid decision-making. Many efforts focus on leveraging transportation data to help travelers make decisions, but less thought has gone into a framework for using big data to help transportation agency staff and decision makers. The team will start by building the DDE for the Central Texas region, in collaboration with the local MPO, the City of Austin, and the local transit agency. Initially, the project will focus on creating more meaning from existing data sources, and as the project progresses, it will grow to include more novel data sources and methods. The data platform will be web-based, and the research includes not only building the tool but also developing appropriate protocols for access and governance.
This document discusses modeling strategies for autonomous and connected vehicles. It proposes modifying traditional four-step transportation models to account for autonomous vehicle adoption rates and different trip types. Autonomous vehicle passenger car equivalents and flow ratios are modeled based on vehicle speed, market penetration, and other factors. The document also describes plans for a 4G deployment test bed to demonstrate connected vehicle technologies on managed lanes in Dallas-Fort Worth and Virginia.
Advanced driver assistance systems (ADAS) are a key technology for improving road safety. But both current and proposed ADAS are limited in important ways. Vision- and lidar-based ADAS performs poorly in heavy rain, snow, or fog. Lack of vehicle situational awareness due to these sensing limitations will unfortunately be the cause of many accidents, including fatalities, for connected and automated vehicles in the years to come. The goal of this research is to develop and test a sensing strategy with robust perception: No blind spots, applicable to all driveable environments, and available in all weather conditions. We believe there are three key requirements for collaborative all-weather sensing:
– Precise vehicle positioning within a common reference frame
– Decimeter-accurate vision and radar mapping
– A means of quantifying the benefits of collaborative sensing
Vehicular radar and communication are the two primary means of using radio frequency (RF) signals in transportation systems. Automotive radars provide high-resolution sensing using proprietary waveforms in millimeter wave (mmWave) bands, and vehicular communications allow vehicles to exchange safety messages or raw sensor data. Both techniques can be used for applications such as forward collision warning, cooperative adaptive cruise control, and pre-crash applications.
Many areas of machine learning and data mining focus on point estimates of key parameters. In transportation, however, the inherent variance, and, critically, the need to understand the limits of that variance and the impact it may have, have long been understood to be important. Indeed, variance and other risk measures that capture the cost of the spread around the mean are critical factors in understanding how people act. Thus they are critical for prediction, as well as for long-term planning, where controlling risk may be as important as controlling the mean (the point estimate).
There has been tremendous progress on large-scale optimization techniques to enable the solution of large-scale machine learning and data analytics problems. Stochastic Gradient Descent and its variants are probably the most-used large-scale optimization technique for learning. This progress has not yet had an impact on the problem of statistical inference: namely, obtaining distributional information that might allow us to control the variance, and hence the risk, of certain solutions.
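To make the gap concrete: SGD returns a single point estimate, and a crude bootstrap over resampled data is one (expensive) way to recover the distributional information the paragraph asks for. A toy sketch on invented one-dimensional regression data:

```python
# Toy illustration: SGD yields a point estimate; a simple bootstrap over
# resampled data exposes its spread. Data and model are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=n)
y = 2.0 * X + rng.normal(scale=0.5, size=n)  # true slope is 2.0

def sgd_slope(X, y, lr=0.01, epochs=5):
    """Fit y ~ w*x by stochastic gradient descent on squared error."""
    w = 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            w -= lr * 2.0 * (w * X[i] - y[i]) * X[i]
    return w

# Re-run SGD on bootstrap resamples to estimate the spread of the solution.
estimates = []
for _ in range(20):
    idx = rng.integers(0, n, size=n)
    estimates.append(sgd_slope(X[idx], y[idx]))
print(f"point estimate ~ {np.mean(estimates):.3f}, spread (std) ~ {np.std(estimates):.3f}")
```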
Investigation and findings on reservation-based intersections and managed lanes
Real-Time Signal Control and Traffic Stability
Congestion on urban arterials is largely centered on intersection control. Traditional traffic signal schemes are limited in their ability to adapt in real time to traffic conditions and in their ability to coordinate with each other to ensure adequate performance. Specifically, there is a tension between adaptivity (as with actuated signals) and coordination through pre-timed signals (signal progression). We propose to investigate whether routing protocols from telecommunications networks can be applied to resolve these problems. Specifically, the backpressure algorithm of Tassiulas & Ephremides (1992) can ensure system stability through decentralized control under relatively weak regularity conditions. It is as yet unknown whether this algorithm can be adapted to traffic signal systems, and if so, what modifications are needed. Traffic systems differ in several significant ways from telecommunication networks: each intersection approach has relatively few queues (lanes) that must be shared among traffic to various destinations; first-in, first-out constraints lead to head-of-line blocking effects; traffic waves move at a much slower speed than data packets; and traffic queues are tightly limited by physical space (finite buffers). Determining whether (and how) the backpressure concept can be adapted to traffic networks requires significant research and has the potential to dramatically improve signal performance. A toy sketch of backpressure phase selection follows.
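In the sketch below, each movement's weight is its upstream queue minus its downstream queue, scaled by the service rate, and the phase with the largest total weight gets the green; the queues, phases, and rates are invented for illustration.

```python
# Toy backpressure phase choice at a single intersection.
# Queue lengths, phase definitions, and saturation rates are invented.
import numpy as np

# queues[m] = vehicles waiting on movement m; downstream[m] = queue it feeds.
queues = {"NS": 12, "SN": 9, "EW": 4, "WE": 7}
downstream = {"NS": 3, "SN": 1, "EW": 2, "WE": 5}
sat_rate = {"NS": 1.0, "SN": 1.0, "EW": 0.9, "WE": 0.9}  # vehicles/sec when green

# Each phase serves a compatible set of movements.
phases = {"north-south": ["NS", "SN"], "east-west": ["EW", "WE"]}

def pressure(movement: str) -> float:
    """Backpressure weight: upstream minus downstream queue, times service rate."""
    return sat_rate[movement] * (queues[movement] - downstream[movement])

# Activate the phase with the largest total (non-negative) pressure.
best = max(phases, key=lambda p: sum(max(pressure(m), 0.0) for m in phases[p]))
print("activate phase:", best)
```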
Improved Models for Managed Lane Operations
Managed lanes (ML) are increasingly being considered as a tool to mitigate congestion on highways with limited room for capacity expansion. Managed lanes are dynamically priced based on the congestion level, and prices can be set either with the objective of maximizing utilization (e.g., a public operator) or maximizing profit (e.g., a private operator). Optimization models for determining these pricing policies make restrictive assumptions about the layout of the corridors (often a single entrance and exit) or about the modeler's knowledge of traveler characteristics (e.g., the distribution of willingness to pay). Developing new models to address these issues would allow for better utilization of these facilities.
Professor Robert W. Heath Jr. is the director of UT SAVES (Situation-Aware Vehicular Engineering Systems), which combines expertise in wireless communications, signal processing, and transportation research. UT SAVES collaborates with automotive companies like Honda R&D Americas on projects involving sensing, communication, and analytics for applications such as automated driving. Membership provides access to UT SAVES research and facilities, including graduate research assistants and experimental capabilities in areas like millimeter wave communication and sensor fusion. Current research projects focus on cooperative sensing, vehicle-to-everything communication, and applying 5G cellular networks to driving assistance technologies.
The Business Advisory Council meeting covered the following topics:
The meeting covered updates on education and workforce development programs at the Engineering Education and Research Center including summer internships and distinguished lectures. Research updates were provided on 30 completed projects and 18 ongoing projects covering topics like connected corridors and autonomous vehicles. New proposed research was presented on topics such as video data analytics, traffic signal optimization, and modeling willingness to share trips in autonomous vehicles.
The document discusses managing mobility during the design-build reconstruction of the Dallas Horseshoe highway interchange project. It describes the project's high traffic volumes and constraints. It highlights the contractor's successes in maintaining access and maximizing work during limited closures. It stresses the importance of collaboration between the agency and contractor in developing traffic control plans and finding solutions to difficult situations.
More from Center for Transportation Research - UT Austin (20)
"Frontline Battles with DDoS: Best practices and Lessons Learned", Igor IvaniukFwdays
In this talk we will discuss DDoS protection tools and best practices, network architectures, and what AWS has to offer. We will also look into one of the largest DDoS attacks on Ukrainian infrastructure, which happened in February 2022. We'll see what techniques helped keep web resources available for Ukrainians and how AWS improved DDoS protection for all customers based on the Ukraine experience.
Dandelion Hashtable: beyond billion requests per second on a commodity server (Antonios Katsarakis)
This slide deck presents DLHT, a concurrent in-memory hashtable. Despite optimization efforts that go as far as sacrificing core functionality, state-of-the-art hashtable designs still incur multiple memory accesses per request and block request processing in three cases. First, most hashtables block while waiting for data to be retrieved from memory. Second, open-addressing designs, which represent the current state of the art, either cannot free index slots on deletes or must block all requests to do so. Third, index resizes block every request until all objects are copied to the new index. Defying folklore wisdom, DLHT forgoes open addressing and adopts a fully featured and memory-aware closed-addressing design based on bounded cache-line chaining. This design (1) offers lock-free index operations and deletes that free slots instantly, (2) completes most requests with a single memory access, (3) utilizes software prefetching to hide memory latencies, and (4) employs a novel non-blocking and parallel resizing. On a commodity server with a memory-resident workload, DLHT surpasses 1.6B requests per second and provides 3.5x (12x) the throughput of the state-of-the-art closed-addressing (open-addressing) resizable hashtable on Gets (Deletes).
Generating privacy-protected synthetic data using Secludy and Milvus (Zilliz)
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
[OReilly Superstream] Occupy the Space: A grassroots guide to engineering (an... (Jason Yip)
The typical problem in product engineering is not bad strategy, so much as “no strategy”. This leads to confusion, lack of motivation, and incoherent action. The next time you look for a strategy and find an empty space, instead of waiting for it to be filled, I will show you how to fill it in yourself. If you’re wrong, it forces a correction. If you’re right, it helps create focus. I’ll share how I’ve approached this in the past, both what works and lessons for what didn’t work so well.
What is an RPA CoE? Session 1 – CoE Vision (DianaGray10)
In the first session, we will review the organization's vision and how it impacts the CoE structure.
Topics covered:
• The role of a steering committee
• How do the organization’s priorities determine CoE Structure?
Speaker:
Chris Bolin, Senior Intelligent Automation Architect, Anika Systems
AppSec PNW: Android and iOS Application Security with MobSF (Ajin Abraham)
Mobile Security Framework - MobSF is a free and open source automated mobile application security testing environment designed to help security engineers, researchers, developers, and penetration testers to identify security vulnerabilities, malicious behaviours and privacy concerns in mobile applications using static and dynamic analysis. It supports all the popular mobile application binaries and source code formats built for Android and iOS devices. In addition to automated security assessment, it also offers an interactive testing environment to build and execute scenario based test/fuzz cases against the application.
This talk covers:
Using MobSF for static analysis of mobile applications.
Interactive dynamic security assessment of Android and iOS applications.
Solving Mobile app CTF challenges.
Reverse engineering and runtime analysis of Mobile malware.
How to shift left and integrate MobSF/mobsfscan SAST and DAST in your build pipeline.
Connector Corner: Seamlessly power UiPath Apps, GenAI with prebuilt connectors (DianaGray10)
Join us to learn how UiPath Apps can directly and easily interact with prebuilt connectors via Integration Service--including Salesforce, ServiceNow, Open GenAI, and more.
The best part is you can achieve this without building a custom workflow! Say goodbye to the hassle of using separate automations to call APIs. By seamlessly integrating within App Studio, you can now easily streamline your workflow, while gaining direct access to our Connector Catalog of popular applications.
We’ll discuss and demo the benefits of UiPath Apps and connectors including:
Creating a compelling user experience for any software, without the limitations of APIs.
Accelerating the app creation process, saving time and effort
Enjoying high-performance CRUD (create, read, update, delete) operations for seamless data management.
Speakers:
Russell Alfeche, Technology Leader, RPA at qBotic and UiPath MVP
Charlie Greenberg, host
Discover top-tier mobile app development services, offering innovative solutions for iOS and Android. Enhance your business with custom, user-friendly mobile applications.
Essentials of Automations: Exploring Attributes & Automation ParametersSafe Software
Building automations in FME Flow can save time and money and help businesses scale by eliminating data silos and providing data to stakeholders in real time. One essential component of orchestrating complex automations is the use of attributes & automation parameters (both formerly known as "keys"). In fact, it's unlikely you'll ever build an Automation without these components, but what exactly are they?
Attributes & automation parameters enable the automation author to pass data values from one automation component to the next. During this webinar, our FME Flow Specialists will cover leveraging the three types of these output attributes & parameters in FME Flow: Event, Custom, and Automation. As a bonus, they’ll also be making use of the Split-Merge Block functionality.
You’ll leave this webinar with a better understanding of how to maximize the potential of automations by making use of attributes & automation parameters, with the ultimate goal of setting your enterprise integration workflows up on autopilot.
Driving Business Innovation: Latest Generative AI Advancements & Success StorySafe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/temporal-event-neural-networks-a-more-efficient-alternative-to-the-transformer-a-presentation-from-brainchip/
Chris Jones, Director of Product Management at BrainChip, presents the "Temporal Event Neural Networks: A More Efficient Alternative to the Transformer" tutorial at the May 2024 Embedded Vision Summit.
The expansion of AI services necessitates enhanced computational capabilities on edge devices. Temporal Event Neural Networks (TENNs), developed by BrainChip, represent a novel and highly efficient state-space network. TENNs demonstrate exceptional proficiency in handling multi-dimensional streaming data, facilitating advancements in object detection, action recognition, speech enhancement and language model/sequence generation. Through the utilization of polynomial-based continuous convolutions, TENNs streamline models, expedite training processes and significantly diminish memory requirements, achieving notable reductions of up to 50x in parameters and 5,000x in energy consumption compared to prevailing methodologies like transformers.
Integration with BrainChip’s Akida neuromorphic hardware IP further enhances TENNs’ capabilities, enabling the realization of highly capable, portable and passively cooled edge devices. This presentation delves into the technical innovations underlying TENNs, presents real-world benchmarks, and elucidates how this cutting-edge approach is positioned to revolutionize edge AI across diverse applications.
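TENNs themselves are proprietary, so the snippet below is only a generic linear state-space recurrence, the model family this talk situates TENNs in; it is not BrainChip's architecture. It illustrates why such models suit streaming edge workloads: each time step costs a fixed amount of compute and memory, with no attention over a growing context window.

```python
# Generic linear state-space recurrence: h_t = A h_{t-1} + B x_t, y_t = C h_t.
# NOT BrainChip's TENN; only the family of models it belongs to.
import numpy as np

def run_ssm(x, A, B, C):
    """Process a 1-D input stream one sample at a time."""
    h = np.zeros(A.shape[0])
    ys = []
    for x_t in x:            # true streaming: constant memory per step
        h = A @ h + B * x_t
        ys.append(C @ h)
    return np.array(ys)

rng = np.random.default_rng(0)
A = 0.9 * np.eye(4)          # decaying memory of past inputs
B = rng.standard_normal(4)
C = rng.standard_normal(4)
y = run_ssm(rng.standard_normal(100), A, B, C)
print(y.shape)               # (100,): one output per input sample
```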
Conversational agents, or chatbots, are increasingly used to access all sorts of services using natural language. While open-domain chatbots - like ChatGPT - can converse on any topic, task-oriented chatbots - the focus of this paper - are designed for specific tasks, like booking a flight, obtaining customer support, or setting an appointment. Like any other software, task-oriented chatbots need to be properly tested, usually by defining and executing test scenarios (i.e., sequences of user-chatbot interactions). However, there is currently a lack of methods to quantify the completeness and strength of such test scenarios, which can lead to low-quality tests, and hence to buggy chatbots.
To fill this gap, we propose adapting mutation testing (MuT) for task-oriented chatbots. To this end, we introduce a set of mutation operators that emulate faults in chatbot designs, an architecture that enables MuT on chatbots built using heterogeneous technologies, and a practical realisation as an Eclipse plugin. Moreover, we evaluate the applicability, effectiveness and efficiency of our approach on open-source chatbots, with promising results.
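As a rough illustration of the mutation-testing loop the paper adapts (not the authors' actual operators or Eclipse tooling), the sketch below mutates a toy intent design by deleting training phrases, re-runs a test scenario against each mutant, and reports a mutation score; surviving mutants point at under-tested behaviour.

```python
# Toy mutation testing (MuT) loop for a task-oriented chatbot design.
# The design format, operator, and matcher are simplified stand-ins.
import copy

design = {  # intent name -> training phrases
    "book_flight": ["book a flight", "I need a plane ticket"],
    "cancel": ["cancel my booking"],
}

def delete_phrase_mutants(design):
    """Mutation operator: drop one training phrase at a time."""
    for intent, phrases in design.items():
        for i in range(len(phrases)):
            mutant = copy.deepcopy(design)
            del mutant[intent][i]
            yield mutant

def classify(design, utterance):
    """Toy matcher standing in for a real NLU engine."""
    for intent, phrases in design.items():
        if any(p in utterance for p in phrases):
            return intent
    return "fallback"

# Test scenarios: (user utterance, expected intent).
tests = [("please book a flight tomorrow", "book_flight")]

mutants = list(delete_phrase_mutants(design))
killed = sum(
    any(classify(m, u) != expected for u, expected in tests)
    for m in mutants
)
print(f"mutation score: {killed}/{len(mutants)}")  # survivors = weak tests
```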
How information systems are built or acquired puts information, which is what they should be about, in a secondary place. Our language adapted accordingly: we no longer talk about information systems but about applications. Applications evolved in a way that breaks data into diverse fragments, tightly coupled with applications and expensive to integrate. The result is technical debt, which is repaid by taking out even bigger "loans", producing an ever-increasing technical debt. Software engineering and procurement practices work in sync with market forces to maintain this trend. This talk demonstrates how natural this situation is. The question is: can something be done to reverse the trend?
Fueling AI with Great Data with Airbyte WebinarZilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
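As a minimal sketch of the ingestion step described here, the snippet below chunks records pulled from a source and embeds each chunk, producing (chunk, vector) pairs ready for a vector store. The chunking strategy and stub embedding are assumptions, and no Airbyte-specific API is used.

```python
# Sketch: turn source records into embedded chunks for a RAG index.
import numpy as np

def chunk(text: str, size: int = 200) -> list[str]:
    """Naive fixed-size character chunking; real pipelines often split
    on sentences or tokens instead."""
    return [text[i : i + size] for i in range(0, len(text), size)]

def embed(text: str) -> np.ndarray:
    # Stand-in for a real embedding model.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.random(384)

records = [{"id": 1, "body": "Example document text pulled from a source."}]

index = []  # (chunk_text, vector) pairs, ready to insert into a vector DB
for record in records:
    for piece in chunk(record["body"]):
        index.append((piece, embed(piece)))

print(f"indexed {len(index)} chunks")
```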
Introduction of Cybersecurity with OSS at Code Europe 2024Hiroshi SHIBATA
I develop the Ruby programming language as well as RubyGems and Bundler, the package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS), with examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
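To keep the examples in one language, here is a Python sketch of the kind of automated vulnerability lookup this talk motivates: querying the public OSV.dev API (which indexes RubyGems advisories, among other ecosystems) for known CVEs affecting a pinned gem. The gem name and version are examples only.

```python
# Sketch: look up known vulnerabilities for a pinned RubyGems dependency
# via the public OSV.dev query API. Gem name/version are illustrative.
import json
import urllib.request

query = {
    "package": {"name": "rack", "ecosystem": "RubyGems"},
    "version": "2.2.3",
}
req = urllib.request.Request(
    "https://api.osv.dev/v1/query",
    data=json.dumps(query).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    vulns = json.load(resp).get("vulns", [])

for v in vulns:
    # `aliases` typically carries the CVE identifiers for an advisory.
    print(v["id"], v.get("aliases", []), "-", v.get("summary", ""))
```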
Data Rodeo: A Data Analytics Environment for the Central Texas Region
1. A Data Analytics Environment for the Central Texas Region
Dr. Jen Duthie, PE
D-STOP Symposium, April 2016
DATA RODEO
2. Goals
• Access to data & tools
• Innovation & collaboration
• Data-centric research & learning
• Enable Smart Cities
3. Rodeo - Framework
[Framework diagram] Data sources feeding the Rodeo:
• Public Sector Agencies → agency data
• Private Sector (app developers, communications, OEMs) → regional data
• Open Portals (City, State) and OpenStreetMap → published/crowd-sourced data, via public release of select data
D-STOP + TACC provide storage, analytics, research, and education, serving the community through open portals and OpenStreetMap.
Map image: screenshot from http://www.openstreetmap.org; TMC image source: http://www.moxa.com (full link at http://bit.ly/1qfBJrd)
4. Guiding principles
• Open: access data, access code
• Scalable: individuals, cities, regions
• Replicable: works here, works there
• Empowering: easy access, new insights
5. DATA RODEO
Server Side
• Leveraging TACC @ UT
• Hosted in the cloud
• Geospatial versioning (e.g., GeoGig)
• Conversion for common data types (i.e., ETL)
• Free and open source components
Client Side
• Edit via free tools (e.g., iD, QGIS)
• Edit via proprietary tools (e.g., ArcGIS)
• Analyze via custom tools
• Visualize via data dashboard
• Access via public API
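The slides stop at bullet points, so here is a small, heavily hypothetical sketch of what the "access via public API" path could look like for a client: fetch a GeoJSON layer and inspect its attributes. The endpoint URL and layer name are invented for illustration; the slides do not document an actual API.

```python
# Hypothetical client-side access sketch: fetch a GeoJSON layer from a
# made-up Data Rodeo endpoint and inspect feature attributes.
import json
import urllib.request

URL = "https://datarodeo.example.org/api/layers/travel_times.geojson"  # hypothetical

with urllib.request.urlopen(URL) as resp:
    layer = json.load(resp)

# GeoJSON FeatureCollections carry geometry plus free-form properties.
for feature in layer["features"][:5]:
    print(feature["properties"])
```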
7. RODEO-enabled innovations
• Lower barrier to entry for transportation modeling and analytics
• Test predictive models against reality
• Enable coordination for regional planning
• Data-driven policies
Guiding principles
• Archive transportation-related data from multiple sources (private & public)
• Provide easy and intuitive access to data & data analysis tools through a web framework
• Enable use of advanced modeling tools & computational resources through the framework
• Promote "reproducible science" & faster technology transfer – enable research