Barbara Poore's presentation in the 2nd Workshop on usability of geographic information, 23rd March 2010 at UCL, London. See details at http://www.virart.nottingham.ac.uk/GI%20Usability/index.html
The metadata crisis: Can geographic information be made more usable?
1. The Metadata Crisis: Can geographic information be made more usable?
--Barbara Poore
--Eric Wolf
U.S. Geological Survey
2. Excavating a politics of repair and maintenance
“…perhaps we have been looking in the wrong place. Perhaps we should have been looking at breakdown and failure as no longer atypical and therefore only worth addressing if they result in catastrophe and, instead, at breakdown and failure as the means by which societies learn and learn to re-produce.”
--Graham and Thrift, 2007
3. Metadata
• Metadata describe the content, quality, condition, and other characteristics of data.
• Major uses of metadata:
– organize and maintain an organization's investment in data.
– provide information to data catalogs and clearinghouses.
– provide information to aid data transfer.
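To make the definition above concrete, here is a minimal Python sketch of a metadata record for a geographic dataset. The field names (title, positional_accuracy_m, access_url, and so on) are illustrative stand-ins rather than fields drawn from a specific standard such as FGDC CSDGM or ISO 19115, and the catalog-entry method only hints at how a clearinghouse might index such a record.

from dataclasses import dataclass
from datetime import date

@dataclass
class MetadataRecord:
    """Illustrative metadata record; every field name here is hypothetical."""
    title: str                    # content: what the dataset is
    abstract: str                 # content: short description
    spatial_extent: tuple         # content: bounding box (west, south, east, north)
    positional_accuracy_m: float  # quality: reported horizontal accuracy in metres
    completeness_note: str        # condition: known gaps or omissions
    last_updated: date            # condition: currency of the data
    access_url: str               # transfer: where the data can be obtained

    def to_catalog_entry(self) -> dict:
        # A data catalog or clearinghouse would index entries shaped like this.
        return {
            "title": self.title,
            "abstract": self.abstract,
            "bbox": self.spatial_extent,
            "updated": self.last_updated.isoformat(),
            "url": self.access_url,
        }

roads = MetadataRecord(
    title="County road centrelines",
    abstract="Road centrelines digitised from 1:24,000 source maps.",
    spatial_extent=(-77.2, 38.8, -76.9, 39.1),
    positional_accuracy_m=12.0,
    completeness_note="Private roads are not included.",
    last_updated=date(2010, 3, 1),
    access_url="https://example.org/data/roads",
)
print(roads.to_catalog_entry())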
16. Some rules for Metadata
Rule #1: Metadata should be object-based.
Rule #2: Metadata should link out.
Rule #3: Metadata should leave breadcrumbs, or a history.
OSM doesn't link out itself, but the user ID and user name can link into the Wiki, where users are encouraged to create profiles. The profile is an extension of the metadata that can change as needed.
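As a rough sketch of what the three rules could look like in practice, the Python fragment below attaches metadata directly to a single map object (Rule #1), records outbound links such as a contributor's wiki profile (Rule #2), and keeps an edit history as breadcrumbs (Rule #3). It models a simplified, hypothetical OSM-like element; the class names and the wiki URL pattern are invented for illustration and are not the actual OpenStreetMap data model or API.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Edit:
    """One breadcrumb in an object's history."""
    user: str
    timestamp: datetime
    comment: str

@dataclass
class MapObject:
    """A single map feature that carries its own metadata (Rule #1)."""
    object_id: int
    tags: dict                                   # descriptive key-value pairs on the object itself
    links: list = field(default_factory=list)    # outbound links, e.g. a contributor's profile (Rule #2)
    history: list = field(default_factory=list)  # who changed what, and when (Rule #3)

    def edit(self, user: str, new_tags: dict, comment: str) -> None:
        self.tags.update(new_tags)
        self.history.append(Edit(user, datetime.now(timezone.utc), comment))
        # Hypothetical profile URL pattern; a real project would define its own.
        self.links.append(f"https://wiki.example.org/User:{user}")

pub = MapObject(object_id=42, tags={"amenity": "pub", "name": "The Green Man"})
pub.edit("mapper_jane", {"opening_hours": "12:00-23:00"}, "added opening hours")
for entry in pub.history:
    print(entry.user, entry.timestamp.isoformat(), entry.comment)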
17. Metadata Squared
• Data and metadata interchangeable
• Multiple linked ways to get information, all socially mediated, produced by the community, accessible to any user:
– Computer programs
– Tiling schemes
– Web pages
– Wiki pages
– IRC chat rooms
– YouTube videos
– Tweets
18. The Before and After of Metadata
Before → After
• Made by professionals → Made by users
• Process of subtraction → Process of addition
• Structured (Hierarchies) → Miscellaneous (Tags)
• Implicit links → Explicit links
• Metadata travels → User travels
19. Everything is Miscellaneous
• 1st Order (organize things in space)
• 2nd Order (separation of description from object)
• 3rd Order (everything digital, data not distinguishable from metadata)
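One way to read the third order is that the same key-value tags act as both the data and its description, and a "category" is just a query over tags rather than a place in a hierarchy. The short Python sketch below illustrates that idea with made-up tags; the 'source' tag behaves as metadata in one query and as ordinary data in another, and nothing in the structure tells them apart.

# Third-order organisation: each feature is a bag of tags, and a
# "category" is nothing more than a query over those tags.
features = [
    {"name": "Thames Path", "type": "footpath", "surface": "gravel", "source": "volunteer"},
    {"name": "A40", "type": "road", "lanes": "4", "source": "national mapping agency"},
    {"name": "Regent's Canal", "type": "waterway", "source": "volunteer"},
]

def select(items, **criteria):
    """Return the items whose tags match every key-value pair in criteria."""
    return [item for item in items if all(item.get(k) == v for k, v in criteria.items())]

# The same 'source' tag is metadata in one query and plain data in another;
# the structure does not distinguish the two.
print(select(features, source="volunteer"))
print(select(features, type="footpath", source="volunteer"))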