Governments produce large amounts of valuable data, such as spreadsheets and maps, as part of daily operations and decision-making. This data can be useful to many citizens and organizations beyond the government, and it is ultimately the citizen who has paid for its production. More and more governments make this data publicly available through data catalogues such as data.gov, data.gov.uk, and statcentral.ie. This slide set introduces dcat, an RDF vocabulary for publishing the metadata of such catalogues, which supports the querying, federation, consumption, and archival of these valuable data assets.
dcat: An RDF vocabulary for interoperability of data catalogues
1. Digital Enterprise Research Institute www.deri.ie
dcat: An RDF vocabulary for interoperability of data catalogues
Richard Cyganiak, Fadi Maali, Vassilios Peristeras
Copyright 2009 Digital Enterprise Research Institute. All rights reserved.
2. Agenda
Why catalogue interoperability is important
A survey of data catalogues
Introducing the dcat vocabulary
First experiments with integrated catalogue data
Where to take this next?
3. Government data catalogues
Now more than 30 catalogues online
National: U.S., UK, Australia, New Zealand
State level: New South Wales, California, Massachusetts, Maine
Regional and local: New York, San Francisco, London, Vancouver, Kent County
Both official and private initiatives
4. Catalogue websites do not unlock the full potential of the collected metadata.
5. Beyond catalogue websites
Querying across catalogs
Overlapping regional coverage – U.S., California, SF
Supra-national catalogs – data.gov.eu?
New user interfaces
Faceted browsing
Specialized UI for geographical/statistical/tabular sub-sections of a catalogue
Social annotation
Bulk processing of datasets
Search indexes that inspect dataset contents
Update notifications
6. Current state of interoperability
Most major catalogues do expose their contents in a structured format!
CSV
Atom feeds
RDFa
But using this data is difficult:
Different formats for each catalogue
Different metadata fields in each
Metadata fields poorly documented
Contents of metadata fields are inconsistent or do not match documentation
7. A survey of data catalogues
In-depth review of seven catalogues: data.gov, data.gov.uk, data.gov.nz, data.australia.gov.au, datasf.org, data.london.gov.uk, statcentral.ie
Looking at metadata, not into the datasets
10. Direct download links
Download links can go straight to the data (Excel, CSV, …) or to a splash page or license page
% of direct links:
data.london.gov.uk: 100%
data.gov: 95%
datasf.org: 10%
data.gov.uk: 7%
11. The dcat vocabulary
Intended as an interoperability standard
Vocabulary expressed in RDF Schema
Vocabulary namespace: http://vocab.deri.ie/dcat#
Misc. information: http://vocab.deri.ie/dcat-overview
12. Design notes
Hepp’s Law: An integration ontology must not introduce distinctions that are finer than the distinctions made in the data to be integrated.
Focus on the metadata fields that are available in all or most catalogues
Require no data cleansing before a catalogue can be published in dcat
Re-use Dublin Core, SKOS, FOAF whenever possible
13. Concepts
dcat:Catalog
dcat:Dataset
dcat:CatalogRecord
dcat:Distribution
with subclasses dcat:Feed, dcat:WebService
skos:Concept, skos:ConceptScheme
foaf:Organization
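To make these concepts concrete, here is a minimal sketch of a catalogue entry in Turtle, using the classes above together with properties that appear later in this deck (dcat:dataset, dcat:distribution, dcat:downloadURL). All example.gov URIs are hypothetical, and dc: is assumed to be the Dublin Core terms namespace:

  @prefix dcat: <http://vocab.deri.ie/dcat#> .
  @prefix dc:   <http://purl.org/dc/terms/> .
  @prefix foaf: <http://xmlns.com/foaf/0.1/> .

  # A catalogue, its publisher, and one dataset with one downloadable
  # distribution. All example.gov URIs are hypothetical.
  <http://example.gov/catalog> a dcat:Catalog ;
      dc:title "Example government data catalogue" ;
      dc:publisher <http://example.gov/agency> ;
      dcat:dataset <http://example.gov/dataset/air-quality> .

  <http://example.gov/agency> a foaf:Organization ;
      foaf:name "Example Environment Agency" .

  <http://example.gov/dataset/air-quality> a dcat:Dataset ;
      dc:title "Air quality measurements" ;
      dcat:distribution [
          a dcat:Distribution ;
          dcat:downloadURL <http://example.gov/files/air-quality.csv> ;
          dc:format "text/csv"
      ] .

Note how the publisher is a foaf:Organization and the distribution hangs off the dataset, mirroring the class list above.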
15. Initial experiments
Set up a D2R Server over four catalogues (US, AU, SF, London):
http://lab.linkeddata.deri.ie/govcat/
SPARQL interface: http://lab.linkeddata.deri.ie/govcat/snorql/
Links to Geonames, DBpedia
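For readers unfamiliar with D2R Server: it publishes a relational database as RDF through a declarative D2RQ mapping. The fragment below is only an illustrative sketch of what such a setup involves; the database, table, and column names are hypothetical and do not reflect the actual govcat schema:

  @prefix map:  <#> .
  @prefix d2rq: <http://www.wiwiss.fu-berlin.de/suhl/bizer/D2RQ/0.1#> .
  @prefix dcat: <http://vocab.deri.ie/dcat#> .
  @prefix dc:   <http://purl.org/dc/terms/> .

  # Hypothetical database connection; the real govcat schema is not shown
  # in this deck.
  map:db a d2rq:Database ;
      d2rq:jdbcDSN "jdbc:mysql://localhost/govcat" ;
      d2rq:jdbcDriver "com.mysql.jdbc.Driver" .

  # Each row of a hypothetical "datasets" table becomes a dcat:Dataset.
  map:Dataset a d2rq:ClassMap ;
      d2rq:dataStorage map:db ;
      d2rq:uriPattern "dataset/@@datasets.id@@" ;
      d2rq:class dcat:Dataset .

  # Map the title column to dc:title.
  map:datasetTitle a d2rq:PropertyBridge ;
      d2rq:belongsToClassMap map:Dataset ;
      d2rq:property dc:title ;
      d2rq:column "datasets.title" .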
16. SPARQL across datasets
# Prefix declarations were omitted on the slide; dcat: is the namespace given
# earlier in this deck, dc: is assumed to be Dublin Core terms, and the
# default prefix : stands for the catalogue service's own namespace
# (its declaration is not shown here).
PREFIX dcat: <http://vocab.deri.ie/dcat#>
PREFIX dc:   <http://purl.org/dc/terms/>

SELECT ?title ?url
WHERE {
  ?dataset a dcat:Dataset ;
      dc:title ?title ;
      dcat:theme :education ;
      dcat:distribution ?distribution .
  ?distribution dcat:downloadURL ?url ;
      dc:format ?format ;
      dcat:size ?size .
  ?size dcat:bytes ?bytes .
  FILTER (?bytes < 1048576 && ?format = "text/xml")
}
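In words: across the integrated catalogues, find the title and direct download URL of every education-themed dataset that has an XML distribution smaller than one megabyte (1,048,576 bytes).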
17. SPARQL query with external data
# Same assumed prefixes as the previous query; dbpedia: is assumed to be the
# DBpedia property namespace, and : the catalogue service's own namespace.
PREFIX dcat:    <http://vocab.deri.ie/dcat#>
PREFIX dc:      <http://purl.org/dc/terms/>
PREFIX dbpedia: <http://dbpedia.org/property/>

SELECT ?title
WHERE {
  :data.gov dcat:dataset ?dataset .
  ?dataset dc:title ?title ;
      dc:publisher ?agency .
  ?agency dbpedia:budget ?budget .
  FILTER (?budget > 50000000000)
}
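In words: find the titles of datasets in the data.gov catalogue whose publishing agency has, according to DBpedia, a budget above 50 billion. This only works because the catalogue data is linked to external DBpedia resources, as noted on slide 15.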
18. Benefits of the dcat standard
Embedded metadata in catalogue web pages increases findability
Enables decentralised publishing
Enables federated search
Will enable one-click download and installation of data packages
Serves as manifest file for digital preservation
Applications can be built once and work with multiple catalogues
19. Where next?
Get feedback on the vocabulary, improve where necessary
Write up a guide to using dcat
Explore how to use it with voiD, SDMX+RDF
Get more catalogues to expose dcat format
So far, everything happened in DERI, but we want to open this up. Where?