Presentation created for the CILIP Cataloguing Interest Group event on Linked Data, 25th November 2013 (http://www.cilip.org.uk/cataloguing-and-indexing-group/events/linked-data-what-cataloguers-need-know-cig-event)
This tutorial explains the Data Web vision, some preliminary standards and technologies, as well as some tools and technological building blocks developed by the AKSW research group at Universität Leipzig.
https://doi.org/10.6084/m9.figshare.11854626.v1
Presented at the Dutch National Librarian/Information Professional Association annual conference 2011 - NVB2011
November 17, 2011
I used these slides in the context of a cultural heritage presentation so the examples are relevant to that community. For example the choice of CIDOC CRM is obvious in that community.
Introduction to linked open data, RDF: the Resource Description Framework, Tools to convert data to RDF, Tools for linking/reconciliation/resolution, Storing and maintaining the data, BBC and Linked Data
Do the LOCAH-Motion: How to Make Bibliographic and Archival Linked Data, Adrian Stevenson
Presentation given at the Dev8d Developer Days event at the University of London Students Union, London, UK on 15th February 2011.
The talk was primarily aimed at developers with the assumption that they knew a bit about RDF and Linked Data, so it doesn’t discuss these except in passing. I was mainly trying to give some specifics on the technicalities involved, and what platforms and tools we’re using, so people can follow the same path if they wanted.
More info at http://blogs.ukoln.ac.uk/locah/2011/02/14/locah-lightening-at-dev8d/ and http://wiki.2011.dev8d.org/w/Session-L18
Overview of current developments in web searching. Standard web search is largely unchanged although there are some developments with Ask, Exalead and Live. Most action is in social search, social bookmarking, online video, and some in academic / scholarly resources.
From the Feb 19, 2014 NISO Virtual Conference: The Semantic Web Coming of Age: Technologies and Implementations
Kevin Ford, Semantic Web Applications in Libraries: The Road to BIBFRAME
Linked Open Data Principles, benefits of LOD for sustainable development, Martin Kaltenböck
Presentation held on 18.09.2013 at the OKCon 2013 in Geneva, Switzerland in the course of the workshop: How Linked Open data supports Sustainable Development and Climate Change Development by Martin Kaltenböck (SWC), Florian Bauer (REEEP) and Jens Laustsen (GBPN).
Transient and persistent RDF views over relational databases in the context o..., Nikolaos Konstantinou
As far as digital repositories are concerned, numerous benefits emerge from making their contents available as Linked Open Data (LOD), and this is leading more and more repositories in this direction. However, several factors need to be taken into account in doing so, among which is whether the transition needs to be materialized in real time or at asynchronous intervals. In this paper we provide the problem framework in the context of digital repositories, discuss the benefits and drawbacks of both approaches, and draw our conclusions after evaluating a set of performance measurements. Overall, we argue that in contexts with infrequent data updates, as is the case with digital repositories, persistent RDF views are more efficient than real-time SPARQL-to-SQL rewriting systems in terms of query response times, especially when expensive SQL queries are involved.
This book explains the Linked Data domain by adopting a bottom-up approach: it introduces the fundamental Semantic Web technologies and building blocks, which are then combined into methodologies and end-to-end examples for publishing datasets as Linked Data, and use cases that harness scholarly information and sensor data. It presents how Linked Data is used for web-scale data integration, information management and search. Special emphasis is given to the publication of Linked Data from relational databases as well as from real-time sensor data streams. The authors also trace the transformation from the document-based World Wide Web into a Web of Data. Materializing the Web of Linked Data is addressed to researchers and professionals studying software technologies, tools and approaches that drive the Linked Data ecosystem, and the Web in general.
This chapter introduces the semantic modeling procedure, detailing its technical characteristics, possibilities and limitations. First, we present the languages that are used for semantic description. We present RDF, RDFS and OWL, describe their expressiveness in terms of describing Web Resources, and the abilities they provide in order to describe, query, administer and manage resources at a semantic layer. Next, we present the vocabularies that are used in order to provide common grounds in understanding and communicating ideas and concepts. The technologies, together with the vocabularies used, altogether comprise the modern landscape of Semantic Web/Linked Data applications and serve as the basis for maintaining, analyzing datasets and building applications on top of them.
In this Chapter, we summarize and discuss the material presented throughout this book. We recapitulate what is presented and discussed in each Chapter. We discuss the most interesting aspects of the Web of Data landscape, highlighting its main contributions, and then continue with a discussion, mentioning our most important observations, including domain-specific benefits in the LOD domain. We conclude the Chapter with a discussion of open research challenges in the Linked Data domain.
Incremental Export of Relational Database Contents into RDF Graphs, Nikolaos Konstantinou
In addition to tools offering RDF views over databases, a variety of tools exist that allow exporting database contents into RDF graphs; tools that in many cases have been shown to perform better than the former. However, when database contents are exported into RDF, it is not always optimal or even necessary to dump the whole database contents every time. In this paper, the problem of incremental generation and storage of the resulting RDF graph is investigated. An implementation of the R2RML standard is used in order to express mappings that associate tuples from the source database to triples in the resulting RDF graph. Next, a methodology is proposed that enables incremental generation and storage of an RDF graph based on a source relational database, and it is evaluated through a set of performance measurements. Finally, a discussion is presented regarding the authors' most important findings and conclusions.
An Approach for the Incremental Export of Relational Databases into RDF Graphs, Nikolaos Konstantinou
Several approaches have been proposed in the literature for offering RDF views over databases. In addition to these, a variety of tools exist that allow exporting database contents into RDF graphs. The approaches in the latter category have often been shown to perform better than those in the former. However, when database contents are exported into RDF, it is not always optimal or even necessary to export, or dump as this procedure is often called, the whole database contents every time. This paper investigates the problem of incremental generation and storage of the RDF graph that results from exporting relational database contents. In order to express mappings that associate tuples from the source database to triples in the resulting RDF graph, an implementation of the R2RML standard is subject to testing. Next, a methodology is proposed and described that enables incremental generation and storage of the RDF graph that originates from the source relational database contents. The performance of this methodology is assessed through an extensive set of measurements. The paper concludes with a discussion regarding the authors' most important findings.
This chapter provides an overview of the methodologies and technologies that support Linked Data design and publishing. More specifically, the chapter starts with a presentation of the rationale and a discussion of how data can be opened up (i.e. published under an open license). Basic principles are first introduced regarding the cases in which content can be opened up, and the most common approaches to accomplishing this are presented. Next, we discuss how data can be modeled, authored, serialized and stored. In this chapter we also provide an overview of the most common technical solutions and widely used software tools that can serve this purpose. Overall, the chapter aims to provide an analysis of the sub-problems into which the Linked Open Data publishing task is to be broken down, namely opening, modeling, linking, processing, and visualizing content, followed by a presentation of the most representative software solutions.
In this chapter, we introduce and discuss the problems that Linked Data solve and the concepts that are related to these problems. We introduce and analyze the basic concepts that are related to the generation of Linked Data and the Semantic Web in general. We provide a brief history of the Semantic Web and the associated evolution of concepts, problem frameworks and solution approaches, all targeted at offering efficient and intelligent solutions to information representation, management and exploitation. More specifically, we introduce the main reasons for the creation of the Semantic Web and the problems that it addresses. Next, we discuss the distinctions between basic terms such as data, information, knowledge, metadata, ontologies, semantic annotations etc. We introduce the notions of interoperability, integration, merging, mapping, and continue with introducing ontologies, reasoners, knowledge bases, all fundamental concepts in the Linked Data ecosystem.
Entity Linking in Queries: Tasks and Evaluation, Faegheh Hasibi
Slides for the ICTIR 2015 paper "Entity Linking in Queries: Tasks and Evaluation"
Annotating queries with entities is one of the core problem areas in query understanding. While seemingly similar, the task of entity linking in queries is different from entity linking in documents and requires a methodological departure due to the inherent ambiguity of queries. We differentiate between two specific tasks, semantic mapping and interpretation finding, discuss current evaluation methodology, and propose refinements. We examine publicly available datasets for these tasks and introduce a new manually curated dataset for interpretation finding. To further deepen the understanding of task differences, we present a set of approaches for effectively addressing these tasks and report on experimental results.
My Linked Data tutorial presentation that I presented at Semtech 2012.
http://semtechbizsf2012.semanticweb.com/sessionPop.cfm?confid=65&proposalid=4724
In this Chapter, we consider relational databases as a data source for the generation of Linked Data, given that they constitute one of the most popular data storage media, containing huge data volumes that feed the vast majority of information systems worldwide. In this context, we review the related literature and reveal the main motivations that fuel the relevant approaches, and the benefits that arise from their application. We present a categorization of approaches that map relational databases to the Semantic Web and identify tool implementations that extract RDF graphs from relational database instances. We also sketch a proof-of-concept use case scenario regarding how a repository with scholarly information can be converted to a Linked Data endpoint. The Chapter ends with a discussion of the open issues and future outlook for the problem of RDF generation from relational databases.
Presentation at ELAG 2011, European Library Automation Group Conference, Prague, Czech Republic. 25th May 2011
http://elag2011.techlib.cz/en/815-lifting-the-lid-on-linked-data/
Google's recent announcement that it will support the use of microformats in their search opens up new possibilities for librarians and library technologists to support the goals of the semantic web; namely to provide better access, reuse and recombinations of library resources and services on the open web. This lightning talk introduces the semantic web and semantic markup technologies.
The Semantic Web is about to grow up. By efforts such as the Linked Open Data initiative, we finally find ourselves at the edge of a Web of Data becoming reality. Standards such as OWL 2, RIF and SPARQL 1.1 shall allow us to reason with and ask complex structured queries on this data, but still they do not play together smoothly and robustly enough to cope with huge amounts of noisy Web data. In this talk, we discuss open challenges relating to querying and reasoning with Web data and raise the question: can the emerging Web of Data ever catch up with the now ubiquitous HTML Web?
This presentation is an introduction to RDFa, prepared as the fourth assignment of IST 681 at the iSchool, Syracuse University. It was made by Kai Li, a library science student at Syracuse University.
These slides introduce SPARQL, the ‘SELECT’ query in SPARQL, and show how you can use relatively straightforward SELECT queries on the British Library’s BNB (British National Bibliography) SPARQL endpoint
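A SELECT query of the kind these slides describe can be sketched as follows. The query below is illustrative only: it assumes common Dublin Core properties rather than the BNB's actual vocabulary choices, so it would need adapting before being run against the live endpoint.

```sparql
# Illustrative SELECT: list ten resources with their titles and creators.
# The prefix and property choices are assumptions, not the BNB's exact model.
PREFIX dct: <http://purl.org/dc/terms/>

SELECT ?book ?title ?creator
WHERE {
  ?book dct:title   ?title ;
        dct:creator ?creator .
}
LIMIT 10
```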
Presentation made at the RLUK "Introduction to the European library" event September 2013 (http://www.rluk.ac.uk/content/rluk-introduction-european-library-24-sep-2013). Introduces linked data, hack days, and gives examples of applications built at hack days and similar events/initiatives using library data
2. URIs
Friday, 22 November 13
URIs are fundamental to Linked Data - you can do RDF without creating/coining your own URIs, but it's not really linked data. "Use URIs as names for things", "Use HTTP URIs so people can look up those names" (http://www.w3.org/DesignIssues/LinkedData)
3. http://www.amazon.co.uk/Pride-Prejudice-Penguin-Classics-Austen/dp/0141439513
What does this identify?
Doesn’t identify (as you might expect) Pride and Prejudice, but rather identifies the Amazon
web page that describes the Penguin Classics edition of Pride and Prejudice. This may seem
like splitting hairs, but if you want to start to make statements about things using their
identifiers it is very important. I might want to state that the author of Pride and Prejudice is
Jane Austen. If I say:
http://www.amazon.co.uk/Pride-Prejudice-Penguin-Classics-Austen/dp/0141439513 is
authored by Jane Austen, then strictly I’m saying Jane Austen wrote the web page, rather than
the book described by the web page.
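The fix is to coin a distinct URI for the book itself and relate it to the page. A minimal sketch in Turtle - the example.org URI is hypothetical, while dcterms:creator and foaf:isPrimaryTopicOf are real Dublin Core and FOAF properties:

```turtle
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

# A URI for the book itself (hypothetical), distinct from the Amazon page
<http://example.org/id/book/pride-and-prejudice>
    dcterms:creator "Jane Austen" ;
    foaf:isPrimaryTopicOf <http://www.amazon.co.uk/Pride-Prejudice-Penguin-Classics-Austen/dp/0141439513> .
```

Statements about the book now attach to its own URI, and the Amazon page is just one page about it.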
6. Cool URIs
TBL - “Cool URIs don’t change”
Avoid including specific technology and variables in your URIs
Some debate as to whether opaque or readable URIs are better - I suspect no right answer
To some extent we must also embrace (as on the web) “broken links aren’t the end of the
world” - but they are a PITA
8. Choosing Vocabularies
(...or Ontologies?)
Terms used somewhat interchangeably, although ‘ontology’ probably slightly more formal.
Problem from library perspective is that ‘vocabulary’ in this context doesn’t mean a list of
terms (like LCSH for example) but a schema for describing types of things
Shared ontologies are a way of agreeing we are describing the same type of thing.
To give a concrete example - Dublin Core is a widely used vocabulary. It contains a property
of ‘creator’ (with a URI of http://purl.org/dc/terms/creator) - all users of this property in
their data are working from a common definition (although this clearly doesn’t stop misuse!)
10. RDFS
http://www.w3.org/2000/01/rdf-schema
RDF Schema - generally used for describing other vocabularies!
Note especially that rdfs:label turns up a lot in data and is often the place the actual literal
string you are interested in is recorded
11. OWL
http://www.w3.org/2002/07/owl
The confusingly named ‘Web Ontology Language’ (working group quotes AA Milne in
justification) - also underlying ontology that allows you to put constraints on other
ontologies
But also the home of the infamous ‘sameAs’ statement which allows you to say one thing is
the same as another thing in linked data
13. Dublin Core Terms
http://purl.org/dc/terms
Important to note this is not just the 15 elements that might spring to mind when you think
of DC, although some of those are the most widely used - Title and Creator especially
14. SKOS
http://www.w3.org/2004/02/skos/core
Simple Knowledge Organization System
For any structured ‘vocabulary’ (not in RDF sense)
For example an entry in NAF would be a SKOS Concept. As would be any Authorized Library
of Congress Subject Heading
SKOS has a ‘prefLabel’ property which is sometimes used as the ‘display’ label (as opposed to
rdfs:label)
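As a sketch, here is how an LCSH heading might look as a SKOS Concept in Turtle (sh85076502 is the LCSH heading ‘Library science’; the alternative label shown is illustrative):

```turtle
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .

<http://id.loc.gov/authorities/subjects/sh85076502> a skos:Concept ;
    skos:prefLabel "Library science"@en ;
    skos:altLabel "Library economy"@en .
```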
15. Other Vocabularies
• Bibliographic Ontology a.k.a BIBO (http://purl.org/ontology/bibo)
• Bio (http://vocab.org/bio/0.1/.html)
• FRBR (http://vocab.org/frbr/core.html)
• ISBD (http://iflastandards.info/ns/isbd)
• CRM (http://erlangen-crm.org/current)
All of these seeing some use in the ‘lodlam’ (linked open data in libraries, archives and museums) space. CRM being used extensively by the British Museum (http://collection.britishmuseum.org)
... and more ... e.g. SPAR ontologies (http://sempublishing.sourceforge.net)
16. <http://data.lib.cam.ac.uk/id/entry/cambrdgedb_1000346> <http://purl.org/dc/terms/title>
"Early medieval history of Kashmir";
<http://purl.org/dc/terms/identifier> "UkCU1000346";
<http://purl.org/dc/terms/language> <http://id.loc.gov/vocabulary/iso639-2/eng>;
<http://RDVocab.info/Elements/placeOfPublication> <http://id.loc.gov/vocabulary/countries/ii>;
<http://iflastandards.info/ns/isbd/elements/P1016> "New Delhi".
Sample data (truncated) from Cambridge University Library
You can see that it uses DC, RDA and ISBD vocabularies/ontologies in single record
description
18. Static files
Once you have data represented in RDF you can ‘publish’ this by simply putting the RDF
file(s) in an accessible place on a web server.
This is like authoring ‘static’ html pages and uploading them - e.g. via ftp
Publishing Linked Data in this way is simple, but lacks sophistication and probably works
better for small data sets - not for millions of triples that we might typically expect in library
data - although there are probably arguments that much could be achieved by one RDF file
per ‘record’
Large RDF files are sometimes published this way in conjunction with more sophisticated
access to allow for easy download of large amount of data etc.
Example of publishing RDF as a simple static file: http://www.meanboyfriend.com/overdue_ideas/middlemash.rdf (for background see http://www.meanboyfriend.com/overdue_ideas/2009/10/middlemash-middlemarch-middlemap/)
Example of publishing large RDF dump: http://www.bl.uk/bibliographic/download.html#basicbnb
See also http://linkeddatabook.com/editions/1.0/#htoc66
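As an illustration of how simple the static approach can be, here is a minimal sketch (my own, not from the talk) of serving RDF files with their registered media types, using only Python's standard library:

```python
# Minimal sketch: serve static RDF files with the correct media types.
# File names are hypothetical; the media types are the registered ones.
from http.server import SimpleHTTPRequestHandler

class RDFHandler(SimpleHTTPRequestHandler):
    # guess_type() consults this map before falling back to mimetypes
    extensions_map = {
        **SimpleHTTPRequestHandler.extensions_map,
        ".rdf": "application/rdf+xml",
        ".ttl": "text/turtle",
        ".nt": "application/n-triples",
    }

# To publish the current directory:
#   from http.server import HTTPServer
#   HTTPServer(("", 8000), RDFHandler).serve_forever()
```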
19. Dynamically generated views
Typically based on data stored in a triple store (what’s a triple store? basically a kind of
database designed specifically to store RDF triples) or a more traditional database.
More like the way a blog like Wordpress works than a static html page
Software generates the ‘views’ (which can be HTML views of the data, or RDF in one or more
serialisations, or other formats). The view you get is sometimes determined by the URL you
use, or based on the type of request made to the web server which allows you to specify the
format you want (this is called ‘content negotiation’ and is part of HTTP)
Lots of different ways of doing this - just as to publish HTML you can use Wordpress, Blogger, Drupal, other Content Management Systems etc. etc.
See:
http://linkeddatabook.com/editions/1.0/#htoc68
http://linkeddatabook.com/editions/1.0/#htoc69
http://linkeddatabook.com/editions/1.0/#htoc70
http://linkeddatabook.com/editions/1.0/#htoc71
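Content negotiation itself is straightforward to sketch: the server reads the client's Accept header and picks the best serialisation it supports. A rough illustration (the supported-formats table is my own assumption, not any particular platform's API):

```python
# Rough sketch of content negotiation: choose an output format from
# the client's Accept header, respecting q-value preference weights.
# The supported media types here are illustrative.
SUPPORTED = {
    "text/html": "html",
    "text/turtle": "turtle",
    "application/rdf+xml": "rdfxml",
}

def negotiate(accept_header, default="html"):
    """Return the best supported serialisation for an Accept header."""
    choices = []
    for part in accept_header.split(","):
        fields = part.strip().split(";")
        media_type = fields[0].strip()
        q = 1.0  # per HTTP, an absent q-value means q=1
        for field in fields[1:]:
            name, _, value = field.strip().partition("=")
            if name.strip() == "q":
                try:
                    q = float(value)
                except ValueError:
                    pass
        if media_type in SUPPORTED:
            choices.append((q, SUPPORTED[media_type]))
    return max(choices, key=lambda c: c[0])[1] if choices else default
```

So a browser asking for `text/html` gets a human-readable page, while a client asking for `text/turtle` gets RDF from the same URI.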
20. Embedded in HTML
Publishing structured data embedded in HTML is becoming more common - this can be done
with or without it being ‘linked data’. However there are ways of publishing ‘linked data’ in
this way. The one that seems to have most momentum at the moment is ‘schema.org’ (http://schema.org) which is backed by Google/Yahoo/Yandex/Bing and others.
Whether ‘schema.org’ markup is linked data probably depends exactly how you use it. There
is a mapping of schema.org to “RDFa Lite” which is an initiative from W3C (http://www.w3.org/TR/rdfa-lite/) to allow embedding of RDF in web pages
See
http://linkeddatabook.com/editions/1.0/#htoc67
21. Linking it up
Key aspect of ‘linked data’ is ... the ‘links’.
Back to Linked Data design issues statement by TBL “Include links to other URIs. so that they
can discover more things.” (http://www.w3.org/DesignIssues/LinkedData)
Typical to establish your own links first, then establish ‘sameAs’ statements to equivalent
URIs elsewhere.
Not necessarily easy to establish equivalence between things (this is part of the point of
moving to identifiers - to try to make this easier)
Sometimes can do this via existing identifiers
Sometimes need to do some lookup via text strings (e.g. with id.loc.gov this is an approach
you can take)
Sometimes takes more work...
Tools that can help - OpenRefine (https://github.com/OpenRefine/OpenRefine/wiki) and SILK
(http://wifo5-03.informatik.uni-mannheim.de/bizer/silk/)
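The simplest kind of lookup via text strings can be sketched like this (my own illustration, not how OpenRefine or SILK work internally): normalise the name strings, then match them against a table of known headings. The LC name authority URI shown is the real identifier for Jane Austen; the lookup table itself is illustrative.

```python
# Rough sketch of string-based reconciliation: normalise names, then
# look them up in a table of known headings -> identifier URIs.
import re
import unicodedata

def normalise(name):
    """Lower-case, strip accents, collapse punctuation and whitespace."""
    name = unicodedata.normalize("NFKD", name)
    name = "".join(c for c in name if not unicodedata.combining(c))
    name = re.sub(r"[^\w\s]", " ", name.lower())
    return " ".join(name.split())

# Illustrative lookup table: normalised heading -> identifier URI
HEADINGS = {
    normalise("Austen, Jane, 1775-1817"):
        "http://id.loc.gov/authorities/names/n79032879",
}

def reconcile(name):
    """Return the matching identifier URI, or None if no match."""
    return HEADINGS.get(normalise(name))
```

Even this toy version shows why the work is hard: punctuation, diacritics and date variants all have to be smoothed away before two strings will ever match.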
26. Is it Open?
Not going to say a lot about this but ... what licence is used in its publication?
BL published BNB under CC0 declaration - puts it in the public domain
Archives Hub did the same
Cambridge ended up with a mixture of ‘attribution’ licence (ODC-BY) and public domain (ODC-PDDL) - depending on the rights they had over their metadata (see http://data.lib.cam.ac.uk/datasets.php)
Europeana goes with CC0
DPLA says ‘we don’t believe copyright applies to metadata ... but if it does then we put it in the public domain’ (http://dp.la/info/wp-content/uploads/2013/04/DPLAMetadataPolicy.pdf)
28. Crawling
“due to the likelihood of scalability problems with on-the-fly link
traversal and federated querying, it may transpire that widespread
crawling and caching will become the norm in making data from a
large number of data sources available”
If we envisage a distributed bibliographic data environment, it is this
approach we’d need to take to build the equivalent of an OPAC.
http://linkeddatabook.com/editions/1.0/#htoc84
This type of approach is what Google does! See also the
‘CommonCrawl’ http://commoncrawl.org which is trying to make a
public version of a web crawl available - the equivalent could be done
for linked data (or they may turn out to be the same thing)
Definitely a challenging area - see my blog post http://www.meanboyfriend.com/overdue_ideas/2012/08/what-to-do-with-linked-data/
29. Follow your nose & Just in time
One of the key aspects of linked data is that the links enable you to take a ‘follow your nose’ approach through the available links. For some applications this is all that is needed
For example I wrote a bookmarklet (a way of adding functionality/data to websites by clicking a browser bookmark) using this approach - see http://www.meanboyfriend.com/overdue_ideas/2011/07/compose-yourself/
It works as long as you can return enough information quickly enough. It wouldn’t work for
(e.g.) a search application where you can’t really index all the relevant information ‘just in
time’, but it could work where you wanted to enhance the display of a record in an OPAC (or
equivalent) ‘just in time’ when a person views the record.
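The pattern can be sketched in a few lines (my own toy illustration): start from one URI, collect the triples describing it, and dereference any URIs found in the object position. Here the ‘web’ is an in-memory dict rather than live HTTP lookups, and all the URIs are hypothetical.

```python
# Toy 'follow your nose': dereference a URI, then follow object URIs.
# WEB stands in for live HTTP lookups; all URIs are hypothetical.
WEB = {
    "http://example.org/book/1": [
        ("http://example.org/book/1", "dcterms:creator",
         "http://example.org/person/austen"),
    ],
    "http://example.org/person/austen": [
        ("http://example.org/person/austen", "rdfs:label", "Jane Austen"),
    ],
}

def follow_your_nose(start, depth=2):
    """Collect triples by following object URIs up to `depth` hops."""
    triples, queue, seen = [], [start], set()
    while queue and depth > 0:
        next_queue = []
        for uri in queue:
            if uri in seen:
                continue
            seen.add(uri)
            for triple in WEB.get(uri, []):
                triples.append(triple)
                if triple[2].startswith("http://"):
                    next_queue.append(triple[2])  # follow the link
        queue, depth = next_queue, depth - 1
    return triples
```

Starting from the book's URI, one hop discovers the creator's URI and the next hop pulls in the label you'd want for a ‘just in time’ display enhancement.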
30. Federated Queries
Idea being send out queries to distributed linked data sources using SPARQL (a query
language for RDF Triple stores).... anyone who has dealt with federated search in libraries will
know the challenges that this can bring!
http://linkeddatabook.com/editions/1.0/#htoc84
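For reference, SPARQL 1.1 federated queries use the SERVICE keyword to send part of a query pattern to a remote endpoint. A sketch (the endpoint URL is hypothetical):

```sparql
PREFIX dcterms: <http://purl.org/dc/terms/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

# Match books locally, but look up each creator's label remotely
SELECT ?title ?label WHERE {
  ?book dcterms:title ?title ;
        dcterms:creator ?person .
  SERVICE <http://example.org/name-authority/sparql> {
    ?person rdfs:label ?label .
  }
}
```

Anyone who has run federated search will recognise the failure mode: the query is only as fast and as reliable as the slowest remote endpoint.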