These slides were presented as part of a W3C tutorial at the CSHALS 2010 conference (http://www.iscb.org/cshals2010). The slides are adapted from a longer introduction to the Semantic Web available at http://www.slideshare.net/LeeFeigenbaum/semantic-web-landscape-2009 .
A PDF version of the slides is available at http://thefigtrees.net/lee/sw/cshals/cshals-w3c-semantic-web-tutorial.pdf .
Create Linked Open Data (LOD) microthesauri using the Art & Architecture Thesaurus (AAT) LOD. Viewing and management options are designed for the non-technical user. Everyone can use, create, derive from, and map to AAT microthesauri, making a digital collection an LOD-ready dataset.
This invited keynote at the Social Computing Track at WI-IAT21 gives an introduction to Knowledge Graphs and how we build them collaboratively. It also presents a brief analysis of the links in Wikidata.
Data integration, data interoperation, and data quality are major challenges that continue to haunt enterprises. Every enterprise, either by choice or by chance, has created massive silos of data in different formats, with duplications and quality issues.
Knowledge graphs have proven to be a viable solution to address the integration and interoperation problem. Semantic technologies in particular provide an intelligent way of creating an abstract layer for the enterprise data model and mapping of siloed data to that model, allowing a smooth integration and a common view of the data.
Technologies like OWL (Web Ontology Language) and RDF (Resource Description Framework) are the backbone of semantics for knowledge graph implementation. Enterprises use OWL to build an ontology model that creates a common definition of the concepts in their specific domain and of how they are connected to each other.
They then use RDF to create a triple-format representation of their data by mapping it to the ontology. This approach makes their data smart and machine-understandable.
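As a toy illustration of this ontology-plus-mapping pattern, here is a minimal Python sketch using the rdflib library; the ex: namespace, class names, and instance data are invented for the example, not taken from any particular enterprise model:

# A hypothetical enterprise ontology plus one mapped record, using rdflib.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS, XSD

EX = Namespace("http://example.org/enterprise#")
g = Graph()
g.bind("ex", EX)

# Ontology layer (OWL): common definitions of concepts and their connections.
g.add((EX.Customer, RDF.type, OWL.Class))
g.add((EX.Order, RDF.type, OWL.Class))
g.add((EX.placedBy, RDF.type, OWL.ObjectProperty))
g.add((EX.placedBy, RDFS.domain, EX.Order))
g.add((EX.placedBy, RDFS.range, EX.Customer))

# Data layer (RDF): a row from some siloed system, mapped to the ontology.
g.add((EX.order42, RDF.type, EX.Order))
g.add((EX.order42, EX.placedBy, EX.alice))
g.add((EX.alice, RDF.type, EX.Customer))
g.add((EX.alice, EX.name, Literal("Alice", datatype=XSD.string)))

print(g.serialize(format="turtle"))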
But how can enterprises control and validate the quality of this mapped data? Furthermore, how can they use this one abstract representation of data to meet all their different business requirements? Different departments, different lines of business (LoBs), and different business branches all have their own data needs, creating a new challenge to be tackled by the enterprise.
In this talk we will look at how the power of SHACL (Shapes Constraint Language), a W3C standard for defining constraints over RDF data, complements the two core semantic technologies OWL and RDF, and at the similarities, the overlaps, and the differences between them.
We will talk about how SHACL gives enterprises the power to reuse, customize, and validate their data for various scenarios, use cases, and business requirements, making the application of semantics even more practical.
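As a concrete taste of what such validation looks like, here is a minimal sketch using the pyshacl library; the shape, namespace, and data are invented for the example:

# Validate invented customer data against a hypothetical SHACL shape.
from rdflib import Graph
from pyshacl import validate

shapes = Graph().parse(data="""
@prefix sh:  <http://www.w3.org/ns/shacl#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:  <http://example.org/enterprise#> .

ex:CustomerShape a sh:NodeShape ;
    sh:targetClass ex:Customer ;
    sh:property [
        sh:path ex:name ;
        sh:minCount 1 ;
        sh:datatype xsd:string ;
    ] .
""", format="turtle")

data = Graph().parse(data="""
@prefix ex: <http://example.org/enterprise#> .
ex:alice a ex:Customer .   # missing the required ex:name
""", format="turtle")

conforms, _report_graph, report_text = validate(data, shacl_graph=shapes)
print(conforms)       # False: ex:alice violates the minCount constraint
print(report_text)

Different departments can maintain their own shape graphs over the same underlying RDF, which is one way the reuse-and-customize story plays out in practice.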
Information Extraction and Linked Data Cloud (Dhaval Thakker)
In the media industry there is a great emphasis on providing descriptive metadata to consumers as part of media assets. Information extraction (IE) is considered an important tool for the metadata generation process, and its performance largely depends on the knowledge base it utilizes. Advances in “Linked Data Cloud” research provide a great opportunity for generating such knowledge bases, which benefit from the participation of a wider community. In this talk, I will discuss our experiences of utilizing the Linked Data Cloud in conjunction with a GATE-based IE system.
Analytics on Big Knowledge Graphs Deliver Entity Awareness and Help Data Linking (Ontotext)
A presentation by Ontotext’s CEO Atanas Kiryakov, given during Semantics 2018, an annual conference that brings together researchers and professionals from all over the world to share knowledge and expertise on semantic computing.
[Conference] Cognitive Graph Analytics on Company Data and News (Ontotext)
Atanas Kiryakov, Ontotext's CEO, presented at the Data Day Texas 2018 conference, which took place in Austin, TX, USA, on January 27th.
Ontotext's talk was part of the Graph Day Sessions and its focus was 'Cognitive graph analytics on company data and news', aiming to demonstrate the power of Graph Analytics to create links between various datasets and lead to knowledge discovery.
Haystack 2018 - Algorithmic Extraction of Keywords, Concepts and Vocabularies (Max Irwin)
Presentation as given to the Haystack Conference, which outlines research and techniques for automatic extraction of keywords, concepts, and vocabularies from text corpora.
The Next Generation of AI-powered Search (Trey Grainger)
What does it really mean to deliver an "AI-powered Search" solution? In this talk, we’ll bring clarity to this topic, showing you how to marry the art of the possible with the real-world challenges involved in understanding your content, your users, and your domain. We'll dive into emerging trends in AI-powered Search, as well as many of the stumbling blocks found in even the most advanced AI and Search applications, showing how to proactively plan for and avoid them. We'll walk through the various uses of reflected intelligence and feedback loops for continuous learning from user behavioral signals and content updates, also covering the increasing importance of virtual assistants and personalized search use cases found within the intersection of traditional search and recommendation engines. Our goal will be to provide a baseline of mainstream AI-powered Search capabilities available today, and to paint a picture of what we can all expect just on the horizon.
This presentation was given at the Balisage 2017 conference, and provides an overview of three key RDF standards for constraint modeling, annotation and the use of data frames and cubes in RDF.
The task of keyword extraction is to automatically identify a set of terms that best describe the document. Automatic keyword extraction establishes a foundation for various natural language processing applications: information retrieval, the automatic indexing and classification of documents, automatic summarization and high-level semantic description, etc. Although keyword extraction applications usually work on single documents (a document-oriented task), keyword extraction is also applicable to more demanding tasks, i.e. keyword extraction from a whole collection of documents, from an entire web site, or from tweets on Twitter. In the era of big data, obtaining an effective and efficient method for automatic keyword extraction from huge amounts of multi-topic textual sources is of high importance.
We proposed a novel Selectivity-Based Keyword Extraction (SBKE) method, which extracts keywords from source text represented as a network. The node selectivity value is calculated from a weighted network as the average weight distributed on the links of a single node, and it is used in the procedure of keyword candidate ranking and extraction. Selectivity slightly outperforms extraction based on standard centrality measures; therefore, selectivity and its modification, generalized selectivity, are included in the SBKE method as node centrality measures. Selectivity-based extraction does not require linguistic knowledge, as it is derived purely from the statistical and structural information of the network, so it can be easily ported to new languages and used in a multilingual scenario. The true potential of the proposed SBKE method is in its generality, portability and low computational cost, which positions it as a strong candidate for preparing collections that lack human annotations for keyword extraction. The portability of SBKE was tested on Croatian, Serbian and English texts: more precisely, it was developed on Croatian news and ported for extraction from parallel abstracts of scientific publications in Serbian and English.
The constructed parallel corpus of scientific abstracts with annotated keywords allows a better comparison of the method's performance across languages, since we have a controlled experimental environment and data. The achieved keyword extraction results, measured with an F1 score, are 49.57% for English and 46.73% for Serbian if we disregard keywords that are not present in the abstracts. If we evaluate against the whole keyword set, the F1 scores are 40.08% and 45.71%, respectively. This work shows that SBKE can be easily ported to a new language, domain and type of text. Still, there are drawbacks: the method can extract only words that appear in the text.
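Since the selectivity measure is simply the average weight on a node's links, it is easy to state precisely. A minimal sketch with networkx, over an invented toy co-occurrence network (not the authors' actual code or data):

# Rank nodes of a weighted co-occurrence network by selectivity.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("keyword", "extraction", 5),   # weights = co-occurrence counts
    ("keyword", "document", 2),
    ("extraction", "document", 1),
    ("document", "collection", 3),
])

def selectivity(graph, node):
    """Strength (sum of incident edge weights) divided by degree."""
    degree = graph.degree(node)
    if degree == 0:
        return 0.0
    return graph.degree(node, weight="weight") / degree

ranked = sorted(G.nodes, key=lambda n: selectivity(G, n), reverse=True)
print([(n, round(selectivity(G, n), 2)) for n in ranked])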
Natural Language Search with Knowledge Graphs (Haystack 2019) (Trey Grainger)
To optimally interpret most natural language queries, it is necessary to understand the phrases, entities, commands, and relationships represented or implied within the search. Knowledge graphs serve as useful instantiations of ontologies which can help represent this kind of knowledge within a domain.
In this talk, we'll walk through techniques to build knowledge graphs automatically from your own domain-specific content, how you can update and edit the nodes and relationships, and how you can seamlessly integrate them into your search solution for enhanced query interpretation and semantic search. We'll have some fun with some of the more search-centric use cases of knowledge graphs, such as entity extraction, query expansion, disambiguation, and pattern identification within our queries: for example, transforming the query "bbq near haystack" into
{
  "filter": ["doc_type:restaurant"],
  "query": {
    "boost": {
      "b": "recip(geodist(38.034780,-78.486790),1,1000,1000)",
      "query": "bbq OR barbeque OR barbecue"
    }
  }
}
We'll also specifically cover use of the Semantic Knowledge Graph, a particularly interesting knowledge graph implementation available within Apache Solr that can be auto-generated from your own domain-specific content and which provides highly-nuanced, contextual interpretation of all of the terms, phrases and entities within your domain. We'll see a live demo with real world data demonstrating how you can build and apply your own knowledge graphs to power much more relevant query understanding within your search engine.
Presentation of the Semantic Knowledge Graph research paper at the 2016 IEEE 3rd International Conference on Data Science and Advanced Analytics (Montreal, Canada - October 18th, 2016)
Abstract—This paper describes a new kind of knowledge representation and mining system which we are calling the Semantic Knowledge Graph. At its heart, the Semantic Knowledge Graph leverages an inverted index, along with a complementary uninverted index, to represent nodes (terms) and edges (the documents within intersecting postings lists for multiple terms/nodes). This provides a layer of indirection between each pair of nodes and their corresponding edge, enabling edges to materialize dynamically from underlying corpus statistics. As a result, any combination of nodes can have edges to any other nodes materialize and be scored to reveal latent relationships between the nodes. This provides numerous benefits: the knowledge graph can be built automatically from a real-world corpus of data, new nodes - along with their combined edges - can be instantly materialized from any arbitrary combination of preexisting nodes (using set operations), and a full model of the semantic relationships between all entities within a domain can be represented and dynamically traversed using a highly compact representation of the graph. Such a system has widespread applications in areas as diverse as knowledge modeling and reasoning, natural language processing, anomaly detection, data cleansing, semantic search, analytics, data classification, root cause analysis, and recommendation systems. The main contribution of this paper is the introduction of a novel system - the Semantic Knowledge Graph - which is able to dynamically discover and score interesting relationships between any arbitrary combination of entities (words, phrases, or extracted concepts) through dynamically materializing nodes and edges from a compact graphical representation built automatically from a corpus of data representative of a knowledge domain.
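The core indexing idea is compact enough to sketch. Below is a deliberately simplified Python illustration of edges materializing from postings-list intersections; the corpus is invented, and the real system uses a statistical relatedness score over foreground and background document sets rather than this plain ratio:

# Terms are nodes; an edge between two terms materializes from the
# intersection of their postings lists in an inverted index.
docs = {
    1: "semantic search with knowledge graphs",
    2: "knowledge graphs power semantic search engines",
    3: "barbecue restaurants near austin",
}

index = {}   # inverted index: term -> set of document ids
for doc_id, text in docs.items():
    for term in set(text.split()):
        index.setdefault(term, set()).add(doc_id)

def edge_score(a, b):
    """Materialize and score an edge between terms a and b."""
    docs_a, docs_b = index.get(a, set()), index.get(b, set())
    if not docs_a:
        return 0.0
    return len(docs_a & docs_b) / len(docs_a)   # containment of b in a's docs

print(edge_score("semantic", "knowledge"))   # 1.0: strongly related
print(edge_score("semantic", "barbecue"))    # 0.0: unrelated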
The International Federation of Library Associations and Institutions (IFLA) is responsible for the development and maintenance of International Standard Bibliographic Description (ISBD), UNIMARC, and the "Functional Requirements" family for bibliographic records (FRBR), authority data (FRAD), and subject authority data (FRSAD). ISBD underpins the MARC family of formats used by libraries world-wide for many millions of catalog records, while FRBR is a relatively new model optimized for users and the digital environment. These metadata models, schemas, and content rules are now being expressed in the Resource Description Framework language for use in the Semantic Web.
This webinar provides a general update on the work being undertaken. It describes the development of an Application Profile for ISBD to specify the sequence, repeatability, and mandatory status of its elements. It discusses issues involved in deriving linked data from legacy catalogue records based on monolithic and multi-part schemas following ISBD and FRBR, such as the duplication which arises from copy cataloging and FRBRization. The webinar provides practical examples of deriving high-quality linked data from the vast numbers of records created by libraries, and demonstrates how a shift of focus from records to linked-data triples can provide more efficient and effective user-centered resource discovery services.
Building a semantic search system - one that can correctly parse and interpret end-user intent and return the ideal results for users’ queries - is not an easy task. It requires semantically parsing the terms, phrases, and structure within queries, disambiguating polysemous terms, correcting misspellings, expanding to conceptually synonymous or related concepts, and rewriting queries in a way that maps the correct interpretation of each end user’s query into the ideal representation of features and weights that will return the best results for that user. Not only that, but the above must often be done within the confines of a very specific domain - rife with its own jargon and linguistic and conceptual nuances.
This talk will walk through the anatomy of a semantic search system and how each of the pieces described above fit together to deliver a final solution. We'll leverage several recently-released capabilities in Apache Solr (the Semantic Knowledge Graph, Solr Text Tagger, Statistical Phrase Identifier) and Lucidworks Fusion (query log mining, misspelling job, word2vec job, query pipelines, relevancy experiment backtesting) to show you an end-to-end working Semantic Search system that can automatically learn the nuances of any domain and deliver a substantially more relevant search experience.
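To make the query-rewriting stage tangible, here is a minimal, library-free sketch; the dictionaries are invented stand-ins for what mined query logs, a misspelling job, or a knowledge graph would actually supply, and this is not the Solr or Fusion API:

# Toy semantic query rewriter: correct misspellings, expand synonyms.
misspellings = {"barbeque": "barbecue", "restarant": "restaurant"}
synonyms = {"barbecue": ["bbq", "barbeque"]}

def rewrite(query: str) -> str:
    clauses = []
    for term in query.lower().split():
        term = misspellings.get(term, term)          # spelling correction
        variants = [term] + synonyms.get(term, [])   # synonym expansion
        clauses.append("(" + " OR ".join(variants) + ")")
    return " AND ".join(clauses)

print(rewrite("barbeque restarant"))
# (barbecue OR bbq OR barbeque) AND (restaurant)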
This tutorial explains the Data Web vision, some preliminary standards and technologies, as well as some tools and technological building blocks developed by the AKSW research group at Universität Leipzig.
Understanding RDF: the Resource Description Framework in Context (1999) (Dan Brickley)
Dan Brickley, 3rd European Commission Metadata Workshop, Luxembourg, April 12th 1999
Understanding RDF: the Resource Description Framework in Context
http://ilrt.org/discovery/2001/01/understanding-rdf/
Linked Open Data for Libraries, Archives, and Museums. This presentation is a basic overview of what LOD is and what technologies are needed to ensure the metadata around your collections is machine readable.
Semantic-Based Querying Using Ontology in Relational Database of Library Ma... (dannyijwest)
The traditional Web stores huge amounts of data in the form of Relational Databases (RDB), which are good at storing objects and the relationships between them. Relational Databases are dynamic in nature, which allows bringing tables together and helps the user search for related material across multiple tables. RDB are scalable, expanding as the data grows. The RDB uses a Structured Query Language, SQL, to access the databases for several data retrieval purposes. As the world today moves from the syntactic form to the semantic form, the Web is also taking its new form, the Semantic Web. The structured query of the RDB on the web can become a semantic query on the Semantic Web.
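As a sketch of that last claim, the same question can be asked of a relational table with SQL and of an RDF graph with SPARQL. The library-catalogue vocabulary below is invented for the example, and rdflib runs the query:

# The SQL query  SELECT title FROM books WHERE author = 'duCharme'
# restated as SPARQL over an equivalent RDF graph.
from rdflib import Graph

g = Graph().parse(data="""
@prefix ex: <http://example.org/library#> .
ex:book1 ex:title "Semantic Web Primer" ; ex:author ex:antoniou .
ex:book2 ex:title "Learning SPARQL"     ; ex:author ex:duCharme .
""", format="turtle")

sparql = """
PREFIX ex: <http://example.org/library#>
SELECT ?title WHERE {
    ?book ex:author ex:duCharme ;
          ex:title  ?title .
}
"""
for row in g.query(sparql):
    print(row.title)   # Learning SPARQL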
This talk introduces the concepts of web 3.0 technology and how it relates to technologies such as the Internet of Things (IoT), Grid Computing and the Semantic Web:
• A short history of web technologies:
o Web 1.0: Publishing static information with links for human consumption.
o Web 2.0: Publishing dynamic information created by users, for human consumption.
o Web 3.0: Publishing all kinds of information with links between data items, for machine consumption.
• Standardization of protocols for description of any type of data (RDF, N3, Turtle).
• Standardization of protocols for the consumption of data in “the grid” (SPARQL).
• Standardization of protocols for rules (RIF).
• Comparison with the evolution of technologies related to databases.
• Comparison of IoT solutions based on web 2.0 and web 3.0 technologies.
• Distributed solutions vs centralized solutions.
• Security
• Extensions of Peer-to-peer protocols (XMPP).
• Advantages of solutions based on web 3.0 and standards (IETF, XSF).
Duration of talk: 1-2 hours with questions.
Similar to CSHALS 2010 W3C Semantic Web Tutorial
Brief look at data segmenting decisions and use of Semantic Web technologies within Anzo. Presented at the 2011 W3C Linked Enterprise Data Patterns workshop.
Intro to the Semantic Web Landscape - 2011 (LeeFeigenbaum)
An introduction to the Semantic Web landscape as it stands near the end of 2011. Includes an introduction to the core technologies in the Semantic Web technology stack.
This material was presented at the November, 2011, Cambridge Semantic Web meetup.
Evolution Towards Web 3.0: The Semantic Web (LeeFeigenbaum)
This was a lecture I presented at Professor Stuart Madnick's class, "Evolution Towards Web 3.0" at the MIT Sloan School of Management on April 21, 2011. Please follow along with the speaker notes which add significant commentary to the slides.
These slides are a brief update on the status of the work of the current SPARQL Working Group. "SPARQL 1.1" collectively refers to the upcoming versions of the SPARQL query language, SPARQL update language, and other deliverables of the 2nd (current) SPARQL Working Group.
"SPARQL Cheat Sheet" is a short collection of slides intended to act as a guide to SPARQL developers. It includes the syntax and structure of SPARQL queries, common SPARQL prefixes and functions, and help with RDF datasets.
The "SPARQL Cheat Sheet" is intended to accompany the SPARQL By Example slides available at http://www.cambridgesemantics.com/2008/09/sparql-by-example/ .
These slides were originally a tutorial presented for the SIG preceding the May 2009 meeting of the PRISM Forum.
They attempt to give a survey of the technologies, tools, and state of the world with respect to the Semantic Web as of the first half of 2009.
UiPath Test Automation using UiPath Test Suite series, part 4 (DianaGray10)
Welcome to part 4 of the UiPath Test Automation using UiPath Test Suite series. In this session, we will cover a Test Manager overview along with the SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... (Jeffrey Haguewood)
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Essentials of Automations: Optimizing FME Workflows with Parameters (Safe Software)
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Transcript: Selling digital books in 2024: Insights from industry leaders - T... (BookNet Canada)
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
"Impact of front-end architecture on development cost", Viktor TurskyiFwdays
I have heard many times that architecture is not important for the front-end. I have also seen, many times, developers implement features on the front-end just by following the standard rules for a framework, thinking that this is enough to successfully launch the project, and then the project fails. How can this be prevented, and which approach should you choose? I have launched dozens of complex projects, and during the talk we will analyze which approaches have worked for me and which have not.
PHP Frameworks: I want to break free (IPC Berlin 2024) (Ralf Eggert)
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
Accelerate your Kubernetes clusters with Varnish Caching (Thijs Feryn)
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
DevOps and Testing slides at DASA Connect (Kari Kakkonen)
Slides by me and Rik Marselis from the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps is. We ended with a lovely workshop in which the participants tried to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
Key Trends Shaping the Future of Infrastructure.pdf (Cheryl Hung)
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open source: exploring how these areas are likely to mature and develop over the short and long term, and considering how organisations can position themselves to adapt and thrive.
Epistemic Interaction - tuning interfaces to provide information for AI support (Alan Dix)
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
CSHALS 2010 W3C Semantic Web Tutorial
1. The Semantic Web Landscape: A Practical Introduction. Lee Feigenbaum, VP Technology & Standards, Cambridge Semantics; Co-chair, W3C SPARQL Working Group. For CSHALS 2010 Tutorial Attendees, February 24, 2010
2. The W3C HCLS interest group set out to use Semantic Web technologies to receive precise answers to a complex question: A Motivating Example: Drug Discovery Find me genes involved in signal transduction that are related to pyramidal neurons.
8. A Semantic Web Approach (cont’d) …(trivially) spans several databases…
9. A Semantic Web Approach (cont’d) …to deliver targeted results…
10. Agreement on common terms and relationships Incremental, flexible data structure Good-enough modeling Query interface tailored to the data model What’s the trick?
13. Branding: Semantic Web, Web of Data, Giant Global Graph, Data Web, Web 3.0, Linked Data Web, Semantic Data Web
14. “The Semantic Web” a.k.a “Linked Open Data” Augments the World Wide Web Represents the Web’s information in a machine-readable fashion Enables… …targeted search …data browsing …automated agents What is it & why do we care? (1) World Wide Web : Web pages :: The Semantic Web : Data
15. “Semantic Web technologies” A family of technology standards that ‘play nice together’, including: Flexible data model Expressive ontology language Distributed query language Drive Web sites, enterprise applications What is it & why do we care? (2) The technologies enable us to build applications and solutions that were not possible, practical, or feasible traditionally.
16. A common set of technologies: ...enables diverse uses ...encourages interoperability A coherent set of technologies: …encourage incremental application …provide a substantial base for innovation A standard set of technologies: ...reduces proprietary vendor lock-in ...encourages many choices for tool sets A Common & Coherent Set of Technology Standards
19. 2010: Where we are. As technologies & tools have evolved, Semantic Web advocates have progressed through stages.
20. 2010: Where we’re not Image from Trey Ideker via Enoch Huang Semantic Web technologies are not a ‘magic crank’ for discovering new drugs (or solving other problems, for that matter)!
21. 2010: Where we’re not (cont’d) XML vs. RDF? “Ontology” vs. “ontology”? Data integration vs. reasoning vs. KBs vs. search vs. app. development vs. … Semantic Web vs. Linked Data? The Semantic Web still suffers from confusing and conflicting messages, each of which asserts it is “correct”.
22. 2010: Where we’re not (cont’d) People with appropriate skill sets for designing & building Semantic Web solutions are not widely available.
23. 2010: Where we’re not (cont’d) We don’t yet have standard solutions for privacy, trust, probability, and other elements of the Semantic Web vision.
27. RDF is… A schema-less data model that features unambiguous identifiers and named relations between pairs of resources.
28. RDF is… A labeled, directed graph of relations between resources and literal values. RDF graphs are collections of triples. Triples are made up of a subject, a predicate, and an object. Resources and relationships are named with URIs. [Diagram: subject --predicate--> object]
29. Example RDF triples: “Lee Feigenbaum works for Cambridge Semantics”; “Lee Feigenbaum was born in 1978”; “Cambridge Semantics is headquartered in Massachusetts” [Diagram: the three triples drawn as subject-predicate-object arrows]
30. Triples connect to form graphs [Diagram: a graph connecting Lee Feigenbaum, Cambridge Semantics, Boston, Massachusetts, and 1978 via the relations works for, born in, headquartered, capital, and lives in]
31. Why RDF? What’s different here? The graph data structure makes merging data with shared identifiers trivial. Triples act as a least common denominator for expressing data. URIs for naming remove ambiguity… the same identifier means the same thing.
33. RDF is the model, for which there are several concrete syntaxes: RDF/XML – standard, complex XML syntax Turtle – common, textual, triples-oriented syntax N3 – more expressive superset of Turtle N-Triples – textual, line-oriented, useful for streaming What does RDF look like? When writing RDF by hand and in many guides, examples, and discussions these days, you’ll see Turtle most often.
34. A Bit of Turtle. Write a triple by writing its parts separated by spaces (subject predicate object):
@prefix ex: <http://example.org/myvocab/> .
@prefix geo: <http://geonames.example/> .
ex:LeeFeigenbaum ex:employer ex:CambridgeSemantics .
ex:LeeFeigenbaum ex:birthYear 1978 .
ex:CambridgeSemantics ex:headquarters geo:BostonMA .
geo:BostonMA ex:population 574000 .
37. SPARQL is… A SQL-like language for querying sets of RDF graphs.
38. SPARQL is… A simple protocol for issuing queries and receiving results over HTTP. So… Every SPARQL client works with every SPARQL server!
39. SPARQL lets us: Pull information from structured and semi-structured data. Explore data by discovering unknown relationships. Query and search an integrated view of disparate data sources. Glue separate software applications together by transforming data from one vocabulary to another. Why SPARQL?
40. [Diagram: a SPARQL query engine integrating Dealer 1, Dealer 2, Dealer 3, an Employee Directory, an ERP / Budget System, the Web, and an EPA Fuel Efficiency Spreadsheet behind a web dashboard] What automobiles get more than 25 miles per gallon, fit within my department’s budget, and can be purchased at a dealer located within 10 miles of one of my employees?
SELECT ?automobile WHERE {
  ?automobile a ex:Car ;
              epa:mpg ?mpg ;
              ex:dealer ?dealer .
  ?employee a ex:Employee ;
            geo:loc ?loc .
  ?dealer geo:loc ?dealerloc .
  FILTER(?mpg > 25 && geo:dist(?loc, ?dealerloc) <= 10) .
}
43. From the explicit to the inferred. Three pieces of the Semantic Web technology stack (RDF Schema, OWL, RIF) are about describing a domain well enough to capture (some of) the meaning of resources and relationships in the domain. Apply knowledge to data to get more data.
45. Elements of: Vocabulary (defining terms) I define a relationship called “prescribed dose.” Schema (defining types) “prescribed dose” relates “treatments” to “dosages” (my prescribed dose is 2mg; therefore 2mg is a dosage) Taxonomy (defining hierarchies) Any “doctor” is a “medical professional” (therefore Dr. Brown is a medical professional) RDF Schema is…
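The slide's three RDFS elements can be written down and actually exercised. A minimal sketch with rdflib plus the owlrl inference library; the med: vocabulary is invented to match the slide's examples:

# RDFS vocabulary, schema, and taxonomy, with the inferences the slide names.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF
import owlrl

MED = Namespace("http://example.org/med#")
g = Graph().parse(data="""
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix med:  <http://example.org/med#> .

# Schema: "prescribed dose" relates treatments to dosages.
med:prescribedDose rdfs:domain med:Treatment ;
                   rdfs:range  med:Dosage .

# Taxonomy: any doctor is a medical professional.
med:Doctor rdfs:subClassOf med:MedicalProfessional .

med:drBrown a med:Doctor .
med:myTreatment med:prescribedDose med:twoMg .
""", format="turtle")

owlrl.DeductiveClosure(owlrl.RDFS_Semantics).expand(g)

print((MED.drBrown, RDF.type, MED.MedicalProfessional) in g)  # True
print((MED.twoMg, RDF.type, MED.Dosage) in g)                 # True (via range)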
47. Elements of ontology Same/different identity “author” and “auteur” are the same relation two resources with the same “ISBN” are the same “book” More expressive type definitions A “cycle” is a “vehicle” with at least one “wheel” A “bicycle” is a “cycle” with exactly two “wheels” More expressive relation definitions “sibling” is a symmetric predicate the value of the “favorite dwarf” relation must be one of “happy”, “sleepy”, “sneezy”, “grumpy”, “dopey”, “bashful”, “doc” OWL is…
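Written out in Turtle, the slide's OWL examples look roughly like the following sketch (invented ex: names; parsed with rdflib only to confirm it is well-formed):

# The slide's identity, class, and relation examples as OWL in Turtle.
from rdflib import Graph

g = Graph().parse(data="""
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:   <http://example.org/vocab#> .

ex:author owl:equivalentProperty ex:auteur .   # same relation, two names
ex:isbn   a owl:InverseFunctionalProperty .    # same ISBN => same book
ex:sibling a owl:SymmetricProperty .           # expressive relation definition

# A bicycle is a cycle with exactly two wheels.
ex:Bicycle rdfs:subClassOf ex:Cycle ;
    rdfs:subClassOf [ a owl:Restriction ;
                      owl:onProperty ex:wheel ;
                      owl:cardinality "2"^^xsd:nonNegativeInteger ] .
""", format="turtle")

print(len(g), "triples parsed")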
48. A class is a (named) collection of things with similar attributes OWL: Rich Class Definitions
49. A class is a (named) collection of things with similar attributes OWL: Rich Class Definitions
50. A class is a (named) collection of things with similar attributes OWL: Rich Class Definitions
53. Standard representation for exchanging sets of logical and business rules Logical rules A buyer buys an item from a seller if the seller sells the item to the buyer A customer becomes a "Gold" customer as soon as his cumulative purchases during the current year top $5000 Production rules Customers that become "Gold" customers must be notified immediately, and a golden customer card will be printed and sent to them within one week For shopping carts worth more than $1000, "Gold" customers receive an additional discount of 10% of the total amount RIF is…
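RIF itself is exchanged in its own XML and presentation syntaxes, so as a hedged illustration only, here is the logic of the "Gold customer" rule expressed instead as a SPARQL CONSTRUCT over an invented ex: vocabulary, run with rdflib:

# If cumulative purchases top $5000, infer that the customer is "Gold".
from rdflib import Graph

g = Graph().parse(data="""
@prefix ex: <http://example.org/shop#> .
ex:alice ex:cumulativePurchases 6200 .
ex:bob   ex:cumulativePurchases 1400 .
""", format="turtle")

rule = """
PREFIX ex: <http://example.org/shop#>
CONSTRUCT { ?customer a ex:GoldCustomer . }
WHERE     { ?customer ex:cumulativePurchases ?total .
            FILTER(?total > 5000) }
"""
inferred = g.query(rule).graph   # CONSTRUCT returns a graph of new triples
for triple in inferred:
    print(triple)                # only ex:alice becomes a GoldCustomer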
54. Fantasy Land Architecture [Diagram: a single Ontology / Schema layer feeding six separate Custom UIs]
69. Semantic Web Tools In 2010, there are a wide variety of open-source and commercial Semantic Web tools available.
70. Types of RDF Tools: triple stores (built on a relational database, or native RDF stores); development libraries; full-featured application servers. Most RDF tools contain some elements of each of these.
72. Types of SPARQL Tools: query engines (things that can run queries; most RDF stores provide a SPARQL engine); query rewriters (e.g. to query relational databases; more later); endpoints (things that accept queries on the Web and return results); client libraries (things that make it easy to ask queries).
73. Community-maintained list of query engines http://esw.w3.org/topic/SparqlImplementations Publicly accessible SPARQL endpoints http://esw.w3.org/topic/SparqlEndpoints Michael Bergman’s Sweet Tools searchable list: http://www.mkbergman.com/?page_id=325 Finding SPARQL Tools
79. What about… everything else? Standards don’t yet exist, but many tools exist to derive RDF and/or run SPARQL queries against other sources of data.
85. Where is it being used? On the Web: Google, Yahoo!, Best Buy, NY Times, US Government, UK Government
86. Where is it being used? Industries: Oil & Gas (integration, classification); Finance (structured data, ontologies, XBRL); Publishing (metadata); Government (structured data, metadata, classification); Libraries & museums (metadata, classification); IT (rapid application development & evolution)
87. Where is it being used? Health Care: Cleveland Clinic (clinical research; data integration and classification = better search); UT School of Health (public health surveillance; SAPPHIRE: classification, ontology-driven development); various Clinical Decision Support systems (agile, rule-driven, scalable in the face of change)
88. Where is it being used? Life Sciences: agile knowledgebases at Pfizer; target assessment at Eli Lilly; integrated information links at Novartis; AstraZeneca, J&J, UCB, … CSHALS chronicles many of these uses and many more.
90. These are horizontal, enabling technologies. But they apply particularly well to problems with these characteristics: Heterogeneous data from multiple sources Increasing reliance on connections within this data Rapidly changing information needs Significant early-mover advantage Large amounts of data that would benefit from classification Why are Semantic Web technologies appropriate for the life sciences? Many tactical and strategic challenges in the life sciences industry feature these traits.
92. Getting Started with Semantic Web technologies Goal: quick tactical wins on the path to large strategic value Be sure to consider the operational ramifications Who does what differently? Ideal Semantic Web projects/applications have an incremental path towards broad deployment that generates demonstrable value along the way
93. Look beyond the core Semantic Web capabilities and consider: integration with existing enterprise systems development & extension models deployment, logging, maintenance, backup tooling user experience Choose practical, enterprise-ready tools If you choose to build new components and assemble existing components together, it’s quite likely you’ll end up reinventing the wheel.
94. What level of expertise is necessary? Technologies only? Technologies + API? Technologies + tooling? Tooling only? … How will we acquire the expertise? In-house (and if so, how?) Vendor services 3rd-party services Open-source community Plan for Acquiring Expertise
95. I’m always happy to field questions & engage in discussion: lee@cambridgesemantics.com Thanks & Discussion
Editor's Notes
One of the goals of this tutorial is to de-mystify all of the names of technologies, tools, projects, etc. that swirl around the Semantic Web story. And since, as I researched this presentation, I saw that everyone seems to like this particular Gary Larson cartoon, it behooved me to include it.
The good – emphasizes the importance of the foundational layers (URIs and RDF); emphasizes the long-term roadmap/vision of what’s needed for the Semantic Web. The bad – implies that perhaps things can’t be taken seriously until all the pieces are in place; implies an order to the research; various versions of the cake tell different stories (importance of XML, absence of query, lack of UI/application layer, …). Valentin Zacharias wrote about the “infamy” part of the layer cake here: http://www.valentinzacharias.de/blog/2007/04/ban-semantic-web-layer-cake.html
The Ontology/ontology dichotomy is captured well by Jim Hendler at http://www.cs.rpi.edu/%7Ehendler/presentations/SemTech2008-2Towers.pdf
Definition.
Prescriptive.
Descriptive.
Formal.
The first is as opposed to relational tables or XML schemas where the schema needs to be explicitly adjusted to accommodate whatever data is being merged. The second is due to the expressivity of the model – can handle lists, trees, n-ary relations, etc. The third is as opposed to table & column identifiers or XML attribute names.
Definition.
Prescriptive.
Descriptive.
Descriptive (part 2). This is leagues ahead of the situation with SQL!