This webinar in the LOD2 webinar series presents release 2.0 of the LOD2 stack, which contains updates to the components OntoWiki and Silk, along with:
* the assisting SPARQL editor SPARQLED (DERI),
* the LOD-enabled Open Refine (formerly Google Refine) (ZEMANTA),
* the extended version of SILK with link suggestion management from LATC (DERI),
* the rdfAuthor library, which allows managing structured information from RDFa-enhanced websites (ULEI),
* the SPARQLPROXY, a PHP-based forward proxy for remote access to SPARQL endpoints (ULEI).
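Such a forward proxy mediates ordinary SPARQL-protocol HTTP requests. As a rough illustration (this is not SPARQLPROXY's actual code, and the endpoint URL is hypothetical), here is what a client-side GET request to a SPARQL endpoint looks like:

```python
from urllib.parse import urlencode

def build_sparql_request(endpoint: str, query: str) -> str:
    """Build the GET URL a client (or a forward proxy) would send
    to a SPARQL endpoint, per the SPARQL protocol's query parameter."""
    params = urlencode({
        "query": query,
        "format": "application/sparql-results+json",
    })
    return f"{endpoint}?{params}"

# Hypothetical endpoint URL, for illustration only.
url = build_sparql_request(
    "http://example.org/sparql",
    "SELECT ?s WHERE { ?s ?p ?o } LIMIT 5",
)
print(url)
```

Sending this URL with any HTTP client returns the query results as JSON; the proxy's job is simply to forward such requests to a remote endpoint on the client's behalf.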
Release 2.0 also contains a first contributed Debian package for a component maintained by a group outside the LOD2 consortium: with the help of ULEI, a package for the STANBOL engine (http://stanbol.apache.org/) has been contributed.
If you are interested in Linked (Open) Data principles and mechanisms, LOD tools and services, and concrete use cases that can be realised using LOD, then join us in the free LOD2 webinar series!
http://lod2.eu/BlogPost/webinar-series
Understanding the lock manager internals via the fb_lock_print utility
This session will provide a short introduction to the Firebird lock manager and its usage patterns. It will describe how the lock manager can affect the performance of highly loaded systems and outline possible bottlenecks and other problems, such as unexpected lock-ups/freezes, that may require special analysis. The structure of the lock table will also be explained.
It will also include a detailed description of the fb_lock_print utility and its usage, enabling the investigation of issues related to the lock manager. A few practical examples illustrating how to analyze the utility's output will be provided. This session is mainly of interest to Classic Server users and DBAs.
In this presentation, we will discuss the various connecting devices used in networking. We will define terms such as bridge, router, gateway, internet, and ISP.
To learn more about Welingkar School’s Distance Learning Program and the courses offered, visit:
http://www.welingkaronline.org/distance-learning/online-mba.html
Born from the wish to make linking tractable, the Link Discovery Framework for Metric Spaces (LIMES) is tailored towards the time-efficient and lossless discovery of links across knowledge bases. LIMES is an extensible declarative framework that encapsulates manifold algorithms dedicated to processing structured data of any sort. Built with extensibility and easy integration in mind, LIMES allows developers to implement applications that integrate, consume, and/or generate Linked Data. Within LOD2, it will be used for discovering links between knowledge bases.
This webinar will be presented by the LOD2 Partner: University of Leipzig (ULEI), Germany.
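To give an intuition for the problem LIMES addresses: link discovery means finding pairs of resources in two knowledge bases whose similarity exceeds a threshold. The following is a deliberately naive sketch (it is not LIMES's API, and all URIs and labels are made up) that performs the quadratic comparison LIMES's metric-space optimizations are designed to avoid:

```python
from difflib import SequenceMatcher

def discover_links(source, target, threshold=0.9):
    """Naive O(n*m) link discovery: compare every source label with
    every target label and keep pairs above a similarity threshold.
    LIMES prunes most of these comparisons using metric-space bounds."""
    links = []
    for s_uri, s_label in source.items():
        for t_uri, t_label in target.items():
            sim = SequenceMatcher(None, s_label.lower(), t_label.lower()).ratio()
            if sim >= threshold:
                links.append((s_uri, "owl:sameAs", t_uri))
    return links

# Hypothetical toy knowledge bases (URI -> label).
kb_a = {"ex:leipzig": "Leipzig", "ex:berlin": "Berlin"}
kb_b = {"db:Leipzig": "leipzig", "db:Bremen": "Bremen"}
print(discover_links(kb_a, kb_b))
```

Because similarity measures over metric spaces obey the triangle inequality, a framework like LIMES can discard most candidate pairs without computing their similarity at all, which is what makes link discovery tractable at scale.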
Having risen from non-existence only a few years ago, Node.js is already attracting the accolades and disdain that the Ruby and Rails community enjoyed and endured not long before. It overtook Rails as the most popular GitHub repository in 2011 and was selected by InfoWorld for the Technology of the Year Award in 2012. This presentation explains the basic theory and programming model central to Node's approach and will help you understand the resulting benefits and challenges. You can also watch this presentation at http://bit.ly/1362UGA
How Enterprise Architecture & Knowledge Graph Technologies Can Scale Business... (Semantic Web Company)
Organising data, for most of us, means Excel spreadsheets and folders upon folders. Knowledge graph technology, however, organises data in ways similar to the brain – through context and relations. By connecting your data, you (and also machines) are able to gain context within your knowledge, helping you to make informed decisions based on all of the information you already have.
So, how can enterprises benefit from this and scale?
PwC Sr. Research Fellow for Emerging Tech, Alan Morrison, and Sebastian Gabler, Head of Sales at Semantic Web Company, tackle the importance of Enterprise Knowledge Graphs and how these technologies scale business efficiency.
Learn about:
• Moving from application-centric development to data-centric approaches
• How enterprise architects can benefit from knowledge graphs: use cases
• Learn which use cases fit well to which type of graph, and which technologies are involved
• Understand how RDF helps with data integration.
• What is AI-assisted entity linking?
• Understand data virtualisation vs. materialisation
- Learn to understand what knowledge graphs are for
- Understand the structure of knowledge graphs (and how it relates to taxonomies and ontologies)
- Understand how knowledge graphs can be created using manual, semi-automatic, and fully automatic methods.
- Understand knowledge graphs as a basis for data integration in companies
- Understand knowledge graphs as tools for data governance and data quality management
- Implement and further develop knowledge graphs in companies
- Query and visualize knowledge graphs (including SPARQL and SHACL crash course)
- Use knowledge graphs and machine learning to enable information retrieval, text mining and document classification with the highest precision
- Develop digital assistants and question and answer systems based on semantic knowledge graphs
- Understand how knowledge graphs can be combined with text mining and machine learning techniques
- Apply knowledge graphs in practice: Case studies and demo applications
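Several of the topics above, SPARQL querying in particular, come down to matching triple patterns against a graph of (subject, predicate, object) statements. A minimal, library-free sketch of that idea (all data is hypothetical; a real deployment would use a SPARQL engine rather than this toy matcher):

```python
# A knowledge graph as a set of (subject, predicate, object) triples.
graph = {
    ("ex:Anna", "ex:worksFor", "ex:ACME"),
    ("ex:ACME", "ex:locatedIn", "ex:Vienna"),
    ("ex:Bob", "ex:worksFor", "ex:ACME"),
}

def match(graph, pattern):
    """Match one triple pattern against the graph; '?x'-style terms
    are variables. Returns a list of variable bindings, analogous to
    a SPARQL basic graph pattern with a single triple."""
    results = []
    for triple in graph:
        binding = {}
        for term, value in zip(pattern, triple):
            if term.startswith("?"):
                binding[term] = value
            elif term != value:
                break  # constant term does not match this triple
        else:
            results.append(binding)
    return results

# Analogous to: SELECT ?who WHERE { ?who ex:worksFor ex:ACME }
for b in sorted(match(graph, ("?who", "ex:worksFor", "ex:ACME")), key=str):
    print(b["?who"])
```

SPARQL generalises this to conjunctions of patterns, filters, and aggregation, and SHACL adds the complementary ability to validate that a graph conforms to expected shapes.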
Deep Text Analytics - How to extract hidden information and aboutness from text (Semantic Web Company)
- Deep Text Analytics (DTA) is an application of Semantic AI
- DTA fuses methods and algorithms taken from language modeling, corpus linguistics, machine learning, knowledge representation, and the Semantic Web
- The main areas of use cases for DTA are information retrieval, NLU, question answering, and recommender systems
Leveraging Knowledge Graphs in your Enterprise Knowledge Management System (Semantic Web Company)
Knowledge graphs and graph-based data in general are becoming increasingly important for addressing various data management challenges in industries such as financial services, life sciences, healthcare or energy.
At the core of this challenge is the comprehensive management of graph-based data, ranging from taxonomy to ontology management to the administration of comprehensive data graphs along with a defined governance framework. Various data sources are integrated and linked (semi) automatically using NLP and machine learning algorithms. Tools for securing high data quality and consistency are an integral part of such a platform.
PoolParty 7.0 can now handle a full range of enterprise data management tasks. Based on agile data integration, machine learning and text mining, or ontology-based data analysis, applications are developed that allow knowledge workers, marketers, analysts or researchers a comprehensive and in-depth view of previously unlinked data assets.
At the heart of the new release is the PoolParty GraphEditor, which complements the Taxonomy, Thesaurus, and Ontology Manager components that have been around for some time. All in all, data engineers and subject matter experts can now administrate and analyze enterprise-wide and heterogeneous data stocks with comfortable means, or link them with the help of artificial intelligence.
Unified views of business-critical information across all customer-facing processes and HR-related tasks are most relevant for decision makers.
In this talk we present a SharePoint extension that supports the automatic linking of unstructured content like Word documents with structured information from other databases, such as statistical data. As a result, decision makers have knowledge portals based on linked data at their fingertips.
While the importance of managed metadata and Term Store is clear to most SharePoint architects, the significance of a semantic layer outside of the content silos has not yet been explored systematically.
We will present a four-layered content architecture and will take a close look on some of the aspects of the semantic layer and its integration with SharePoint:
- Keeping Term Store and the semantic layer in sync
- Automatic tagging of SharePoint content
- Use of graph databases to store tags
- Entity-centric search & analytics applications
Metadata is most often stored per data source, and therefore it is meaningless outside of the silo. In this presentation, we will give a live demo of a SharePoint extension that makes use of an explicit semantic layer based on standards. This approach builds the basis to start linking data across the silos in a most agile way.
The resulting knowledge graph can start on a small scale, to develop continuously and to grow with the requirements. In this presentation we will give an example to illustrate how initially disconnected HR-related data (CVs in SharePoint; statistical data from labour market; skills and competencies taxonomies; salary spreadsheets) gets linked automatically, and is then made available through an extensive search & analytics application.
Slides based on a workshop held at SEMANTiCS 2018 in Vienna. Introduces a methodology for knowledge graph management based on Semantic Web standards, ranging from taxonomies over ontologies, mappings, graph and entity linking. Further topics covered: Semantic AI and machine learning, text mining, and semantic search.
Semantic Artificial Intelligence is the fusion of various types of AI, incl. symbolic AI, reasoning, and machine learning techniques like deep learning. At the same time, Semantic AI has a strong focus on data management and data governance. With the 'wedding' of various AI techniques, new promises are made, but fundamental approaches like 'Explainable AI (XAI)', knowledge graphs, and Linked Data also come more strongly into focus.
Bringing Machine Learning and Knowledge Graphs Together
Six Core Aspects of Semantic AI:
- Hybrid Approach
- Data Quality
- Data as a Service
- Structured Data Meets Text
- No Black-box
- Towards Self-optimizing Machines
The PoolParty Semantic Classifier is a component of the Semantic Suite, which makes use of machine learning in combination with Knowledge Graphs.
We discuss the potential of the fusion of machine learning, neural networks, and knowledge graphs based on use cases and this concrete technology offering.
We introduce the term 'Semantic AI' that refers to the combined usage of various AI methods.
Machines learn better with Semantics!
See how taxonomy management and the maintenance of knowledge graphs benefit from machine learning and corpus analysis, and how, in return, machine learning gets improved when using semantic knowledge models for further enrichment.
A quick introduction to taxonomies, and how they relate to ontologies and knowledge graphs. See how they can serve as part of a semantic layer in your information architecture. Learn which use cases can be developed based on this.
PoolParty GraphSearch - The Fusion of Search, Recommendation and Analytics (Semantic Web Company)
See how Cognitive Search works when based on Semantic Knowledge Graphs.
We showcase the latest developments and new features of PoolParty GraphSearch:
- Navigate a semantic knowledge graph
- Ontology-based data access (OBDA)
- Search over various search spaces: Ontology-driven facets including hierarchies
- Sophisticated autocomplete including context information
- Custom views on entity-centric and document-centric search results
- Linked data: put various tagging services such as TRIT or PoolParty Extractor in series and benefit from comprehensive semantic enrichment
- Statistical charts to explain results from unified data repositories quickly
- Plug-in system for various recommendation and matchmaking algorithms
This talk discusses how companies can apply semantic technologies to build cognitive applications. It examines the role of semantic technologies within the larger Artificial Intelligence (AI) technology ecosystem, with the aim of raising awareness of different solution approaches.
To succeed in a digital and increasingly self-service-oriented business environment, companies can no longer rely solely on IT professionals. Solutions like the PoolParty Semantic Suite utilize domain experts and business users to shape the cognitive intelligence of knowledge-driven applications.
Cognitive solutions essentially mimic how the human brain works. The search for cognitive solutions has challenged computer scientists for more than six decades. The research has matured to the extent that it has moved out of the laboratory and is now being applied in a range of knowledge-intensive industries.
There is no such thing as a single, all-encompassing “AI technology.” Rather, the large global professional technology community and software vendors are continuously developing a broad set of methods and tools for natural language processing and advanced data analytics. They are creating a growing library of machine learning algorithms to enhance the automated learning capabilities of computer systems. These emerging technologies need to be customized or combined with complementary solutions such as semantic knowledge graphs, depending on the use case.
A hybrid approach to cognitive computing, employing both statistical and knowledge-based models, will have a critical influence on the development of applications. Highly automated data processing based on sophisticated machine-learning algorithms must give end users the option to independently modify the functioning of smart applications in order to overcome the disadvantages associated with ‘black-box’ approaches.
This talk will give an overview over state-of-the-art smart applications, which are becoming a fusion of search, recommendation, and question-answer machines. We will cover specific use cases in focused knowledge domains, and we will discuss how this approach allows for AI-enabled use cases and application scenarios that are currently highly prioritized by corporate and digital business players.
In this engaging, 1-hour webinar (hosted by http://www.poolparty.biz and http://www.mekon.com), you will learn how to tailor information chunks to readers’ unique needs. We will talk about:
- Benefits and principles of granular structured content, and how to start preparing your own content for this new architecture.
- Best practices for linking structured content to standards-based taxonomies, and some pitfalls to avoid
- The underlying semantic architecture that you can work toward for a truly mature and scalable approach to linking content and data
- Key use cases that you can apply to your own organization
See how you can configure your linked data eco-system based on PoolParty's semantic middleware configurator. Benefit from Shadow Concept Extraction by making implicit knowledge visible. Combine knowledge graphs with machine learning and integrate semantics into your enterprise information systems.
Technical Deep Dive: Learn more about the most complete Semantic Middleware on the market. See how to integrate semantic services into your Enterprise Information Systems.
Taxonomies and Ontologies – The Yin and Yang of Knowledge Modelling (Semantic Web Company)
See how ontologies and taxonomies can play together to reach the ultimate goal, which is the cost-efficient creation and maintenance of an enterprise knowledge graph. The knowledge modelling methodology is supported by approaches taken from NLP, data science, and machine learning.
This talk addresses two questions: “How can the quality of taxonomies be defined?” and “How can it be measured?” See how quality criteria vary depending on how a taxonomy is applied, such as automatic content classification in ecommerce or a knowledge graph for data integration in enterprises. Distinguish between formal quality, structural properties, content coverage, and network topology. Investigate the advantages of standards-based and machine-processable SKOS taxonomies to be able to measure the quality of taxonomies automatically, as well as several tools and techniques for quality assessment.
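Structural properties of the kind mentioned above can indeed be computed automatically once a taxonomy is available in machine-processable form. A small sketch (the chosen metrics and the mini-taxonomy are illustrative assumptions, not the tooling discussed in the talk); it treats a taxonomy as a mapping from each concept to its skos:broader parent:

```python
def structural_report(broader):
    """Compute simple structural quality indicators for a taxonomy
    given as {concept: broader_concept_or_None}. Assumes the broader
    relation is acyclic, as SKOS best practice requires."""
    top = [c for c, b in broader.items() if b is None]

    def depth(concept):
        # Number of broader-steps from the concept up to a top concept.
        d = 0
        while broader[concept] is not None:
            concept = broader[concept]
            d += 1
        return d

    return {
        "concepts": len(broader),
        "top_concepts": len(top),
        "max_depth": max(depth(c) for c in broader),
    }

# Hypothetical mini-taxonomy.
taxonomy = {
    "Science": None,
    "Physics": "Science",
    "Quantum Physics": "Physics",
    "Chemistry": "Science",
    "Orphan Topic": None,  # a second, disconnected root: often a quality smell
}
print(structural_report(taxonomy))
```

Metrics like these (number of top concepts, hierarchy depth, orphan concepts) are exactly the kind of checks that become automatable when taxonomies are published in standards-based SKOS rather than ad-hoc formats.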
Consistency is crucial to a good user experience. Designers go to great lengths to create and test consistent visual designs. The structural design of an information environment, which is of equal importance to a good user experience, is too often ignored. Blumauer presents a “four-layered content architecture” for making sense of any information environment by clearly distinguishing between the content, metadata, and semantic layers and the navigation logic. He discusses several use cases for a taxonomy-driven user experience such as personalization or dynamically created topic pages.
2. LOD2 is a large-scale integrating project co-funded by the European
Commission within the FP7 Information and Communication Technologies
Work Programme. This 4-year project comprises leading Linked Open
Data technology researchers, companies, and service providers. Coming
from across 12 countries the partners are coordinated by the Agile
Knowledge Engineering and Semantic Web Research Group at the
University of Leipzig, Germany.
LOD2 will integrate and syndicate Linked Data with existing large-scale
applications. The project shows the benefits in the scenarios of Media and
Publishing, Corporate Data intranets and eGovernment.
3. Once per month the LOD2 webinar series offers a free webinar about
tools and services along the Linked Open Data Life Cycle.
Stay with us and learn more about acquisition, editing, composing,
connected applications – and finally publishing Linked Open Data.
5. 1. The LOD2 Stack: What, how, when?
2. Demo: enrichment with geospatial coordinates
3. Year 3 focus: a specialized LOD2 Stack for the Statistical Office
4. Questions
11. LOD2 stack contribution process
• The component owner builds a Debian package for the component
• The LOD2 stack maintainer adds the package to the LOD2 Stack testing repository
• Testing and feedback:
– Does the Debian package work?
– Does it break flows or the UI? Is it valid?
• Accepted packages are promoted to the LOD2 Stack stable repository
12. LOD2 stack prerequisites & support
• The component providers are responsible for the Debian package
– the knowledge and experience with packaging increases
– Improves the overall quality of the individual components
• The LOD2 stack standardizes on Ubuntu 12.04 LTS, Precise edition
– Other Linux distributions are in principle possible
– Our partner I2G, Poland, successfully installed a version on Debian
• Requires 2 GB RAM; at least 4 GB is better
– Many components are Tomcat-based web apps
– The footprint can be reduced by installing only the applications needed for your case
• Technical feedback on the stack via support-stack@lod2.eu
14. Demo Scenario
• Linked Data has the ability to merge and enrich your data.
• We show that, with a few steps, this promise comes close to reality.
16. The demo scenario
1. Upload the courts from vocabulary.wolterskluwer.de
2. Extract data from the German DBpedia
3. Link the datasets
4. Exploit the links to enrich the courts
5. Display the result
17. The demo scenario (components used)
1. Upload the courts from vocabulary.wolterskluwer.de (PoolParty)
2. Extract data from the German DBpedia (SPARQLED)
3. Link the datasets (SILK)
4. Exploit the links to enrich the courts (Virtuoso)
5. Display the result (Semantic Spatial Browser)
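Step 4, exploiting the links for enrichment, essentially means copying properties across the owl:sameAs links produced in step 3. A sketch of that idea (all URIs and values here are made up; the demo itself relies on the stack components listed above):

```python
def enrich(local, links, remote):
    """Copy properties from linked remote entities onto local ones.
    `links` maps a local URI to its owl:sameAs counterpart."""
    enriched = {uri: dict(props) for uri, props in local.items()}
    for uri, same_as in links.items():
        for prop, value in remote.get(same_as, {}).items():
            # Keep existing local values on conflict; only fill gaps.
            enriched[uri].setdefault(prop, value)
    return enriched

# Hypothetical data: a court from the vocabulary, linked to DBpedia.
courts = {"wkd:CourtLeipzig": {"rdfs:label": "Amtsgericht Leipzig"}}
links = {"wkd:CourtLeipzig": "dbpedia:Amtsgericht_Leipzig"}
dbpedia = {"dbpedia:Amtsgericht_Leipzig": {"geo:lat": "51.34", "geo:long": "12.37"}}

print(enrich(courts, links, dbpedia))
```

After this step each court carries geospatial coordinates it never had locally, which is what makes the spatial display in step 5 possible.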
19. More inter-component integration
• The consortium’s goal is higher inter-component integration
– Smoothen the information flow between components
• It is a challenge!
– Integration in a general context does not lead to high end-user satisfaction
• Focus on one domain
– Create supportive end-user process flows for this case
– They will be described, and very likely many of them will be applicable to other domains.
20. The Statistical Office
Selected because …
• many partners deal with statistical data in their projects
– The LOD2 consortium is involved in all key aspects:
• standardization of the schema (DataCube)
• publishing statistical data (e.g. http://eurostat.linked-statistics.org,
http://scoreboard.lod2.eu, http://rs.ckan.net/dataset?q=rdf )
• Tools (e.g. Cubeviz)
• real world statistical data is accessible for experimentation
• it offers excellent potential to combine components from LOD2 stack
• the data integration capabilities of RDF can be exploited
• the Linked Data paradigm is an enabler to go beyond classical statistics management
21. Example scenario
Downloading tabular data → Cleaning tabular data → Transforming tabular data into DataCube → Cleaning RDF → Harmonizing → Enrichment of the data → Publishing → Visualization of RDF
22. Example scenario (with LOD2 stack components)
• Downloading tabular data
• Cleaning tabular data: LOD-enabled Open Refine
• Transforming tabular data into DataCube
• Cleaning RDF: ORE, Sieve
• Harmonizing: PoolParty
• Enrichment of the data: SILK, Limes
• Publishing: CKAN
• Visualization: CubeViz
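The "Transforming tabular data into DataCube" step maps each table row to a qb:Observation carrying dimension and measure values. A minimal sketch of that mapping (the dataset URI, property names, and figures are hypothetical; a real pipeline would use proper RDF tooling):

```python
def row_to_datacube(dataset, row):
    """Turn one table row (dict of dimension/measure values) into
    triples following the W3C RDF Data Cube vocabulary pattern:
    one qb:Observation per row, linked to its qb:DataSet."""
    obs = f"{dataset}/obs/{row['geo']}-{row['year']}"
    return [
        (obs, "rdf:type", "qb:Observation"),
        (obs, "qb:dataSet", dataset),
        (obs, "ex:refArea", row["geo"]),                  # dimension
        (obs, "ex:refPeriod", str(row["year"])),          # dimension
        (obs, "ex:population", str(row["population"])),   # measure
    ]

# Hypothetical statistical record for one region and year.
triples = row_to_datacube(
    "ex:popDataset",
    {"geo": "RS", "year": 2011, "population": 7186862},
)
for s, p, o in triples:
    print(s, p, o)
```

Because every observation is addressed by its dimension values, datasets cubified this way can be sliced, merged across sources, and visualized by generic tools such as CubeViz.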
23. The Serbian Statistical Office using the LOD2 Stack
• Our partner IMP collaborates with the Serbian National Statistical Office to
make their information available to the public as Linked Data.
• The next video demonstrates how LOD2 Stack components are used
25. Publink 2013
• Free Linked Open Data consultancy for government-related organizations
– provides your organization with information and coaching around publishing Linked Open Data
• More details & application info at http://lod2.eu/Article/Publink.html
26. Jingle: R.E.M., Martin Kaltenböck, Florian Kondert
Coordination: Thomas Thurner, Martin Kaltenböck
Moderation: Martin Kaltenböck
Presented by Bert Van Nuffelen
The LOD2 Stack is realized through the efforts of many people in the LOD2 consortium:
Sebastian T, Valentina, Uros, Vuk, Helmut, Hugh, Robert, Mateja, Jan and many more ...
27. Hope you enjoyed staying with us – if you need more detailed
information, visit us at www.lod2.eu and let us know how we can
improve to meet your expectations!
Don’t forget to register for our next webinar
Jan 2013 - Zemanta
Feb 2013 - CKAN and publicdata.eu (Open Knowledge Foundation)
Have a great day and don’t forget ...