The document discusses using aspect-oriented techniques to modularize the development of the DBpedia ontology. It introduces aspect-oriented programming and ontologies. It proposes representing cross-cutting concerns like provenance, quality metadata, and alignments to external ontologies as aspects in the DBpedia ontology. This would enhance the development process while keeping the aspects separate and optional. The document outlines how aspects and pointcuts can be defined and connected to target modules using annotations. This allows modeling overlapping concerns and dynamically selecting ontology modules.
Talk at the 3rd DBpedia Community Meeting in Dublin about the integration of the Web Protégé ontology editor into DBpedia by the Corporate Semantic Web group at Freie Universität Berlin.
An Introduction to the Open Archives Initiative Object Reuse and Exchange (OAI-ORE) - Jenn Riley
Riley, Jenn. "An Introduction to the Open Archives Initiative Object Reuse and Exchange (OAI-ORE)." Digital Library Program Brown Bag Presentation, November 19, 2008.
There are four SPARQL query forms: SELECT, ASK, CONSTRUCT, and DESCRIBE, each serving a different purpose. SELECT returns variable bindings and is roughly analogous to an SQL SELECT query. ASK returns a boolean indicating whether a pattern matches. CONSTRUCT returns an RDF graph built from templates. DESCRIBE returns an RDF graph describing the resources found. Beyond their basic uses, the forms can support tasks like indexing, transformation, validation, and prototyping user interfaces.
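The SELECT/ASK distinction can be illustrated with a toy pattern matcher over in-memory triples. This is a didactic sketch, not a real SPARQL engine; the triples and names below are invented for illustration.

```python
# Toy illustration of SPARQL's SELECT vs. ASK semantics over an
# in-memory list of (subject, predicate, object) triples.

TRIPLES = [
    ("alice", "knows", "bob"),
    ("bob", "knows", "carol"),
]

def match(pattern, triple):
    """Match one pattern against one triple; '?x' terms are variables.
    Returns a bindings dict on success, None on mismatch."""
    bindings = {}
    for p, t in zip(pattern, triple):
        if p.startswith("?"):
            bindings[p] = t
        elif p != t:
            return None
    return bindings

def select(pattern, triples=TRIPLES):
    """SELECT: return all variable bindings (a list of dicts)."""
    return [b for b in (match(pattern, t) for t in triples) if b is not None]

def ask(pattern, triples=TRIPLES):
    """ASK: return a boolean - does any triple match at all?"""
    return any(match(pattern, t) is not None for t in triples)
```

For example, `select(("?who", "knows", "bob"))` yields one binding for `?who`, while `ask(("carol", "knows", "?x"))` is simply false: same matching machinery, different result shapes.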
Solving text search problems with Ruby on Rails - Andrii Gladkyi
Briefly describes typical text search tasks (e.g. full-text search, phrase matching) and possible solutions.
Reviews the most popular text search gems.
Mind The Gap - Mapping a domain model to a RESTful API - O'Reilly SACon 2018, ... - Tom Hofte
The document provides an agenda and overview for a 3.5 hour conference session on mapping a domain model to a RESTful web API. The session will cover discovering the domain model, mapping it to REST resources and operations, and other REST modeling topics. It introduces key REST concepts like resources, URIs, HTTP methods, and HATEOAS. It also discusses best practices like the Richardson Maturity Model and API style guides. Interactive exercises are planned to have attendees practice domain discovery and REST modeling for a sample airline booking case study.
How do you combine comprehensive analysis running on large amounts of data with the demand for responsiveness of today's API services?
This talk illustrates one of the recipes we currently use at ING to tackle this problem. Our analytical stack combines machine learning algorithms running on a Hadoop cluster with API services executed by an Akka cluster.
Cassandra is used as a 'latency adapter' between the fast and the slow path. Our API services are executed by the Akka/Spray layer. Those services consume both live data sources and intermediate results promoted by the Hadoop layer via Cassandra. This approach allows us to provide internal API services that are both complete and responsive.
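The 'latency adapter' pattern described here can be sketched minimally in plain Python: a slow batch path promotes precomputed results into a low-latency store, and the serving layer merges them with live signals. Plain dicts stand in for Cassandra and the Akka/Spray service layer; all names are illustrative, not from the talk.

```python
# Sketch of the fast/slow-path pattern: batch results are 'promoted'
# into a key-value store, live events arrive separately, and the API
# layer merges both at request time.

batch_results = {}   # written by the slow (Hadoop) path
live_events = {}     # written by the fast (streaming) path

def promote(user_id, recommendation):
    """Slow path: publish an intermediate batch result for a user."""
    batch_results[user_id] = recommendation

def record_live(user_id, event):
    """Fast path: record a live signal for a user."""
    live_events.setdefault(user_id, []).append(event)

def serve(user_id):
    """API service: combine the latest batch result with live signals,
    so responses are both complete (batch) and fresh (live)."""
    return {
        "batch": batch_results.get(user_id),
        "live": live_events.get(user_id, []),
    }
```

The point of the design is that the serving path never waits on the batch path; it only reads whatever the batch layer last promoted.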
Applied Machine Learning using H2O, Python and R Workshop - Avkash Chauhan
Note: Get all workshop content at - https://github.com/h2oai/h2o-meetups/tree/master/2017_02_22_Seattle_STC_Meetup
Prerequisites: basic knowledge of R/Python and general ML concepts
Note: This is a bring-your-own-laptop workshop. Make sure you bring your laptop in order to participate.
Level: 200
Time: 2 Hours
Agenda:
- Introduction to ML, H2O and Sparkling Water
- Refresher of data manipulation in R & Python
- Supervised learning
---- Understanding the linear regression model with an example
---- Understanding binomial classification with an example
---- Understanding multinomial classification with an example
- Unsupervised learning
---- Understanding k-means clustering with an example
- Using machine learning models in production
- Sparkling Water Introduction & Demo
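The k-means item on the agenda above can be sketched from scratch in a few lines; this is a 1-D teaching illustration with invented data, not the H2O API the workshop uses.

```python
# Minimal 1-D k-means: repeatedly assign points to the nearest
# centroid, then recompute each centroid as the mean of its cluster.
def kmeans_1d(points, centroids, iterations=10):
    for _ in range(iterations):
        clusters = {c: [] for c in centroids}
        for p in points:
            nearest = min(centroids, key=lambda c: abs(c - p))
            clusters[nearest].append(p)
        # Recompute centroids; keep a centroid unchanged if its
        # cluster ended up empty.
        centroids = [
            sum(members) / len(members) if members else c
            for c, members in clusters.items()
        ]
    return sorted(centroids)
```

With two well-separated groups of points and two starting centroids, the loop converges to the group means within a couple of iterations.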
Serverless with Spring Cloud Function, Knative and riff #SpringOneTour #s1t - Toshiaki Maki
This document summarizes a presentation about serverless computing using Spring Cloud Function, Knative, and riff. It discusses what serverless computing is, an overview of Spring Cloud Function for developing serverless applications, and how Knative and riff can be used as platforms to deploy serverless workloads on Kubernetes. Code examples are provided to demonstrate invoking functions via HTTP and messaging with Spring Cloud Function and deploying functions to Knative and riff.
Scio is a Scala API for Google Cloud Dataflow that provides a simplified wrapper compared to native Dataflow APIs. It allows Spotify to process large datasets for tasks like personalized music recommendations using a functional programming style. Scio handles tasks like computing word counts and PageRank on Dataflow and is used by Spotify to generate weekly recommendations from 100GB of data and analyze user conversion patterns from 150GB datasets. The goal of Scio is to make Dataflow more usable and scalable for data processing while maintaining simplicity over optimization.
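The word-count task mentioned above has the classic pipeline shape (split, flatten, count). Here is that shape in plain functional-style Python; this runs in memory and is only an analogy to the Scio/Dataflow API, which is not used here.

```python
# Functional-style word count mirroring a flatMap -> countByValue
# pipeline: expand lines into words, then aggregate counts.
from collections import Counter

def word_count(lines):
    """Split lines into lowercase words and count occurrences."""
    words = (w for line in lines for w in line.lower().split())
    return Counter(words)
```

In a real Dataflow/Scio job the same two steps run distributed over workers; the in-memory version just makes the data flow visible.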
SWORD is a protocol for depositing content into repositories. It is a lightweight profile of the Atom Publishing Protocol that defines a set of mandatory and optional parameters for repository deposit. The SWORD protocol has been implemented in several repositories including DSpace, EPrints, IntraLibrary and Fedora. It also has several Java client implementations. SWORD aims to provide a standard way for content to be deposited into repositories from a variety of sources through a simple web service interface.
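Since SWORD profiles the Atom Publishing Protocol, a minimal deposit is an HTTP POST of an Atom entry to a collection URI. The sketch below only builds such an entry with the standard library; the endpoint and metadata are invented, and a real repository's service document defines the actual deposit URLs and required SWORD headers.

```python
# Build a minimal Atom entry document of the kind a SWORD deposit
# would POST to a repository collection.
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"

def make_atom_entry(title, author):
    """Serialize a minimal Atom entry with a title and an author."""
    ET.register_namespace("", ATOM)
    entry = ET.Element(f"{{{ATOM}}}entry")
    ET.SubElement(entry, f"{{{ATOM}}}title").text = title
    name = ET.SubElement(ET.SubElement(entry, f"{{{ATOM}}}author"),
                         f"{{{ATOM}}}name")
    name.text = author
    return ET.tostring(entry, encoding="unicode")

# The deposit itself would look roughly like this (not executed,
# endpoint invented):
#   POST /sword/collection HTTP/1.1
#   Content-Type: application/atom+xml;type=entry
#   <entry body as produced above>
```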
A Smarter Pig: Building a SQL interface to Pig using Apache Calcite - Salesforce Engineering
This document summarizes a presentation about building a SQL interface for Apache Pig using Apache Calcite. It discusses using Calcite's query planning framework to translate SQL queries into Pig Latin scripts for execution on HDFS. The presenters describe their work at Salesforce using Calcite for batch querying across data sources, and outline their process for creating a Pig adapter for Calcite, including implementing Pig-specific operators and rules for translation. Lessons learned include that Calcite provides flexibility but documentation could be improved, and examples from other adapters were helpful for their implementation.
The document describes designing and implementing a database-centric REST API using PL/SQL and Node.js. It discusses designing the API resources and operations, creating a formal API specification, developing a mock implementation, and connecting a Node.js application to an Oracle database using the node-oracledb driver. The implementation exposes a database PL/SQL package containing employee data as JSON structures via the REST API. Push notifications are also implemented to update clients in real-time of changes in the database, such as new votes in an election.
This document provides an overview of REST and discusses why REST is used. It introduces key REST concepts like resources, operations, and JSON payloads. It then demonstrates how to create a RESTful server using a framework, highlighting the complexity involved. Finally, it discusses handling business logic and security with REST and introduces an alternative approach using DatTricityMation that aims to simplify REST development.
The document discusses the development of SWORD (Simple Web-service Offering Repository Deposit), a standard for depositing content into repositories. It describes how SWORD was motivated by the need for a common deposit interface and outlines its goals of improving repository population and interoperability. The document also reviews SWORD's technical outputs, including deposit clients and protocols, and discusses lessons learned around maintaining momentum in standard development.
SAP Business Objects XIR3.0/3.1, BI 4.0 & 4.1 Course Content
SAP Business Objects Web Intelligence and BI Launch Pad 4.0
Introducing Web Intelligence
BI launch pad: What's new in 4.0
Customizing BI launch pad
Creating Web Intelligence Documents with Queries
Restricting Data Returned by a Query
Report Design in the Java Report Panel
Enhancing the Presentation of Reports
Formatting Reports
Creating Formulas and Variables
Synchronizing Data
Analyzing Data
Drilling
Filtering data
Alerts
Input Control
Scheduling (email)
Data Refresh introduction
Sharing Web Intelligence Documents
SAP Business Objects BI Information Design Tool 4.0
Create a project
Create a connection to a relational database
Create a data foundation based on a single source relational database
Create a business layer based on a single relational data source
Publish a new universe file based on a single data source
Retrieve a universe from a repository location
Publish a universe to a local folder
Retrieve a universe from a local folder
Open a local project
Delete a local project
Convert a repository universe from a UNV to a UNX
Convert a local universe from a UNV to a UNX
Connecting to Data Sources
Create a connection shortcut
View and filter data source values in the connection editor
Create a connection to an OLAP data source
Create a BICS connection to SAP BW for client tools
Create a relational connection to SQL Server using OLEDB providers
Building the Structure of a Universe
Arrange tables in a data foundation
View table values in a data foundation
View values from multiple tables in a data foundation
Filter table values in a data foundation
Filter values from multiple tables in a data foundation
Apply a wildcard to filter table values in a data foundation
Apply a wildcard to filter values from multiple tables in a data foundation
Sort and re-order table columns in a data foundation
Edit table values in a data foundation
Create an equi-join, theta join, outer join, shortcut join
Create a self-restricting join using a column filter
Modify and remove a column filter
Detect join cardinalities in a data foundation
Manually set join cardinalities in a data foundation
Refresh the structure of a universe
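The join types listed above (equi-join, theta join, outer join) can be illustrated in plain SQL; the sketch below uses the standard-library sqlite3 module with invented tables. Note that the 'shortcut join' is a universe design concept in the BusinessObjects tool, not a SQL keyword.

```python
# Demonstrate equi-, theta, and outer joins on two tiny tables.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer (id INTEGER, name TEXT);
    CREATE TABLE orders (cust_id INTEGER, total REAL);
    INSERT INTO customer VALUES (1, 'Ann'), (2, 'Bob');
    INSERT INTO orders VALUES (2, 75.0);
""")

# Equi-join: match rows on equality of key columns.
equi = conn.execute(
    "SELECT name, total FROM customer JOIN orders ON id = cust_id"
).fetchall()

# Theta join: join on a non-equality condition.
theta = conn.execute(
    "SELECT name, total FROM customer JOIN orders ON id < cust_id"
).fetchall()

# Outer join: keep customers even when they have no matching order.
outer = conn.execute(
    "SELECT name, total FROM customer LEFT OUTER JOIN orders "
    "ON id = cust_id ORDER BY name"
).fetchall()
```

The three result sets differ: the equi-join keeps only Bob (the matching key), the theta join pairs Ann with Bob's order via the inequality, and the outer join keeps Ann with a NULL total.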
Creating the Business Layer of a Universe
Create business layer folders and subfolders
Create a business layer folder and objects automatically from a table
Create a business layer subfolder and objects automatically from a table
Create dimension objects automatically from a table
Create a dimension, attribute, and measure
Hide folders and objects in a business layer
Organize folders and subfolders in a business layer
View table and object dependencies
Create a custom navigation path
Create a dimensional business layer from an OLAP data source
Copy and paste folders and objects in a business layer
Filtering Data in Objects
Create a pre-defined
SubSift: a novel application of the vector space model to support the academi... - Simon Price
Paper presentation at the Workshop on Applications of Pattern Analysis, August 2011, Windsor. SubSift matches submitted conference or journal papers to potential peer reviewers based on the similarity between the paper's abstract and the reviewer's publications as found in online bibliographic databases such as Google Scholar. Using concepts from information retrieval including a bag-of-words representation and cosine similarity, the SubSift tools were originally created to streamline the peer review process for the ACM SIGKDD'09 data mining conference. This paper describes how these tools were subsequently developed and deployed in the form of web services designed to support not only peer review but also personalised data discovery and mashups. SubSift has already been used by several major data mining conferences and interesting applications in other fields are now emerging.
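The bag-of-words and cosine-similarity idea described above can be sketched with the standard library alone; this is an illustration of the general technique with toy inputs, not SubSift's actual code.

```python
# Cosine similarity between two texts represented as
# term-frequency (bag-of-words) vectors.
import math
from collections import Counter

def cosine(doc_a, doc_b):
    """Similarity in [0, 1] between two whitespace-tokenized texts."""
    a = Counter(doc_a.lower().split())
    b = Counter(doc_b.lower().split())
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0
```

Matching a paper abstract against a reviewer's publication list then reduces to ranking reviewers by this score (real systems typically add TF-IDF weighting on top of raw term frequencies).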
Mopuru Babu has over 9 years of experience in software development using Java technologies and 3 years of experience in Hadoop development. He has extensive experience designing, developing, and deploying multi-tier, enterprise-level distributed applications. He has expertise in technologies like Hadoop, Hive, Pig, and Spark, and in frameworks like Spring and Struts. He has worked on both small and large projects for clients in various industries.
The document discusses the REST (Representational State Transfer) architectural style. It defines key REST concepts like resources, representations, self-descriptive messages, and hypermedia as the engine of application state. It also outlines different REST sub-styles and constraints like client-server architecture, statelessness, and uniform interfaces. The document provides examples of how to design RESTful systems using services as resources and hiding domain models behind active resources.
Why Apache Flink is the 4G of Big Data Analytics Frameworks - Slim Baltagi
This document provides an overview and agenda for a presentation on Apache Flink. It begins with an introduction to Apache Flink and how it fits into the big data ecosystem. It then explains why Flink is considered the "4th generation" of big data analytics frameworks. Finally, it outlines next steps for those interested in Flink, such as learning more or contributing to the project. The presentation covers topics such as Flink's APIs, libraries, architecture, programming model and integration with other tools.
A collection of OSGi/Equinox bundles/components for development of extensible multiuser Web applications with complex domain model and application logic.
Data scientists and machine learning practitioners nowadays seem to be churning out models by the dozen, and they continuously experiment to find ways to improve their accuracy. They also use a variety of ML and DL frameworks and languages, and a typical organization may find that this results in a heterogeneous, complicated collection of assets that require different types of runtimes, resources, and sometimes even specialized compute to operate efficiently.
But what does it mean for an enterprise to actually take these models to "production"? How does an organization scale inference engines out and make them available for real-time applications without significant latencies? Different techniques are needed for batch (offline) inference and instant, online scoring. Data needs to be accessed from various sources, and cleansing and transformation of the data need to happen before any predictions. In many cases, there may be no substitute for customized data handling with scripting, either.
Enterprises also require auditing and authorization built in, as well as approval processes, while still supporting a "continuous delivery" paradigm whereby a data scientist can enable insights faster. Not all models are created equal, nor are the consumers of a model, so enterprises require both metering and allocation of compute resources for SLAs.
In this session, we will take a look at how machine learning is operationalized in IBM Data Science Experience (DSX), a Kubernetes-based offering for the private cloud, optimized for the Hortonworks Hadoop Data Platform. DSX essentially brings typical software engineering development practices to data science, organizing the dev->test->production flow for machine learning assets in much the same way as typical software deployments. We will also see what it means to deploy, monitor accuracies, and even roll back models and custom scorers, as well as how API-based techniques enable consuming business processes and applications to remain relatively stable amidst all the chaos.
Speaker
Piotr Mierzejewski, Program Director Development IBM DSX Local, IBM
The document proposes a Semantic DESCription as a Service (SemDESCaaS) concept to enable semantic annotations for resources independent of their type. It extends the existing DESCaaS concept to generate semantic descriptions using a Resource Model Translator. SemDESCaaS implementations would be Web services that provide interlinked semantic descriptions in ontologies and WSDL formats for any resource via URLs following a pattern. The concept is conceptualized and future work involves prototyping it and adapting it to additional use cases.
The document outlines the course content for an OBIEE course, which includes topics such as data warehousing concepts, dimensional modeling, the OBIEE overview, repository basics, building the physical layer, the business model and mapping layer, the presentation layer, utilities and wizards, caching, config files, variables, security, analysis, dashboards, prompts, administering the presentation catalog, using OBIEE delivers, the MUDE, catalog manager, and additional features such as FAQs, real-time repositories, performance tuning, scenarios, resumes, certification FAQs, and interview preparation.
The document discusses data discovery, conversion, integration and visualization using RDF. It covers topics like ontologies, vocabularies, data catalogs, converting different data formats to RDF including CSV, XML and relational databases. It also discusses federated SPARQL queries to integrate data from multiple sources and different techniques for visualizing linked data including analyzing relationships, events, and multidimensional data.
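The CSV-to-RDF conversion mentioned above can be sketched with the standard library: each row becomes a subject, each column a predicate, emitted as N-Triples. The base URI and column names below are invented for illustration; real conversions typically add datatypes and vocabulary mappings (e.g. via R2RML or CSVW).

```python
# Convert CSV rows into N-Triples lines (plain string literals only).
import csv
import io

BASE = "http://example.org/"

def csv_to_ntriples(csv_text, key_column):
    """One triple per non-key cell: <base/key> <base/column> "value" ."""
    lines = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        subject = f"<{BASE}{row[key_column]}>"
        for column, value in row.items():
            if column != key_column:
                lines.append(f'{subject} <{BASE}{column}> "{value}" .')
    return lines
```

The resulting lines can be loaded into any triple store and then integrated with other sources through federated SPARQL queries, as the talk describes.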
1. Modular Development of the DBpedia Ontology with Ontology Aspects and Web Protégé
Ralph Schäfermeier, Adrian Paschke,
Alexandru Todor
Corporate Semantic Web
Institute for Computer Science
Freie Universität Berlin
4th DBpedia Community Meeting
@BIS 2015 in Poznan
2. Ralph Schäfermeier, Adrian Paschke Aspect-Orientation in the DBpedia ontology DBpedia Community Meeting 2015 Poznan
• Motivation
• Aspect-Oriented Programming
• Aspect-Oriented Ontologies
• Aspects in the DBpedia ontology
Outline
3.
• Provenance metadata
• Quality metadata
• differs between different language versions
• Alignment to external ontologies
• possibly contradicting
• Enhance overall development/evolution process
• but unobtrusive
Motivation
4.
• Used for separation of cross-cutting concerns
• Decomposition of a system based on functional and non-
functional requirements
• Generally provides semantics for:
• describing a module
• re-combining modules at runtime
Aspect-Oriented Programming
5.
Example: Authentication
void makeDeposit(Account a, float amount) {
    AuthService as = getAuthService();
    if (!as.authenticated(a.user))
        as.authenticate(a.user);
    a.balance += amount;
}

void makeWithdrawal(Account a, float amount) {
    AuthService as = getAuthService();
    if (!as.authenticated(a.user))
        as.authenticate(a.user);
    a.balance -= amount;
}
6.
Example: Authentication
void makeDeposit(Account a, float amount) {
    AuthService as = getAuthService();
    if (!as.authenticated(a.user))
        as.authenticate(a.user);
    a.balance += amount;
}

void makeWithdrawal(Account a, float amount) {
    AuthService as = getAuthService();
    if (!as.authenticated(a.user))
        as.authenticate(a.user);
    a.balance -= amount;
}

Actual business logic
7.
Example: Authentication
void makeDeposit(Account a, float amount) {
    AuthService as = getAuthService();
    if (!as.authenticated(a.user))
        as.authenticate(a.user);
    a.balance += amount;
}

void makeWithdrawal(Account a, float amount) {
    AuthService as = getAuthService();
    if (!as.authenticated(a.user))
        as.authenticate(a.user);
    a.balance -= amount;
}

Recurring authentication concern → strong inter-dependencies
8.
Example: Authentication Aspect
Aspect Authentication() {
    AuthService as = getAuthService();
    if (!as.authenticated(a.user))
        as.authenticate(a.user);
}

void makeWithdrawal(Account a, float amount) {
    a.balance -= amount;
}

void makeDeposit(Account a, float amount) {
    a.balance += amount;
}

Separation into self-contained aspect
9.
Example: Authentication Aspect
Aspect Authentication() {
    AuthService as = getAuthService();
    if (!as.authenticated(a.user))
        as.authenticate(a.user);
}

@before Authentication
void makeWithdrawal(Account a, float amount) {
    a.balance -= amount;
}

@before Authentication
void makeDeposit(Account a, float amount) {
    a.balance += amount;
}

Separation into self-contained aspect
Connection via "Join Points"
10.
• A pointcut is a set of join points defined by quantification
Two Principles: Quantification and Obliviousness
∀ m(p1, …, pn) ∈ M :
s(m(p1, …, pn)) → (m(p1, …, pn) → a(p1, …, pn))
[Steimann 2005, Rashid 2006]
m(p1, …, pn) ∈ M: a method adhering to the signature m(p1, …, pn),
M: the set of all methods defined in the software system,
s: a query specifying a matching criterion,
a(p1, …, pn): the execution of the aspect with all the parameters of
each method, respectively
11.
• A pointcut is a set of join points defined by quantification
Two Principles: Quantification and Obliviousness
∀ m(p1, …, pn) ∈ M :
s(m(p1, …, pn)) → (m(p1, …, pn) → a(p1, …, pn))
[Steimann 2005, Rashid 2006]
• Example:
void makeDeposit(Account a, float amount) {…}
void makeWithdrawal(Account a, float amount) {…}

@Aspect class AuthenticationAspect {
    @Before("execution(*.make*(Account,float))")
    void authenticate(Account a, float amount) {
        AuthService as = getAuthService();
        if (!as.authenticated(a.user))
            as.authenticate(a.user);
    }
}
Pointcut = abstract definition of a set of join points
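The quantification principle can be sketched in Python: a pointcut is a name pattern quantified over a set of functions, and every match gets the advice woven in. All names here (the registry, apply_pointcut, the pattern "make_*") are illustrative assumptions, not from the slides.

```python
import fnmatch

registry = {}  # stands in for M, the set of all methods in the system

def register(fn):
    registry[fn.__name__] = fn
    return fn

@register
def make_deposit(balance, amount):
    return balance + amount

@register
def make_withdrawal(balance, amount):
    return balance - amount

@register
def report_balance(balance, amount=0):
    return balance

calls = []

def advice(name):
    calls.append(name)   # stands in for the authentication advice a(p1, …, pn)

def apply_pointcut(pattern):
    """Quantification s: wrap every registered function whose name matches."""
    for name, fn in list(registry.items()):
        if fnmatch.fnmatch(name, pattern):
            def wrapped(*args, _fn=fn, _name=name, **kw):
                advice(_name)          # run advice before the join point
                return _fn(*args, **kw)
            registry[name] = wrapped

apply_pointcut("make_*")   # analogous to execution(*.make*(Account,float))

print(registry["make_deposit"](100, 50))   # 150 (advice ran)
print(registry["report_balance"](100))     # 100 (not matched, no advice)
print(calls)                               # ['make_deposit']
```

The targets remain oblivious: nothing in make_deposit or make_withdrawal mentions the advice.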
12.
• Improvement of reasoning and query result retrieval performance
• Scalability for ontology evolution and maintenance
• Complexity management
• Amelioration of understandability
• (Partial) reuse
• Context-awareness
• Personalization
[Parent and Spaccapietra, 2009]
Modular Ontologies: Motivation
13.
Aspect-Oriented Ontology Development: Overview

As described in Section 2.4, the connections between aspects and their targets are established by using an extensional description (by providing a complete list), or intensionally (by specifying the properties of the targets using a query or quantification). Functionality for the first variant involves manually creating appropriate annotation axioms in a plugin for the Protégé ontology editor (see Figure 2, middle).

[Figure 2: Two-stage workflow, Definition and Selection. Original ontology → query-based aspect annotation (by query over a selected context) or manual annotation/editing in the tool → ontology with aspect annotations → aspect-based module selection (driven by aspect names or descriptions) → ontology module. Caption: Our approach to aspect-oriented ontology modularization. Axioms of the original ontology are annotated with entities from an external aspect ontology (center). Axiom selection is based on queries or done manually. Module extraction happens dynamically, as a particular part of the ontology is requested.]
14.
Ontology of Aspects
[Figure (a): The aspect meta-model. An Aspect is linked to an Advice via hasAdvice and to a Pointcut via hasPointcut (rdfs:domain Aspect; rdfs:range Advice and Pointcut, both of rdf:type Graph), with Aspect ≡ hasAdvice some Advice and hasPointcut some Pointcut.]

[Figure (b): Custom extension of the original ontology for representing a temporal aspect: Kosovo recognizedBy Recognition_1 (recognizingEntity, self_governing), whose validity is a DateTimeInterval Interval_1 with hasBeginning "2008-02-18T00:00:00".]

[Figure (c): Modeling of the extension as an aspect by using an axiom annotation.]

[Figure (d): A taxonomy of aspects, connected to axioms via the annotation property hasAspect and rdfs:subPropertyOf relations: BuiltInAspect (e.g. ReasoningComplexity, AccessRestriction) and ExternalAspect (e.g. Compatibility, ProvenanceBasedTrust ≡ Provenance and Trust, CustomerSpecificFeature).]
15.
Aspect-Oriented Ontologies
16.
RDF example: Provenance
@prefix : <http://www.example.org/aspect123#> .
@prefix this: <http://www.example.org/aspect123> .
...
this: {
    this: a aspect:Aspect ;
        aspect:hasPointcut :pointcut ;
        aspect:hasAdvice :advice .
}
:pointcut {
    dbpedia:Barack_Obama dbpedia_prop:spouse dbpedia:Michelle_Obama .
}
:advice {
    :pointcut prov:generatedAtTime "2004-07-29T10:38:00Z"^^xsd:dateTime .
    :pointcut prov:entity <http://de.wikipedia.org/wiki/Barack_Obama> .
    :pointcut prov:wasAttributedTo _:a .
    _:a foaf:homepage <http://de.wikipedia.org/wiki/user:Mathias_Schindler> .
}
17.
Generating views by querying aspects with named graphs
PREFIX aspect: <http://www.corporate-semantic-web.de/ontologies/aspects#>
PREFIX : <…>

SELECT ?G ?s ?p ?o WHERE {
    { GRAPH ?G { : a aspect:Aspect } }
    UNION
    { GRAPH ?H { : a aspect:Aspect .
                 { : aspect:hasPointcut ?G } UNION { : aspect:hasAdvice ?G } } }
    GRAPH ?G { ?s ?p ?o }
}
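To make the intent of such a view query concrete without a triple store, here is a small Python simulation of the same selection logic, assuming named graphs are represented as plain dicts of triples; all identifiers are illustrative, not an actual API.

```python
# Named graphs as a dict: graph name -> set of (s, p, o) triples.
graphs = {
    "this":     {("this", "rdf:type", "aspect:Aspect"),
                 ("this", "aspect:hasPointcut", "pointcut"),
                 ("this", "aspect:hasAdvice", "advice")},
    "pointcut": {("dbpedia:Barack_Obama", "dbpedia_prop:spouse",
                  "dbpedia:Michelle_Obama")},
    "advice":   {("pointcut", "prov:wasAttributedTo", "_:a")},
    "other":    {("x", "y", "z")},   # unrelated graph, must not be selected
}

def aspect_view(aspect):
    """Collect triples from the aspect graph plus its pointcut/advice graphs,
    mirroring the UNION branches of the SPARQL query."""
    selected = set(graphs.get(aspect, set()))
    for s, p, o in graphs.get(aspect, set()):
        if p in ("aspect:hasPointcut", "aspect:hasAdvice"):
            selected |= graphs.get(o, set())
    return selected

view = aspect_view("this")
print(("dbpedia:Barack_Obama", "dbpedia_prop:spouse",
       "dbpedia:Michelle_Obama") in view)   # True
print(("x", "y", "z") in view)              # False
```

The view contains exactly the aspect's own metadata plus the pointcut and advice graphs it points to, which is the module the query extracts.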
18.
Arbitrarily many overlapping concerns
encapsulated in aspects
19.
Arbitrarily many overlapping concerns
encapsulated in aspects
:pointcut prov:generatedAtTime "2004-07-29T10:38:00Z"^^xsd:dateTime .
:pointcut prov:entity <http://de.wikipedia.org/wiki/Barack_Obama> .
:pointcut prov:wasAttributedTo _:a .
_:a foaf:homepage <http://de.wikipedia.org/wiki/user:Mathias_Schindler> .
provenance
20.
Arbitrarily many overlapping concerns
encapsulated in aspects
:pointcut prov:generatedAtTime "2004-07-29T10:38:00Z"^^xsd:dateTime .
:pointcut prov:entity <http://de.wikipedia.org/wiki/Barack_Obama> .
:pointcut prov:wasAttributedTo _:a .
_:a foaf:homepage <http://de.wikipedia.org/wiki/user:Mathias_Schindler> .
provenance temporal semantics
21.
Arbitrarily many overlapping concerns
encapsulated in aspects
:pointcut prov:generatedAtTime "2004-07-29T10:38:00Z"^^xsd:dateTime .
:pointcut prov:entity <http://de.wikipedia.org/wiki/Barack_Obama> .
:pointcut prov:wasAttributedTo _:a .
_:a foaf:homepage <http://de.wikipedia.org/wiki/user:Mathias_Schindler> .
http://trustyuri.net/
RAq2h8A83lGbT2015ua85K
provenance   temporal semantics
digital signature
22.
Arbitrarily many overlapping concerns
encapsulated in aspects
:pointcut prov:generatedAtTime "2004-07-29T10:38:00Z"^^xsd:dateTime .
:pointcut prov:entity <http://de.wikipedia.org/wiki/Barack_Obama> .
:pointcut prov:wasAttributedTo _:a .
_:a foaf:homepage <http://de.wikipedia.org/wiki/user:Mathias_Schindler> .
http://trustyuri.net/
RAq2h8A83lGbT2015ua85K
schema.org
DUL
…
provenance   temporal semantics
digital signature   alignments to external ontologies
23.
Protégé-Plugin
24.
Web Protégé: Aspect as Modeling Context
• List of preconfigured named aspects
• Selected aspects are added to all axioms
25.
• universal
• unobtrusive
• use would be optional
• easy to use for ontology modelers
• reasoning on meta-model for modules possible (e.g. module selection based on complex time constraints)
Benefits
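The last benefit can be illustrated with a small, hypothetical Python sketch: module selection driven by a provenance-timestamp constraint on each axiom's aspect metadata. The metadata layout and names here are assumptions for illustration, not the slides' actual meta-model.

```python
from datetime import datetime

# Hypothetical aspect metadata per axiom, reusing the prov:generatedAtTime
# property from the earlier provenance example (timestamps are made up).
aspects = {
    "axiom_1": {"prov:generatedAtTime": "2004-07-29T10:38:00+00:00"},
    "axiom_2": {"prov:generatedAtTime": "2015-03-18T09:00:00+00:00"},
}

def select_module(since):
    """Return the axioms whose provenance timestamp is not older than `since`
    (a simple time constraint on the module meta-model)."""
    cutoff = datetime.fromisoformat(since)
    return [ax for ax, meta in aspects.items()
            if datetime.fromisoformat(meta["prov:generatedAtTime"]) >= cutoff]

print(select_module("2010-01-01T00:00:00+00:00"))   # ['axiom_2']
```

A reasoner over the aspect ontology could answer richer constraints (intervals, overlaps) the same way; this sketch only shows the simplest case.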
26.
Thanks
27.
[d'Aquin 2012] Mathieu d'Aquin. Modularizing Ontologies. In Mari Carmen Suárez-Figueroa,
Asunción Gómez-Pérez, Enrico Motta, and Aldo Gangemi, editors, Ontology
Engineering in a Networked World, pages 213–233. Springer Berlin Heidelberg, 2012.
[Priss 2008] Uta Priss. Facet-like Structures in Computer Science. Axiomathes, 18(2):243–
255, June 2008.
[Steimann 2005] Friedrich Steimann. Domain Models Are Aspect Free. In Lionel Briand and
Clay Williams, editors, Model Driven Engineering Languages and Systems, number 3713 in
Lecture Notes in Computer Science, pages 171–185. Springer Berlin Heidelberg, January
2005.
[Rashid 2006] Awais Rashid and Ana Moreira. Domain Models Are NOT Aspect Free. In
Oscar Nierstrasz, Jon Whittle, David Harel, and Gianna Reggio, editors, Model Driven
Engineering Languages and Systems, number 4199 in Lecture Notes in Computer Science,
pages 155–169. Springer Berlin Heidelberg, January 2006.
[Konev 2009] Boris Konev, Carsten Lutz, Dirk Walther, and Frank Wolter. Formal Properties
of Modularisation. In Heiner Stuckenschmidt, Christine Parent, and Stefano Spaccapietra,
editors, Modular Ontologies, number 5445 in Lecture Notes in Computer Science, pages
25–66. Springer Berlin Heidelberg, January 2009.
References