Some tools developed at OEG (Ontology Engineering Group) for facilitating ontology engineering activities such as evaluation, documentation, release and publication.
Make our Scientific Datasets Accessible and Interoperable on the Web - Franck Michel
The presentation investigates the challenges that we must face to share scientific datasets on the Web following the Linked Open Data principles. We present the standards of the Semantic Web and investigate how they can help address those challenges. We give tips as to how to choose vocabularies to describe data and metadata, link datasets to other related datasets by making appropriate alignments, translate existing data sources to RDF and publish it on the Web as linked data.
These slides were presented at the "graph databases in life sciences workshop". There is an accompanying Neo4j guide that walks you through importing data into Neo4j using web services from a number of databases at EMBL-EBI.
https://github.com/simonjupp/importing-lifesci-data-into-neo4j
Data integration is intrinsic to how modern research is undertaken in areas such as genomics, drug development and personalised medicine. To better enable this integration a large number of biomedical ontologies have been developed to provide standard semantics for describing metadata. There are now several hundred biomedical ontologies in widespread use that describe concepts such as genes, molecules, drugs and diseases. This amounts to millions of terms that are interconnected via relationships that naturally form a graph of biomedical terminology.
The Ontology Lookup Service (OLS) (http://www.ebi.ac.uk/ols) integrates over 160 ontologies and provides a central point for the biomedical community to query and visualise ontologies. OLS also provides a RESTful API over the ontologies that is used in high-throughput data annotation pipelines. OLS is built on top of a Neo4j database that provides efficient indexes for extracting ontological relationships. We have developed generic tools for loading RDF/OWL ontologies into Neo4j where the indexes are optimised for serving common ontology queries. We are now moving to adopt graph databases more widely in applications relating to ontology mapping prediction and recommendation systems for data annotation.
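As an illustration of the kind of programmatic access such a RESTful API enables, the sketch below builds a term-lookup URL following the OLS convention of addressing terms by a double URL-encoded IRI. The exact endpoint paths are an assumption and may differ between OLS versions; treat this as a minimal sketch, not the definitive client.

```python
import urllib.parse

OLS_BASE = "http://www.ebi.ac.uk/ols/api"  # base URL as given in the abstract

def term_lookup_url(ontology: str, term_iri: str) -> str:
    """Build an OLS REST URL for looking up a single term.

    OLS addresses terms by their IRI, which must be double
    URL-encoded so that it can travel inside the request path.
    """
    once = urllib.parse.quote(term_iri, safe="")
    twice = urllib.parse.quote(once, safe="")
    return f"{OLS_BASE}/ontologies/{ontology}/terms/{twice}"

url = term_lookup_url("go", "http://purl.obolibrary.org/obo/GO_0008150")
print(url)
```

Fetching that URL with any HTTP client would return a JSON description of the term, which is what the annotation pipelines mentioned above consume.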
Research Data Sharing: A Basic Framework - Paul Groth
Some thoughts on thinking about data sharing. Prepared for the 2016 LERU Doctoral Summer School - Data Stewardship for Scientific Discovery and Innovation.
http://www.dtls.nl/fair-data/fair-data-training/leru-summer-school/
Why do they call it Linked Data when they want to say...? - Oscar Corcho
The four Linked Data publishing principles established in 2006 seem to be quite clear and well understood by people inside and outside the core Linked Data and Semantic Web community. However, not only when discussing the goodness of Linked Data with outsiders but also when reviewing papers for the COLD workshop series, I find myself, on many occasions, going back to the principles to see whether some approach for Web data publication and consumption is actually Linked Data or not. In this talk we will review some of the current approaches that we have for publishing data on the Web, and we will reflect on why it is sometimes so difficult to reach an agreement on what we understand by Linked Data. Furthermore, we will take the opportunity to describe yet another approach that we have been working on recently at the Center for Open Middleware, a joint technology center between Banco Santander and Universidad Politécnica de Madrid, to facilitate Linked Data consumption.
Research Objects for improved sharing and reproducibility - Oscar Corcho
Presentation about the usage of Research Objects to improve the sharing and reproducibility of scientific experiments, given at the Dagstuhl Perspective Workshop on the intersection between Computer Science and Psychology (July 2015).
Engaging Information Professionals in the Process of Authoritative Interlinki... - Lucy McKenna
Through the use of Linked Data (LD), Libraries, Archives and Museums (LAMs) have the potential to expose their collections to a larger audience and to allow for more efficient user searches. Despite this, relatively few LAMs have invested in LD projects, and the majority of these display limited interlinking across datasets and institutions. A survey was conducted to understand Information Professionals' (IPs') position with regard to LD, with a particular focus on the interlinking problem. The survey was completed by 185 librarians, archivists, metadata cataloguers and researchers. Results indicated that, when interlinking, IPs find the process of ontology and property selection particularly challenging, and LD tooling to be technologically complex and unsuitable for their needs.
Our research is focused on developing an authoritative interlinking framework for LAMs with a view to increasing IP engagement in the linking process. Our framework will provide a set of standards to facilitate IPs in the selection of link types, specifically when linking local resources to authorities. The framework will include guidelines for authority, ontology and property selection, and for adding provenance data. A user interface will be developed to direct IPs through the resource interlinking process as per our framework. Although there are existing tools in this domain, our framework differs in that it will be designed with the needs and expertise of IPs in mind. This will be achieved by involving IPs in the design and evaluation of the framework. A mock-up of the interface has already been tested and adjustments have been made based on the results. We are currently working on developing a minimum viable product so as to allow for further testing of the framework. We will present our updated framework, interface, and proposed interlinking solutions.
Ontology Evaluation: a pitfall-based approach to ontology diagnosis - María Poveda Villalón
Ontology evaluation, which includes ontology diagnosis and repair, is a complex activity that should be carried out in every ontology development project, because it checks for the technical quality of the ontology. However, there is an important gap between the methodological work about ontology evaluation and the tools that support such an activity. More precisely, not many approaches provide clear guidance about how to diagnose ontologies and how to repair them accordingly.
This thesis aims to advance the current state of the art of ontology evaluation, specifically in the ontology diagnosis activity. The main goals of this thesis are (a) to help ontology engineers to diagnose their ontologies in order to find common pitfalls and (b) to lessen the effort required from them by providing the suitable technological support. This thesis presents the following main contributions:
• A catalogue that describes 41 pitfalls that ontology developers might include in their ontologies.
• A quality model for ontology diagnosis that aligns the pitfall catalogue to existing quality models for semantic technologies.
• The design and implementation of 48 methods for detecting 33 out of the 41 pitfalls defined in the catalogue.
• A system called OOPS! (OntOlogy Pitfall Scanner!) that allows ontology engineers to (semi)automatically diagnose their ontologies.
According to the feedback gathered and the satisfaction tests carried out, the approach developed and presented in this thesis effectively helps users to increase the quality of their ontologies. At the time of writing, OOPS! has been broadly adopted worldwide and has been used around 3000 times from 60 different countries. OOPS! is integrated with third-party software and is installed locally in private enterprises, where it is used both for ontology development activities and training courses.
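To give a flavour of what (semi)automatic pitfall detection involves, here is a minimal, hypothetical sketch: it scans a toy ontology representation for two common problems, elements without labels and synonym-like duplicate classes. The dictionary data structure and the pitfall names are illustrative assumptions, not the actual OOPS! catalogue or implementation, which operates on real OWL ontologies.

```python
# A toy ontology: class name -> metadata. Purely illustrative; OOPS!
# operates on real OWL ontologies, not on dictionaries like this one.
ontology = {
    "Person":  {"label": "Person", "synonyms": []},
    "Human":   {"label": "Human", "synonyms": ["Person"]},
    "Vehicle": {"label": None, "synonyms": []},
}

def detect_pitfalls(onto):
    """Return a list of (pitfall, class) pairs for two toy checks."""
    findings = []
    for name, meta in onto.items():
        # Check 1: missing human-readable annotation (cf. "missing labels").
        if not meta.get("label"):
            findings.append(("missing-label", name))
        # Check 2: a declared synonym that also exists as a class
        # (cf. "creating synonyms as separate classes").
        for syn in meta.get("synonyms", []):
            if syn in onto:
                findings.append(("synonym-as-class", name))
    return findings

print(detect_pitfalls(ontology))
```

Each finding would then be presented to the ontology engineer for repair, which is what makes the diagnosis "semi"-automatic.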
Update on Financial Industry Business Ontology status, as presented to the Open Financial Data Group. Includes a description of the canonical reference model (business conceptual ontology) and the principles by which this was built.
Jarrar: Ontology Modeling using OntoClean Methodology - Mustafa Jarrar
Lecture Description:
Lecture video by Mustafa Jarrar at Birzeit University, Palestine.
See the course webpage at: http://jarrar-courses.blogspot.com/2012/04/aai-spring-jan-may-2012.html and http://www.jarrar.info
Ontology and Ontology Libraries: a Critical Study - Debashisnaskar
The concept of the digital library grew in popularity with the development of networking technology. A digital library stores various kinds of documents in digitized format, enabling users smooth access to these documents at subsidized costs. In the recent past, a similar concept, the ontology library, has gained popularity in communities such as the semantic web, artificial intelligence, information science, philosophy and linguistics.
Properties and Individuals in OWL: Reasoning About Family History - robertstevens65
Slides used in an advanced OWL tutorial in 2012. The tutorial is based on family history and uses OWL individuals as first-class citizens in the learning.
Webinar in which Mike Bennett describes the unique approach Hypercube applies to modeling business semantics (the method used in creating the EDM Council's FIBO Business Conceptual Ontology). The end result of creating this kind of business conceptual ontology is that a firm will have a single, canonical source of meaning across all its data resources, like a golden copy but in the semantics space - so we sometimes refer to this as a "Golden Ontology".
Mike explains the principles for creating an enterprise conceptual ontology. From this webinar you will learn:
3 things you need to know about ontologies
- Words are not Concepts
- Meaning is not Truth
- Syntax is not Semantics
3 things you need to do to build a Golden reference ontology:
- Classification
- Abstraction
- Partitioning
3 ways to use a Golden Ontology
- Querying across legacy data sources
- Mapping and data integration
- Reasoning with Semantic Web applications
An Ontology is a description of things that exist and how they relate to each other. Ontologies and Natural Language Processing (NLP) can often be seen as two sides of the same coin.
WIDOCO: A Wizard for Documenting Ontologies - dgarijo
WIDOCO is a WIzard for DOCumenting Ontologies that guides users through the documentation process of their vocabularies. Given an RDF vocabulary, WIDOCO detects missing vocabulary metadata and creates a documentation with diagrams, human-readable descriptions of the ontology terms and a summary of changes with respect to previous versions of the ontology. The documentation consists of a set of linked, enriched HTML pages that can be further extended by end users. WIDOCO is open source and builds on well-established Semantic Web tools. So far, it has been used to document more than one hundred ontologies in different domains.
LoCloud Vocabulary Services: Thesaurus management introduction, Walter Koch a... - locloud
This presentation provides an introduction to thesaurus management in the LoCloud Vocabulary Services, given during the LoCloud training workshops. It provides an introduction to controlled vocabularies, thesauri for information retrieval and interoperability, SKOS, multilingual vocabulary issues, and the federated model adopted for thesaurus management within the LoCloud service, which is based on TemaTres. The presentation includes a list of the vocabularies that have been integrated within the LoCloud service. There is also a walk-through of MediaThread and how this was used in the vocabulary management training offered in the workshop.
Presentation made in the context of the FAO AIMS Webinar titled “Knowledge Organization Systems (KOS): Management of Classification Systems in the case of Organic.Edunet” (http://aims.fao.org/community/blogs/new-webinaraims-knowledge-organization-systems-kos-management-classification-systems)
21/2/2014
The objective of this webinar is to provide a brief overview of Knowledge Organization Systems (KOS) and the tools used for managing them. The presentation will focus on the management of the multilingual Organic.Edunet ontology as a case study. In this context it will present aspects such as collaborative work, multilinguality needs and the updating of concepts using an online KOS management tool (MoKi).
TSS 2017: Terminology and Knowledge Organization Systems - Michael Wetzel
Terminology Summer School 2017
What is a KOS, what are its benefits, what are typical examples, and what role does it play in the Semantic Web? What is the difference between a classification, a taxonomy, a thesaurus and an ontology?
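One practical difference between these KOS types is how much structure they carry: a thesaurus, for instance, adds broader/narrower relations that applications can exploit for query expansion. A minimal sketch, using a hypothetical toy vocabulary (the terms and hierarchy below are invented for illustration, in the spirit of SKOS `skos:broader`):

```python
# A hypothetical thesaurus fragment: term -> its broader (parent) terms.
broader = {
    "sparrow": ["bird"],
    "bird": ["animal"],
    "oak": ["tree"],
    "tree": ["plant"],
}

def all_broader(term, rels):
    """Transitively collect every broader term (expanding a query upwards)."""
    seen = set()
    stack = list(rels.get(term, []))
    while stack:
        t = stack.pop()
        if t not in seen:
            seen.add(t)
            stack.extend(rels.get(t, []))
    return seen

print(sorted(all_broader("sparrow", broader)))  # ['animal', 'bird']
```

A classification or taxonomy stops at this kind of hierarchy; a full thesaurus adds related-term and synonym links, and an ontology adds typed relations and logical axioms on top.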
Description of Microservices as building blocks of a Micro-SOA. Result of the EU-funded project LoCloud.
Content:
• Project Context
• LoCloud Microservices
• Vocabulary Microservices
– Technical Insights (examples)
• Microservices and Modern System Architectures
– Deployment
– Marketplace
Bridging Informal MOOCs & Formal English for Academic Purposes Programmes wit... - Alannah Fitzgerald
Presented at the Teaching and Language Corpora (TaLC) Conference in Lancaster on July 23, 2014. Based on collaborative work with the FLAX Language Project (Shaoqun Wu and Ian Witten) and the Language Centre at Queen Mary University of London (Martin Barge, William Tweddle, Saima Sherazi).
Usage of Linked Data: Introduction and Application Scenarios - EUCLID project
This presentation introduces the main principles of Linked Data, the underlying technologies and background standards. It provides basic knowledge for how data can be published over the Web, how it can be queried, and what are the possible use cases and benefits. As an example, we use the development of a music portal (based on the MusicBrainz dataset), which facilitates access to a wide range of information and multimedia resources relating to music.
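As a concrete hint at how such published data is queried, the sketch below builds a SPARQL Protocol GET request URL. The endpoint URL and the query are illustrative assumptions (the Music Ontology and FOAF prefixes are real vocabularies, but the endpoint is hypothetical); real MusicBrainz-derived datasets expose their own endpoints.

```python
import urllib.parse

# Hypothetical endpoint; substitute a real SPARQL endpoint URL.
ENDPOINT = "http://example.org/sparql"

query = """
PREFIX mo: <http://purl.org/ontology/mo/>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?name WHERE {
  ?artist a mo:MusicArtist ;
          foaf:name ?name .
} LIMIT 10
"""

def sparql_get_url(endpoint, q):
    """Per the SPARQL 1.1 Protocol, a query can be sent as a GET
    request with the query text URL-encoded in the 'query' parameter."""
    return endpoint + "?" + urllib.parse.urlencode({"query": q})

url = sparql_get_url(ENDPOINT, query)
print(url[:60] + "...")
```

Any HTTP client can then fetch this URL; the endpoint typically answers with results in SPARQL XML or JSON format.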
Authoring OWL 2 ontologies with the TEX-OWL syntax - Mauro Dragoni
This work describes a new syntax that can be used to write OWL 2 ontologies. The syntax, known as TEX-OWL, was developed to address the need for an easy-to-read and easy-to-write plain-text syntax. TEX-OWL is inspired by LaTeX syntax and covers all constructs of OWL 2.
We designed TEX-OWL to be less verbose than the other OWL syntaxes, and easy to use, especially for quickly developing small ontologies with just a text editor.
The important features of the syntax are discussed in this work, and a reference implementation of a Java-based parser and writer is described.
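Since the abstract does not show the concrete grammar, the following is a purely hypothetical sketch of what parsing a LaTeX-style ontology command might look like. The `\subclass{A}{B}` command and its shape are invented for illustration and are not the actual TEX-OWL syntax.

```python
import re

# Hypothetical LaTeX-style axiom command, invented for illustration only.
AXIOM = re.compile(r"\\subclass\{(\w+)\}\{(\w+)\}")

def parse_subclass_axioms(text):
    """Extract (subclass, superclass) pairs from \\subclass{A}{B} commands."""
    return AXIOM.findall(text)

doc = r"\subclass{Dog}{Animal} \subclass{Cat}{Animal}"
print(parse_subclass_axioms(doc))
```

The appeal of such a syntax is exactly what the abstract claims: axioms this terse can be written in any text editor, while a parser maps them onto the corresponding OWL 2 constructs.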
Towards digitizing scholarly communication - Sören Auer
Slides of the VIVO 2016 Conference keynote: Despite the availability of ubiquitous connectivity and information technology, scholarly communication has not changed much in the last hundred years: research findings are still encoded in and decoded from linear, static articles and the possibilities of digitization are rarely used. In this talk, we will discuss strategies for digitizing scholarly communication. This comprises in particular: the use of machine-readable, dynamic content; the description and interlinking of research artifacts using Linked Data; the crowd-sourcing of multilingual educational and learning content. We discuss the relation of these developments to research information systems and how they could become part of an open ecosystem for scholarly communication.
A Bibliographic Playlist: Online Reference, Recommender, & Collaborative Acad... - Lorena O'English
PowerPoint presentation I gave at Washington State University about Zotero and Connotea - alternatives to (and expanders beyond) bibliographic management tools such as EndNote. Some slides are hard to read.
Similar to OEG-Tools for supporting Ontology Engineering (20)
Presentation of the "Coming to terms to FAIR semantics" paper for 22nd International Conference on Knowledge Engineering and Knowledge Management (EKAW 2020).
Due to the increasing uptake of semantic technologies, ontologies are becoming part of a growing number of software development projects. As a result, ontology development teams have to combine their activities with software development practices. In this presentation some practices, tools and examples of new trends in ontological engineering are provided.
Presentation for the paper: "Semantic Discovery in the Web of Things" at http://sisinflab.poliba.it/EnWoT/2017/
Abstract:
While the number of things present in the Web grows, the ability of discovering such things in order to successfully interact with them becomes a challenge, mainly due to heterogeneity.
The contribution of this paper is two-fold. First, an ontology-based approach to leverage web things discovery that is transparent to the syntax, protocols and formats used in things interfaces is described. Second, a semantic model for describing web things and how to extract and understand the relevant information for discovery is proposed.
Introduction to Linked Open Vocabularies http://lov.okfn.org/ during EUDAT2017 https://eudat.eu/events/trainings/eudat-semantic-working-group-at-9th-rda-plenary-barcelona-3-4-april-2017
Detecting Good Practices and Pitfalls when Publishing Vocabularies on the Web - María Poveda Villalón
The uptake of Linked Data (LD) has promoted the proliferation of datasets and their associated ontologies, which bring semantics to the data being published. These ontologies should be evaluated at different stages, both during their development and at publication time. As important as correctly modelling the intended part of the world to be captured in an ontology is publishing, sharing and facilitating the (re)use of the obtained model. In this paper, 11 evaluation characteristics with respect to publishing, sharing and facilitating reuse are proposed. In particular, 6 good practices and 5 pitfalls are presented, together with their associated detection methods. In addition, a grid-based rating system is generated showing the results of analysing the vocabularies gathered in the LOV repository. Both contributions, the set of evaluation characteristics and the grid system, could be useful for ontologists wishing to reuse existing LD vocabularies or to check the one being built.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... - DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
DevOps and Testing slides at DASA Connect - Kari Kakkonen
Slides by me and Rik Marselis at the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally Testing in DevOps. We closed with a lovely workshop in which the participants tried to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor... - SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor... - Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ... - James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for technology and making things work, along with a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori (Peter Spielvogel)
Building better applications for business users with SAP Fiori.
• What is SAP Fiori and why it matters to you
• How a better user experience drives measurable business benefits
• How to get started with SAP Fiori today
• How SAP Fiori elements accelerates application development
• How SAP Build Code includes SAP Fiori tools and other generative artificial intelligence capabilities
• How SAP Fiori paves the way for using AI in SAP apps
Transcript: Selling digital books in 2024: Insights from industry leaders - T... (BookNet Canada)
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... (James Anderson)
Smart TV Buyer Insights Survey 2024 (91mobiles)
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
The talk discusses the climate impact and sustainability of software testing. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Sustainability can be added to the quality characteristics and then measured continuously. Test environments can be used less, at a smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024 (Neo4j)
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
OEG-Tools for supporting Ontology Engineering
1. OEG-Tools for supporting Ontology Engineering
Idafen Santana-Pérez (isantana@fi.upm.es, @idafensp), post-doc at OEG
María Poveda-Villalón (mpoveda@fi.upm.es, @MariaPovedaV), post-doc at OEG
Daniel Garijo (dgarijo@isi.edu, @dgarijov), visiting researcher
Speakers in lexicographical order
Madrid, 21st July 2016
8. Table of contents (diagram of the numbered sections Evaluation, Documentation, Publication and Registration, plus Vocabularium, with "integrates" and "based on" relations between the tools)
9. Table of contents (same diagram, starting with section 1)
10. Adding a vocabulary to vocab.linkeddata.es
• Go to https://github.com/oeg-upm/vocabTest
• Edit Vocabularies.csv file
• Add your vocabulary URI and keywords
• You can remove the previous entries
• Commit changes and wait. You can see your progress here:
http://jarsomatic.linkeddata.es/
You can even do it through the website!
11. Adding a vocabulary to vocab.linkeddata.es
• Once the pull request is created, accept it. Check the results:
https://oeg-upm.github.io/vocabTest/site/
• Edit your vocabulary and repeat until fixed.
If no vocabulary has been added, check the report:
12. Adding a vocabulary to vocab.linkeddata.es
• Go to https://github.com/oeg-upm/vocabUpdates/
• Edit Vocabularies.csv file
• Add your vocabulary URI and keywords
• You MUST NOT remove the previous entries
• Commit changes and wait. You can see your progress here:
http://jarsomatic.linkeddata.es/. Accept the pull request and
contact Raul for re-deployment.
You can even do it through the website!
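The registration workflow on slides 10-12 boils down to appending a row to Vocabularies.csv while (for vocabUpdates) keeping all previous entries. A minimal sketch in Python; the two-column layout (URI, keywords) is an assumption and may differ from the actual file format:

```python
import csv
import io

def add_vocabulary(csv_text, uri, keywords):
    """Append a (URI, keywords) row to the Vocabularies.csv content,
    preserving every existing entry (required by the update workflow)."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    rows.append([uri, keywords])
    out = io.StringIO()
    csv.writer(out, lineterminator="\n").writerows(rows)
    return out.getvalue()

# Hypothetical existing entry plus the new vocabulary to register.
updated = add_vocabulary(
    "http://example.org/ont1,existing vocabulary\n",
    "https://w3id.org/def/test",
    "test, ontology engineering",
)
print(updated)
```

After committing the edited file, the build at jarsomatic.linkeddata.es picks it up, as the slides describe.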
17. Publishing an ontology with w3id
https://w3id.org/def/test or
https://w3id.org/vocabulary/test
•Content negotiation enabled
•Can be published within OnToology or in Github
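Because content negotiation is enabled on these w3id URIs, a client asks for a serialization via the Accept header. A minimal sketch with Python's urllib, using the example URI from the slide; the request is only built here, not sent:

```python
import urllib.request

def rdf_request(uri, media_type="text/turtle"):
    """Build an HTTP request for an ontology URI, asking for a
    specific RDF serialization via the Accept header."""
    return urllib.request.Request(uri, headers={"Accept": media_type})

req = rdf_request("https://w3id.org/def/test")
print(req.get_header("Accept"))  # the header the server negotiates on
```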
18. Creating a landing page for a repository
Extracted from ontology metadata
Errors produced in generation
• Created by OnToology using VocabLite
• Less restrictive than vocab
• Helps access the contents of a repository
19. Table of contents (same diagram, moving on to section 3: Vocabularium)
20. Vocabularium
Vocabularium is an application to serve ontologies online, including:
1) different serializations (RDF/XML, Turtle)
2) HTML documentation
• in different languages (en, es, etc.)
3) different versions
Vocabularium does not rely on .htaccess; it implements content negotiation itself by processing the HTTP Accept headers.
https://github.com/SmartDeveloperHub/sdh-vocabulary
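The Accept-header processing mentioned above can be sketched as follows. This is an illustrative approximation, not Vocabularium's actual code; the supported media types and the HTML fallback are assumptions:

```python
def pick_serialization(accept_header):
    """Choose a representation from an HTTP Accept header, in the
    spirit of Vocabularium's content negotiation."""
    # Illustrative subset of media types mapped to representations.
    supported = {
        "application/rdf+xml": "RDF/XML",
        "text/turtle": "Turtle",
        "text/html": "HTML documentation",
    }
    # Parse entries like "text/turtle;q=0.9" into (quality, media type).
    entries = []
    for part in accept_header.split(","):
        fields = part.strip().split(";")
        media_type = fields[0].strip()
        quality = 1.0  # default per the HTTP spec
        for field in fields[1:]:
            if field.strip().startswith("q="):
                quality = float(field.strip()[2:])
        entries.append((quality, media_type))
    # Serve the highest-quality media type we support.
    for _, media_type in sorted(entries, key=lambda e: -e[0]):
        if media_type in supported:
            return supported[media_type]
    return "HTML documentation"  # assumed fallback for plain browsers

print(pick_serialization("text/turtle;q=0.9, application/rdf+xml"))
```

With the header above, application/rdf+xml has the implicit quality 1.0 and wins over text/turtle at 0.9.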
21. Vocabularium
Vocabularium follows a given folder structure for serving the files.
There is one base URI for all the vocabularies served by a single instance; each instance can serve one or more ontologies under the same base URI.
Example configuration:
  base: http://ont-loupe.linkeddata.es/def/
  root: /opt/vocabularium-0.1/loupe/src/ontology
  server:
    host: 4v.dia.fi.upm.es
    port: 8081
  docs:
    root: /opt/vocabularium-0.1/docs
    relativePath: docs
Folder structure (served files):
  /opt/vocabularium-0.1/loupe/src/ontology/loupe-ontology.owl
  docs: loupe-ontology.owl, index-en.html, index-es.html
The docs folder can be a symlink to the documentation generated by OnToology (src/ontology → OnToology/loupe/documentation/…).
Example URIs:
  http://mydomain.linkeddata.es/def/ont1
  http://mydomain.linkeddata.es/def/ont2
  http://onto.mydomain.linkeddata.es/def/
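The base/root pairing in the configuration amounts to mapping a request URI under the base onto a file under the root folder. A minimal sketch under that assumption, reusing the example values from slide 21 (the actual resolution logic of Vocabularium may differ):

```python
from pathlib import PurePosixPath

# Example values from the slide's configuration.
BASE = "http://ont-loupe.linkeddata.es/def/"
ROOT = PurePosixPath("/opt/vocabularium-0.1/loupe/src/ontology")

def resolve(request_uri, extension="owl"):
    """Map a request under the base URI to a file under the root folder."""
    if not request_uri.startswith(BASE):
        raise ValueError("URI outside the configured base")
    relative = request_uri[len(BASE):]  # e.g. "loupe-ontology"
    return ROOT / f"{relative}.{extension}"

print(resolve("http://ont-loupe.linkeddata.es/def/loupe-ontology"))
```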
22. Future
• vocab.linkeddata.es
• Automate the publication of the updated site
• Form to improve the GitHub workflow
• OnToology
• Interface
• Bug fixing
• AR2DTool:
• Improve engine
• Improve GUI
• Widoco
• Bug fixing
• OOPS!
• Extensions
• Improve interface
• Vocabularium
• Issues in GitHub
• No support