Selected paper presented at W3C LOD2014 with F. Ciotti, M. Lana, D. Magro, S. Peroni, F. Vitali
"Linked Open Data: where are we?", Archivio Centrale dello Stato (Rome, 20 February 2014).
Methods and experiences in cultural heritage enhancement
1. LOD2014 LINKED OPEN DATA: WHERE ARE WE?
METHODS AND EXPERIENCES IN CULTURAL HERITAGE ENHANCEMENT
Roma, 20th - 21st Feb 2014
Archivio Centrale dello Stato, Roma
Organized by W3C Italy
Francesca Tomasi
University of Bologna
Fabio Ciotti
University of Roma Tor Vergata
Maurizio Lana
University of Piemonte Orientale
Diego Magro
University of Torino
Silvio Peroni
University of Bologna
Fabio Vitali
University of Bologna
2. THE PROJECT
CH and LOD
Our approach: conversion, extraction, creation
Database conversion into LOD;
Extraction of LOD from XML/TEI texts;
Creation of new ontologies to produce LOD.
The CH domain: people and roles, ancient and modern places, books and archival documents
The aim: best practices in LOD production and dissemination in the CH domain
Common strategy:
ontology creation and reuse;
stand-off markup and Open Annotation Data Model
4. ZERI PHOTO ARCHIVE
It "is a rich digital catalog, and is today considered one of the most important repertories of Italian art on the web".
Our mission is to convert the database into LOD:
reengineer the E/R model implemented by the database tables, which contain data according to the Scheda F, into OWL, so as to obtain a first version of an ontology;
iteratively enhance the ontology according to the specifications described by the Scheda F and CIDOC-CRM (changing the overall conceptual organisation and entity naming of the existing model as little as possible);
by using appropriate scripts, transform the data originally stored in the database into RDF statements compliant with the OWL ontology developed;
apply automatic and semi-automatic mechanisms to generate links to existing datasets, such as DBpedia and Europeana.
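As a rough illustration of the row-to-RDF step above, the mapping can be sketched as follows; the base URI, the field names and the `zeri:*` predicates here are invented for the example and are not the project's actual model:

```python
# Toy sketch of a DB-row-to-RDF conversion script. The namespace, the row
# fields and the zeri:* predicates are hypothetical, not the real Zeri ontology.
BASE = "http://example.org/zeri/"

def row_to_triples(row):
    """Map one flat database row (a Scheda F-like record) to
    (subject, predicate, object) triples."""
    subject = BASE + "photo/" + row["id"]
    return [
        (subject, "rdf:type", "zeri:Photograph"),
        (subject, "zeri:hasTitle", row["title"]),
        (subject, "zeri:depicts", BASE + "artwork/" + row["artwork_id"]),
    ]

# One converted row, printed as N-Triples-like lines
for s, p, o in row_to_triples(
    {"id": "F1234", "title": "Madonna col Bambino", "artwork_id": "OA567"}
):
    print(s, p, o)
```

In a real pipeline the same function would be applied to every row of every table, with a per-table mapping to the ontology classes.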
5. ZERI: THE PROCESS. ONTOLOGY REUSE AND LOD POPULATION
[Diagram: the Scheda F describes the Photograph and the Scheda OA describes the Work of Art, which is the subject of the Photograph; both descriptions are aligned with the FRBR entities (Work, Expression, Manifestation, Item). From the Fondazione Zeri database, the ontology is created out of the E/R model and the data in the DB; links to FRBR and to the LOD cloud are then added.]
6. VESPASIANO, LETTERS: A DIGITAL EDITION
A digitally annotated (XML/TEI) collection of letters from the 15th century sent/received to/by the Florentine copyist Vespasiano da Bisticci.
A web environment that focuses on: persons mentioned in the documents; classical Latin and Greek manuscripts requested/copied/proposed to/by Vespasiano da Bisticci's school, and their description.
The purpose is to identify the persons related to the manuscripts, in order to expose datasets of people connected to manuscripts, the latter described by technical terms.
The XML/TEI annotation (persons, manuscripts and technical terms) has been realized with embedded markup (@ref="URI") pointing to a stand-off RDF file (holding the assertions) and to controlled forms of the names (VIAF, LCA, Geonames, etc.) for managing attribute values.
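A minimal sketch of this embedded-markup mechanism, assuming a TEI-like fragment (the element names and identifiers below are invented for the example): the @ref attributes are collected and can then be resolved against the stand-off RDF file.

```python
# Extract the @ref pointers from an embedded-markup fragment; the TEI-like
# XML and the identifiers are invented for this example.
import xml.etree.ElementTree as ET

tei = """<p>
  <persName ref="people.rdf#PdM">Piero</persName> requested the
  <name type="manuscript" ref="manuscripts.rdf#P_SN">Storia naturale</name>.
</p>"""

root = ET.fromstring(tei)
refs = [el.get("ref") for el in root.iter() if el.get("ref") is not None]
print(refs)  # pointers to be resolved in the stand-off RDF file
```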
7. VESPASIANO: THE MODEL. RDF SUPPORT TO STAND-OFF ANNOTATION
SUBJECT: people.rdf#PdM (URI: http://vespasianodabisticciletters/people/PdM)
has_normalized_form: Medici, Piero de’ (DBpedia: http://eu.dbpedia.org/page/Piero_de_Medici; VIAF: http://viaf.org/viaf/25406033)
has_variant_forms: Piero; Piero di Cosimo de’ Medici; Principe di Firenze
is_owner_of: manuscripts.rdf#P_SN; manuscripts.rdf#L_D_III; manuscripts.rdf#L_D_IV_E

SUBJECT: manuscripts.rdf#P_SN (URI: http://vespasianodabisticciletters/manuscripts/P_SN)
has_normalized_form: Plinio, Storia naturale
is_requested_by: people.rdf#PdM
is_owned_by: people.rdf#PdM
is_copied_by: people.rdf#PS
is_illuminated_by: people.rdf#FT

SUBJECT: lexicon.rdf#min (URI: http://vespasianodabisticciletters/lexicon/min)
has_normalized_form: miniare, miniatura, miniato
is_referred_to: manuscripts.rdf#L_D_IV_E
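The assertions above can be read as plain subject-predicate-object triples; a toy in-memory sketch (not the edition's actual tooling) shows how they can be queried:

```python
# A few of the stand-off assertions from the slide, held as plain triples.
triples = {
    ("people.rdf#PdM", "is_owner_of", "manuscripts.rdf#P_SN"),
    ("people.rdf#PdM", "is_owner_of", "manuscripts.rdf#L_D_III"),
    ("people.rdf#PdM", "is_owner_of", "manuscripts.rdf#L_D_IV_E"),
    ("manuscripts.rdf#P_SN", "is_requested_by", "people.rdf#PdM"),
    ("manuscripts.rdf#P_SN", "is_copied_by", "people.rdf#PS"),
    ("lexicon.rdf#min", "is_referred_to", "manuscripts.rdf#L_D_IV_E"),
}

def objects(subject, predicate):
    """All objects of triples matching (subject, predicate, ?)."""
    return sorted(o for s, p, o in triples if s == subject and p == predicate)

# Every manuscript owned by Piero de' Medici
print(objects("people.rdf#PdM", "is_owner_of"))
```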
8. Work in progress
Main aims:
increasing the value of geographic references in Latin texts;
enabling innovative access to Latin works (e.g. through geography);
contributing to the LOD cloud.
GEOLAT (PROJECT FUNDED BY COMPAGNIA DI SAN PAOLO)
9. GEOLAT: THE FRAMEWORK
[Diagram: the digilibLT XML/TEI resources feed two pipelines: automatic extraction produces bibliographic-resource RDF data, specified according to the Bibliographic Resource Ontology (bro); computer-aided annotation (geographic NER) produces annotations, specified according to the Open Annotation Data Model (oa), which bridge the gap to geographic-entity RDF data, specified according to the Ancient World Geographic Ontology (awgo) and mapped to other datasets (e.g. Pleiades).]
10. GEOLAT: THE MODEL (SIMPLIFIED)
"Primae frugiparos fetus mortalibus aegris / dididerunt quondam praeclaro nomine Athenae / et recreaverunt vitam legesque rogarunt [...]" (De rerum natura, Book VI)
[Diagram: athenaeWord (rdf:type bro:TextFragment) isPartOf DRN_BookVI (rdf:type bro:Book), which isPartOf deRerumNatura (rdf:type bro:LiteraryWork); anno1 (rdf:type oa:Annotation) has athenaeWord as oa:hasTarget and a trig:Graph as oa:hasBody; in that graph athenaeWord bro:identifies athens (rdf:type awgo:GreekPolis), which is awgo:locatedIn geographicSpace1 (rdf:type awgo:GeographicSpace) and skos:closeMatch pleiades:579885.]
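The same simplified model can be written out as plain triples; a toy sketch using the slide's abbreviated identifiers (not the project's actual data), with a small query that walks the part-of chain:

```python
# The (simplified) GeoLat annotation as plain triples, using the slide's
# abbreviated identifiers.
geolat = [
    ("anno1", "rdf:type", "oa:Annotation"),
    ("anno1", "oa:hasTarget", "athenaeWord"),
    ("athenaeWord", "rdf:type", "bro:TextFragment"),
    ("athenaeWord", "isPartOf", "DRN_BookVI"),
    ("DRN_BookVI", "rdf:type", "bro:Book"),
    ("DRN_BookVI", "isPartOf", "deRerumNatura"),
    ("deRerumNatura", "rdf:type", "bro:LiteraryWork"),
    ("athenaeWord", "bro:identifies", "athens"),
    ("athens", "rdf:type", "awgo:GreekPolis"),
    ("athens", "awgo:locatedIn", "geographicSpace1"),
    ("athens", "skos:closeMatch", "pleiades:579885"),
]

def containing_work(node):
    """Follow isPartOf links upwards until no parent is left."""
    while True:
        parents = [o for s, p, o in geolat if s == node and p == "isPartOf"]
        if not parents:
            return node
        node = parents[0]

# The literary work that contains the annotated word "Athenae"
print(containing_work("athenaeWord"))
```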
11. AN ARCHIVAL ONTOLOGY: PROLES
The Political Roles (PRoles) Ontology is an OWL 2 DL ontology for representing political role attributions and their possible links to related events, by means of classes and properties that import and reuse several concepts from PRO, the n-ary participation pattern and PROV-O.
We are now running an experiment on the Andrea Costa fonds, exploiting the related authority record (http://archivi.ibc.regione.emilia-romagna.it/eac-cpf/IT-ER-IBC-SP00001-0000264), in collaboration with IBC, Soprintendenza per i Beni librari e documentari.
12. PROLES: THE MODEL. ONTOLOGY CREATION AND REUSE
The PRoles Ontology is organized in three layers: the first layer covers role attribution; the second, participation in events; the third, provenance information.
13. FINAL REMARKS
The common method:
Ontology reuse;
Definition of new classes and predicates;
Ontology as the basis for LOD creation;
Stand-off markup and OA data model;
LOD cloud population;
Mapping to other datasets
14. THANK YOU! FRANCESCA, FABIO C., MAURIZIO, DIEGO, SILVIO, FABIO V. THE GEOLAT RESEARCH IS FUNDED BY FONDAZIONE COMPAGNIA DI SAN PAOLO