Applying Digital Library Metadata Standards - Jenn Riley
Riley, Jenn. "Applying Digital Library Metadata Standards." Presentation sponsored by the Private Academic Library Network of Indiana (PALNI), May 9, 2006.
Big Data to SMART Data : Process scenario
A scenario implementing a process that transforms raw data into exploitable, representative data, covering stream processing, distributed systems, messaging, storage in a NoSQL environment, management within a Big Data ecosystem, and graphical visualization of the data, using the following technologies:
Apache Storm, Apache Zookeeper, Apache Kafka, Apache Cassandra, Apache Spark, and Data-Driven Documents (D3.js).
All about Big Data components and the best tools to ingest, process, store and visualize the data.
This is a keynote from the series "by Developer for Developers" powered by eSolutionsGrup.
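The pipeline the talk describes (ingest via messages, stream processing, storage) can be sketched in miniature. The code below is an illustrative stand-in that uses in-memory queues rather than the actual Kafka/Storm/Cassandra stack; all names and the toy data are invented for the example:

```python
from queue import Queue

def ingest(messages, topic: Queue) -> None:
    """Stand-in for a Kafka producer: push raw events onto a topic."""
    for msg in messages:
        topic.put(msg)
    topic.put(None)  # end-of-stream marker

def process(topic: Queue):
    """Stand-in for a Storm/Spark streaming job: clean and enrich each event."""
    while (msg := topic.get()) is not None:
        yield {"user": msg["user"].lower(), "clicks": int(msg["clicks"])}

def store(events, table: dict) -> dict:
    """Stand-in for a NoSQL write: aggregate clicks per user key."""
    for e in events:
        table[e["user"]] = table.get(e["user"], 0) + e["clicks"]
    return table

topic: Queue = Queue()
ingest([{"user": "Alice", "clicks": "3"}, {"user": "alice", "clicks": "2"}], topic)
result = store(process(topic), {})
print(result)  # {'alice': 5}
```

A visualization layer such as D3.js would then read the aggregated table and render it.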
"Get Ready for Big Data" presentation from Gilbane Boston 2011; for more details, see http://gilbaneboston.com/conference_program.html#t2 and http://pbokelly.blogspot.com/2011/12/gilbane-boston-2011-big-data.html
Challenging Problems for Scalable Mining of Heterogeneous Social and Informat... - BigMine
In today's interconnected world, social and informational entities link together to form gigantic, integrated social and information networks. By structuring these data objects into multiple types, such networks become semi-structured heterogeneous social and information networks. Most real-world applications that handle big data, including social media and social networks, medical information systems, online e-commerce systems, and database systems, can be structured into typed, heterogeneous social and information networks. For example, in a medical care network, objects of multiple types, such as patients, doctors, diseases, and medications, and links such as visits, diagnoses, and treatments are intertwined, providing rich information and forming heterogeneous information networks. Effective analysis of large-scale heterogeneous social and information networks poses an interesting but critical challenge.
In this talk, we present a set of data mining scenarios in heterogeneous social and information networks and show that mining typed, heterogeneous networks is a new and promising research frontier in data mining research. However, such mining raises serious challenges for scalable computation. We identify a set of problems on scalable computation and call for serious study of them: how to compute efficiently (1) meta path-based similarity search, (2) rank-based clustering, (3) rank-based classification, (4) meta path-based link/relationship prediction, and (5) topical hierarchies from heterogeneous information networks. We introduce some recent efforts, discuss the trade-offs between query-independent pre-computation and query-dependent online computation, and point out some promising research directions.
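Meta path-based similarity, the first problem listed, has a well-known concrete instance in the PathSim measure (Sun et al.). A minimal pure-Python sketch follows; the toy author-paper matrix is invented for illustration:

```python
def pathsim(adj, i, j):
    """PathSim similarity between objects i and j for a symmetric meta-path
    whose commuting matrix is M = A @ A.T (e.g. author-paper-author).
    adj: rows = objects of the start type, cols = objects of the middle type."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    m_ij = dot(adj[i], adj[j])  # number of meta-path instances between i and j
    m_ii = dot(adj[i], adj[i])  # paths from i back to itself
    m_jj = dot(adj[j], adj[j])  # paths from j back to itself
    return 2 * m_ij / (m_ii + m_jj) if (m_ii + m_jj) else 0.0

# Toy author-paper incidence matrix: 3 authors, 3 papers.
A = [
    [1, 1, 0],  # author 0 wrote papers 0 and 1
    [1, 1, 1],  # author 1 wrote papers 0, 1 and 2
    [0, 0, 1],  # author 2 wrote paper 2
]
print(pathsim(A, 0, 1))  # 0.8 -> authors 0 and 1 share most of their papers
print(pathsim(A, 0, 2))  # 0.0 -> no shared papers
```

The scalability challenge the talk raises is exactly that computing this naively for all pairs is quadratic, which motivates the pre-computation vs. online-computation trade-off mentioned above.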
Information and Integration Management Vision - Colin Bell
The vision of the Information and Integration Management team at the University of Waterloo captured on a single 'poster' page. Covers: Data Management Environment, Mission + Vision, Information Asset Base, Information Lifecycle, Document Management, Metadata/Meaning, Integration Platform, and Innovation Platform.
This white paper presents the opportunities offered by the data lake and advanced analytics, as well as the challenges of integrating, mining, and analyzing the data collected from these sources. It covers the important characteristics of the data lake architecture and the Data and Analytics as a Service (DAaaS) model. It also delves into the features of a successful data lake and its optimal design, and shows how data, applications, and analytics are strung together to speed up the insight-generation process, supporting industry improvements with a powerful architecture for mining and analyzing unstructured data: the data lake.
Enough talking about Big Data and Hadoop; let's see how Hadoop works in action.
We will locate a real dataset, ingest it into our cluster, connect it to a database, apply some queries and data transformations to it, save our result, and show it via a BI tool.
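The flow just described (locate, ingest, query/transform, save) can be sketched with Python's standard-library sqlite3 module standing in for the cluster database (Hive, HBase, etc.); the dataset and table names are invented for the example:

```python
import csv
import io
import sqlite3

# 1. "Locate" a dataset (an inline CSV standing in for a real file on HDFS).
raw = io.StringIO("city,temp\nParis,21\nLyon,25\nParis,19\n")

# 2. Ingest it into the database.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE weather (city TEXT, temp INTEGER)")
con.executemany("INSERT INTO weather VALUES (?, ?)",
                [(r["city"], int(r["temp"])) for r in csv.DictReader(raw)])

# 3. Apply a query / transformation: average temperature per city.
rows = con.execute(
    "SELECT city, AVG(temp) FROM weather GROUP BY city ORDER BY city"
).fetchall()

# 4. "Save" the result; a BI tool would chart it from here.
print(rows)  # [('Lyon', 25.0), ('Paris', 20.0)]
```

On a real cluster the same GROUP BY query would run through Hive or Spark SQL over files in HDFS rather than an in-memory table.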
Lecture at the "SEEDS Kick-off meeting" event, FORS, Lausanne, Switzerland.
Related materials: http://www.snf.ch/en/funding/programmes/scopes/Pages/default.aspx
http://seedsproject.ch/?page_id=368
Introduction to Big Data Hadoop Training Online by www.itjobzone.biz - ITJobZone.biz
Want to learn Hadoop online? This presentation gives you an introduction to Big Data Hadoop training online by expert trainers at ITJobZone.biz. Start your Hadoop online training with this presentation.
The importance of capturing metadata has been a topic of many webinars, teleconferences, and white papers over the last several years. There has also been an increasing emphasis on "building metadata repositories".
At an event organized jointly by Vodafone, Cyberpark, and the Technology Development Foundation of Turkey (Türkiye Teknoloji Geliştirme Vakfı), the concept of big data, the Apache Hadoop ecosystem, and example applications from Turkey and around the world were presented.
June 1, 2016 - Onur Karadeli, Mustafa Murat Sever
December 2013 webinar from the EUCLID project on managing large volumes of Linked Data
webinar recording at https://vimeo.com/84126769 and https://vimeo.com/84126770
more info on EUCLID: http://euclid-project.eu/
Usage of Linked Data: Introduction and Application Scenarios - EUCLID project
This presentation introduces the main principles of Linked Data, the underlying technologies, and background standards. It provides basic knowledge of how data can be published over the Web, how it can be queried, and what the possible use cases and benefits are. As an example, we use the development of a music portal (based on the MusicBrainz dataset), which facilitates access to a wide range of information and multimedia resources relating to music.
Linked Data for Enterprise Data Integration - Sören Auer
The Web is evolving into a Web of Data. In parallel, the intranets of large companies will evolve into data intranets based on the Linked Data principles. Linked Data has the potential to complement the SOA paradigm with a light-weight, adaptive data-integration approach.
From the Feb 19 2014 NISO Virtual Conference: The Semantic Web Coming of Age: Technologies and Implementations
The Web of Data - Ralph Swick, Domain Lead of the Information and Knowledge Domain at W3C
An introduction deck on the Web of Data for my team, covering the basics of the Semantic Web and Linked Open Data, then DBpedia, the Linked Data Integration Framework (LDIF), the Common Crawl database, and Web Data Commons.
This presentation addresses the main issues of Linked Data and scalability. In particular, it gives details on approaches and technologies for clustering, distributing, sharing, and caching data. Furthermore, it addresses the means for publishing data through cloud deployment and the relationship between Big Data and Linked Data, exploring how some of the solutions can be transferred to the context of Linked Data.
This is part 2 of the ISWC 2009 tutorial on the GoodRelations ontology and RDFa for e-commerce on the Web of Linked Data.
See also
http://www.ebusiness-unibw.org/wiki/Web_of_Data_for_E-Commerce_Tutorial_ISWC2009
Semantic web technologies and applications for Ins... - TemesgenHabtamu
With the spread of online banking, increasing competition has elevated the need for providing excellent customer service in the banking and insurance sector. Digital also offers insurers new ways to cut costs and an opportunity to bring real additional value to the customer experience.
Sigma EE: Reaping low-hanging fruits in RDF-based data integration - Richard Cyganiak
A presentation I gave at I-Semantics 2010 on Sigma EE, an RDF-based data integration front-end.
Sigma EE is now available for download here: http://sig.ma/?page=help
Repositories are systems to safely store and publish digital objects and their descriptive metadata. Repositories mainly serve their data by using web interfaces which are primarily oriented towards human consumption. They either hide their data behind non-generic interfaces or do not publish them at all in a way a computer can process easily. At the same time the data stored in repositories are particularly suited to be used in the Semantic Web as metadata are already available. They do not have to be generated or entered manually for publication as Linked Data. In my talk I will present a concept of how metadata and digital objects stored in repositories can be woven into the Linked (Open) Data Cloud and which characteristics of repositories have to be considered while doing so. One problem it targets is the use of existing metadata to present Linked Data. The concept can be applied to almost every repository software. At the end of my talk I will present an implementation for DSpace, one of the software solutions for repositories most widely used. With this implementation every institution using DSpace should become able to export their repository content as Linked Data.
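As an illustration of the concept described above, repository metadata might surface as Linked Data in Turtle along these lines (the URIs and vocabulary choices here are hypothetical, not the actual output of the DSpace implementation):

```turtle
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix foaf:    <http://xmlns.com/foaf/0.1/> .

# A repository item: its descriptive metadata already exists,
# so it only has to be re-expressed as RDF, not entered manually.
<http://repository.example.org/item/123>
    dcterms:title     "A Sample Thesis" ;
    dcterms:creator   <http://repository.example.org/person/jdoe> ;
    dcterms:issued    "2013-05-01" ;
    dcterms:hasFormat <http://repository.example.org/bitstream/123.pdf> .

<http://repository.example.org/person/jdoe> foaf:name "Jane Doe" .
```

Dereferenceable URIs like these are what weave repository content into the Linked (Open) Data Cloud.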
This tutorial explains the Data Web vision, some preliminary standards and technologies, as well as some tools and technological building blocks developed by the AKSW research group at Universität Leipzig.
The Ina "data lake", a project to place data at the heart of the or... - Gautier Poupeau
Slides from the talk given during the session devoted to data lakes at the "Nouveaux paradigmes de l'Archive" seminar organized by DICEN-CNAM and the Archives nationales.
A guided tour of the land of data - From the conceptual model to the physical model - Gautier Poupeau
This slide deck is the third in a series that aims to give an overview of data management in the era of big data and artificial intelligence. This part shows how one moves from data modeling to data storage. It surveys the various data-storage solutions and presents their particularities, strengths, and weaknesses.
A guided tour of the land of data - Automatic data processing - Gautier Poupeau
This slide deck is the second in a series that aims to give an overview of data management in the era of big data and artificial intelligence. This second part presents automatic data processing: artificial intelligence, text and data mining, and automatic processing of language and images. After defining these different fields, the presentation surveys the various tools available for analyzing audiovisual content.
A guided tour of the land of data - Introduction and overview - Gautier Poupeau
This slide deck is the first in a series that aims to give an overview of data management in the era of big data and artificial intelligence. This first part reviews the reasons that make data an asset independent of our information system and proposes a representation of data management.
A single data model for Ina's collections: why? How? - Gautier Poupeau
Slides from the talk given at INHA's "lundis du numérique" on February 11, 2019, about the Institut national de l'audiovisuel's data-oriented strategy for overhauling its information system, based on a centralized infrastructure for storing and processing data and a single data model to bring coherence to all of Ina's data.
Big data, artificial intelligence: what consequences for information profession... - Gautier Poupeau
Slides from the webinar organized on February 21 by Ina Expert on how the positioning of information professionals within organizations is evolving in the face of ongoing changes: the rise of data at the expense of the document, big data, and artificial intelligence.
Aligning your data with Wikidata using OpenRefine - Gautier Poupeau
A step-by-step tutorial for aligning data with Wikidata using the OpenRefine tool. In this tutorial, the aligned data comes from the HAL platform, retrieved via its SPARQL endpoint.
A tutorial, in the form of exercises, for discovering the SPARQL endpoint provided by the HAL platform, the open archive of scientific articles from all disciplines of French research institutions. Note: this tutorial assumes prior knowledge of the SPARQL query language.
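As a taste of what querying such an endpoint looks like, here is a generic SPARQL sketch (the property shown is a generic Dublin Core term; HAL's actual schema may differ):

```sparql
PREFIX dcterms: <http://purl.org/dc/terms/>

# Illustrative only: list the titles of ten documents.
SELECT ?doc ?title WHERE {
  ?doc dcterms:title ?title .
}
LIMIT 10
```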
Building a data mashup with Dataiku DSS and visualizing it with ... - Gautier Poupeau
cf. the first part: https://www.slideshare.net/lespetitescases/ralisation-dun-mashup-de-donnes-avec-dss-de-dataiku-premire-partie
A tutorial for building a mashup from open datasets downloaded from data.gouv.fr and Wikidata, among others, with the Dataiku DSS software. This second part covers querying Wikidata with a SPARQL query, then shows how to link the data.gouv.fr datasets with the data from Wikidata. Finally, it covers data visualization with the online application Palladio.
This tutorial served as course material for the Master 2 "Technologies numériques appliquées à l'histoire" at the École nationale des chartes during the 2016-2017 academic year.
Building a data mashup with Dataiku DSS - Part one - Gautier Poupeau
Cf. the second part: https://www.slideshare.net/lespetitescases/ralisation-dun-mashup-de-donnes-avec-dss-de-dataiku-et-visualisation-avec-palladio-deuxime-partie
A tutorial for building a mashup from open datasets downloaded from data.gouv.fr and Wikidata, among others, with the Dataiku DSS software. After an introduction to the notion of a mashup with examples, this first part focuses on preparing two datasets from data.gouv.fr originating from the Centre national du cinéma.
This tutorial served as course material for the Master 2 "Technologies numériques appliquées à l'histoire" at the École nationale des chartes during the 2016-2017 academic year.
Slides from the presentation given at Talend Connect 2016 on the data-oriented strategy deployed at the Institut national de l'audiovisuel (Ina). To learn more, you can read this blog post: http://www.lespetitescases.net/comment-mettre-la-donnee-au-coeur-du-si
Web technologies applied to structured data (Part 1: Enc...) - Gautier Poupeau
Slides from the presentation given at the INRIA IST seminar "Le document à l'heure du Web de données" (Carnac, October 1-5, 2012) together with Emmanuelle Bermès (aka figoblog).
Web technologies applied to structured data (Part 2: Rel...) - Gautier Poupeau
Slides from the presentation given at the INRIA IST seminar "Le document à l'heure du Web de données" (Carnac, October 1-5, 2012) together with Emmanuelle Bermès (aka figoblog).
Information professionals facing the challenges of the Web of data - Gautier Poupeau
Slides for a talk given at the ADBS-EDB study day, "Quel Web demain ?", April 7, 2009, http://www.adbs.fr/quel-web-demain--57415.htm
How to use indexes to highlight social networks in historical digital corpora?
Presentation at Digital Humanities, July 6, 2006 (Paris).
Beware, it is a bit dated...
As Europe's leading economic powerhouse and the fourth-largest economy globally, Germany stands at the forefront of innovation and industrial might. Renowned for its precision engineering and high-tech sectors, Germany's economic structure is heavily supported by a robust service industry, accounting for approximately 68% of its GDP. This economic clout and strategic geopolitical stance position Germany as a focal point in the global cyber threat landscape.
In the face of escalating global tensions, particularly those emanating from geopolitical disputes with nations like Russia and China, Germany has witnessed a significant uptick in targeted cyber operations. Our analysis indicates a marked increase in the sophistication of cyberattacks aimed at critical infrastructure and key industrial sectors. These attacks range from ransomware campaigns to advanced persistent threats (APTs), threatening national security and business integrity.
🔑 Key findings include:
🔍 Increased frequency and complexity of cyber threats.
🔍 Escalation of state-sponsored and criminally motivated cyber operations.
🔍 Active dark web exchanges of malicious tools and tactics.
Our comprehensive report delves into these challenges, using a blend of open-source and proprietary data collection techniques. By monitoring activity on critical networks and analyzing attack patterns, our team provides a detailed overview of the threats facing German entities.
This report aims to equip stakeholders across public and private sectors with the knowledge to enhance their defensive strategies, reduce exposure to cyber risks, and reinforce Germany's resilience against cyber threats.
Data Centers - Striving Within A Narrow Range - Research Report - MCG - May 2... - pchutichetpong
M Capital Group ("MCG") expects demand to grow and supply to evolve, facilitated by institutional investment rotating out of offices and into work-from-home ("WFH") arrangements, while the need for data storage keeps expanding as global internet usage grows, with experts predicting 5.3 billion users by 2023. These market factors will be underpinned by technological changes, such as progressing cloud services and edge sites, allowing the industry to see strong expected annual growth of 13% over the next 4 years.
Whilst competitive headwinds remain, represented through the recent second bankruptcy filing of Sungard, which blames “COVID-19 and other macroeconomic trends including delayed customer spending decisions, insourcing and reductions in IT spending, energy inflation and reduction in demand for certain services”, the industry has seen key adjustments, where MCG believes that engineering cost management and technological innovation will be paramount to success.
MCG reports that the more favorable market conditions expected over the next few years, helped by the winding down of pandemic restrictions and a hybrid working environment will be driving market momentum forward. The continuous injection of capital by alternative investment firms, as well as the growing infrastructural investment from cloud service providers and social media companies, whose revenues are expected to grow over 3.6x larger by value in 2026, will likely help propel center provision and innovation. These factors paint a promising picture for the industry players that offset rising input costs and adapt to new technologies.
According to M Capital Group: “Specifically, the long-term cost-saving opportunities available from the rise of remote managing will likely aid value growth for the industry. Through margin optimization and further availability of capital for reinvestment, strong players will maintain their competitive foothold, while weaker players exit the market to balance supply and demand.”
Why I don't use Semantic Web technologies anymore, even if they still influence me?
1. Why I don't use Semantic Web technologies anymore, even if they still influence me?
12th December 2019
Linked Pasts, Bordeaux
Gautier Poupeau
gautier.poupeau@gmail.com
@lespetitescases
http://www.lespetitescases.net
10. SPAR Architecture
The system strictly follows the principles of the OAIS model (Open Archival Information System), including in its architecture.
(Diagram: Producer and User interacting with the SPAR system.)
11. How to store and query metadata?
• A powerful query language, accessible to non-IT staff
• Flexibility to describe all the data and to query them without any preconceived idea
• A standard, independent of any software implementation
RDF model and SPARQL Query Language
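As a flavor of the kind of query SPARQL enables over such metadata (the prefix and property names below are generic Dublin Core terms, not SPAR's actual vocabulary):

```sparql
PREFIX dcterms: <http://purl.org/dc/terms/>

# Find every item ingested after 2010 together with its format,
# without needing to know the full schema in advance.
SELECT ?item ?format WHERE {
  ?item dcterms:issued ?date ;
        dcterms:format ?format .
  FILTER (?date > "2010-01-01"^^<http://www.w3.org/2001/XMLSchema#date>)
}
```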
12. How is metadata handled within SPAR?
Step 1: Ingest of the digital item
• Update manager: type detection of the update and automatic merge
• Control and audit
• Enrichment, customizable for the different types of digital item
• Reference data: vocabularies, formats, agents, Service Level Agreement
Result: a set of files compliant with the SLA, and all the metadata useful to manage the files for the long term
Step 2: Inventory
• Storage and indexation of the digital item in the repository
14. Metadata repositories in SPAR
• Complete repository: all master data; all metadata from the METS manifest
• Selective repository: all master data; a choice of metadata from the METS manifest (rules define what is stored in the Selective repository)
• Master data repository: all master data
To fix performance issues, we had to adapt our architecture…
15. Outcome of this project
• Performance issues
• Flexibility
• System still in place
• BnF remains convinced of this choice
17. What is Isidore?
http://isidore.science
• Managed by TGIR Huma-Num
• 6,445 data sources
• 6 million resources indexed in French, English, and Spanish
• Use of vocabularies
• Enrichment of resources: automatic annotation, classification, attribution of normalized identifiers
21. Make Isidore data available
A cycle designed to allow positive feedback:
data publication by producers → harvesting by Isidore → enrichment by Isidore → data publication by Isidore → retrieval by producers → processing by producers → data publication by producers…
22. Outcome of this project
• Complexity issues
• Knowledge issues
• Appropriation by the community
• The project is an example
"We mostly get in touch with the researchers when things go wrong with the data. And it often goes wrong for several reasons. But, indeed, there was the question of these standards giving the researchers a hard time [...] they tell us: but why don't you just use CSV rather than bother with your semantic web business?" Raphaëlle Lapotre, product manager, data.bnf.fr
23. FROM MASHUPS TO LINKED ENTERPRISE DATA
Breaking silos, linking and bringing consistency to heterogeneous data
24. Data mashup
"The real power of the Semantic Web will be realized when people create many programs that collect Web content from diverse sources, process the information and exchange the results with other programs."
Tim Berners-Lee, Ora Lassila, James Hendler, "The Semantic Web", Scientific American, 2001
26. Architecture of the historical monuments mashup
(Diagram: a main source and complementary sources, plus a geolocation web service, feed AIF (normalization and enrichment), then the AFS search engine, which powers the Monuments Historiques application.)
28. Architecture before the LED project
(Diagram: the silos feed the web sites directly. The SQL Server DBMS holds structured data (best sales, buzz, awards, reserved titles, events) and the professional directory (publishers, distributors, managers); the Quark XPress CMS and FileMaker DBMS hold editorial content (articles, visuals). These feed the Livres Hebdo.fr web site (best sales, media relays, events, articles (web), blog posts, visuals, documents), the Electre.com web site (books, authors, publishers, articles (reviews), best sales, media relays, awards, reserved titles, events, directory), and print (articles (print), authors, books).)
29. Architecture with LED
(Diagram: the same sources as before (SQL Server DBMS with structured data and the professional directory; Quark XPress CMS and FileMaker DBMS with editorial content), plus other internal sources (works) and other external sources (free or paid model), now feed an RDF data warehouse that transforms, aggregates, links, and annotates the data. The RDF DW serves the Livres Hebdo.fr and Electre.com web sites as well as new services and new customers.)
30. Outcome of this project
• Scalability issues
• Complexity/update issues
• Skills issues
• Maintainability issues
• Cost issues
• All data are linked and consistent
• Flexibility to manipulate RDF data
32. The flexibility of the graph model
Benefits and limits of Semantic Web technologies
• RDF graph = absolute freedom compared with the rigidity of relational databases
• Easy linking of heterogeneous entities
• The graph can evolve over time and its growth is potentially infinite
• Maintainability issues
• Model issues
33. The flexibility of the graph model
RDF vs property graph
• The RDF model is based on triples: subject-predicate-object.
• Property graphs are based on nodes, edges, and properties attached to nodes or edges.
34. The flexibility of the graph model
Beyond the limits: reconciliation between RDF and property graphs? RDF* / SPARQL*
Example of RDF*:
<<:bob foaf:age 23>> ex:certainty 0.9 .
Example of SPARQL*:
SELECT ?p ?a ?c WHERE {
  <<?p foaf:age ?a>> ex:certainty ?c .
}
Do you really need the RDF model to store data?
35. Data dissemination / Interoperability / Decentralisation
Contributions and limits of Semantic Web technologies
• Best solution to achieve interoperability of data
• Linking heterogeneous data: creating bridges between worlds impossible to reconcile otherwise
• SPARQL as a powerful tool for querying data
• Asynchronous data retrieval
• Costs of maintainability
• Knowledge issues
• Full-text search not possible
• Structural interoperability / impossible data mappings
36. Data dissemination / Interoperability / Decentralisation
Overcoming the limits
• Easy-to-use ontologies
• Simple CSV or JSON/XML dumps
• Simple APIs
What are the possible uses? Who are the users? Do we need this level of interoperability?
38. Functionally separate data from their use
• Rethink data models in relation to their own logic and not their use
• Acknowledge that some data models are dedicated to production and storage while several other models are designed specifically for data dissemination
39. Technically separate data from their use
• The information system is organized in layers, no longer in silos
• The storage and processing of data are separated from business applications
40. An infrastructure to store and process data
• 4 types of database system to store all types of data and to address all types of usage
• A process module to interact with the data and synchronize it between the different databases
• A management module to abstract the technical infrastructure and expose logical data to business applications
41. Thank you for your attention!
Do you have any questions?
And sorry for this…
I would like to thank Emmanuelle Bermès (@figoblog) very much for the translation of this keynote!