This document discusses the development of a semantic wiki farm called UfoWiki. UfoWiki aims to provide a user-friendly interface for creating and managing ontology instances within wikis. It uses forms mapped to ontologies to create instances without requiring knowledge of semantic technologies. UfoWiki also links wiki page metadata and content to external datasets using standards like SIOC and MOAT to integrate data within and across wikis and provide enhanced search and querying capabilities. The goal is to unlock structured data within wikis for advanced usage while maintaining usability for non-technical users.
The document discusses modeling social media data using Artefact-Actor-Networks, which combine social networks and artefact networks. Data is obtained from various social media tools and stored using semantic relations between actors and artefacts. The networks are then analyzed and applied to applications like analyzing student social media use. Performance issues are noted when working with large real-world datasets.
Bio2RDF converts over 40 life science databases with over 30 billion triples into semantic web technologies to support biological discovery. It provides interlinked data through SPARQL endpoints in various locations. The presentation discusses Bio2RDF's methodology for converting, providing, and enabling reuse of data based on linked open data principles in order to encourage original data providers to directly publish RDF and link to other data sources.
Sören Auer - LOD2 - Creating Knowledge out of Interlinked Data - Open City Foundation
The document discusses the LOD2 project which aims to create knowledge from interlinked open data. It focuses on very large RDF data management, knowledge enrichment through interlinking data from different sources, and developing semantic user interfaces. The project uses use cases in media, enterprise, open government data, and public sector contracts. The goal is to develop an integrated Linked Data lifecycle management stack.
The Digital Pompidou Centre project aims to create a new website for the Centre Pompidou using semantic web and linked data principles. This will replace the current website and create a central digital library. The project involves linking cultural data from the museum, libraries, and archives into a unified data model. Key challenges include improving scalability, updating data daily, and gaining institutional support for opening the data.
The document provides an overview of metadata and how it can be used. It discusses different types of metadata including structural, administrative, and descriptive metadata. It also covers how to create metadata by determining content types and attributes, and identifying functionality. Standards like Dublin Core, RDF/RDFa and Schema.org are examined as sources for metadata fields. The workshop teaches best practices for applying metadata to improve search, browsing and other functions.
Unwrapping Tumblr for Writers (Advertising & PR at Marquette University) - Mykl Novak
I unwrapped Tumblr with ADPR 2200: Media Writing at Marquette University. This presentation introduces Tumblr, describes posting and engaging on Tumblr and offers 10 tips for writers.
Wikipedia as a Time Machine - WWW 2014 TempWeb Workshop Presentation - stewhir
An overview of using Wikipedia time signal data. These are the slides for the TempWeb workshop paper: http://www.stewh.com/wp-content/uploads/2014/02/w14temp07-whiting.pdf
This document provides information on information literacy and internet research skills for Wikipedia editors. It discusses searching for reliable sources, evaluating primary, secondary and tertiary sources, using sources ethically by citing and avoiding plagiarism, and sharing knowledge on Wikipedia. Search engines like Google are best for initial research but have limitations. Libraries provide published sources that may be inaccessible or expensive otherwise. Digital libraries and specialized websites supplement search engine research. Proper attribution of sources is important.
Better Manufacturing Work Instructions and Technical Documentation with Dozuki - Dozuki Software
The approach to work instructions and standardized procedures (SOPs) used in manufacturing hasn't changed in decades – until now. Dozuki's software tool makes it easy to create, manage, and distribute documentation to your line operators and technicians, wherever they may be. Use mobile devices or any internet connected computer to retrieve or modify your documentation.
The next big breakthrough in manufacturing operations is here:
http://www.dozuki.com/solutions/electronic-work-instructions/
The document provides instructions for creating a wiki using the Wetpaint platform. It outlines selection criteria to consider when choosing a wiki platform, and gives steps to plan content, invite collaborators, and add discussion forums. Contact information is provided for help setting up a new wiki.
The document discusses several Gestalt principles and laws including similarity, proximity, closure, continuity, common fate, symmetry, emergence, reification, invariance, and multi-stability. It provides examples of how each principle or law is demonstrated on web pages through the grouping of images, text, links, and other design elements.
Semantic networks are a knowledge representation technique where concepts are represented as nodes in a graph, and relationships between concepts are represented as links between nodes. There are different types of semantic networks, including definitional networks that emphasize subclass relationships, assertional networks for making propositions, and executable networks that can change based on operations. Common semantic relations include IS-A for subclasses, INSTANCE for examples, and HAS-PART for components. While semantic networks provide a natural representation of relationships, they have disadvantages like lack of standard link names and difficulty representing some logical constructs.
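The node-and-link model described above can be sketched in a few lines of Python. This is a minimal illustration, not any particular system's implementation; the concept names and relation labels (IS-A, INSTANCE, HAS-PART) are the classic textbook examples.

```python
# A minimal semantic network: concepts are nodes, relations are labeled edges.
class SemanticNetwork:
    def __init__(self):
        self.edges = []  # (source, relation, target) links

    def add(self, source, relation, target):
        self.edges.append((source, relation, target))

    def targets(self, source, relation):
        """All nodes reachable from `source` via a single `relation` link."""
        return [t for s, r, t in self.edges if s == source and r == relation]

    def isa_chain(self, node):
        """Follow IS-A links transitively to collect all superclasses."""
        supers, frontier = [], [node]
        while frontier:
            current = frontier.pop()
            for parent in self.targets(current, "IS-A"):
                if parent not in supers:
                    supers.append(parent)
                    frontier.append(parent)
        return supers

net = SemanticNetwork()
net.add("Canary", "IS-A", "Bird")
net.add("Bird", "IS-A", "Animal")
net.add("Bird", "HAS-PART", "Wing")
net.add("Tweety", "INSTANCE", "Canary")

print(net.isa_chain("Canary"))          # transitive superclasses of Canary
print(net.targets("Bird", "HAS-PART"))  # components of Bird
```

Note how the transitive IS-A walk gives the "natural representation of relationships" the blurb mentions, while the flat string labels illustrate its weakness: nothing enforces a standard vocabulary of link names.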
This qualitative overview of Open Health Data initiatives is meant to showcase the importance of open health data and its social and economic impacts across the US, the UK, and a select set of Western European countries. It is not meant to be a comprehensive report on all global initiatives, funding models, and tracking of open health data. There are tremendous efforts across the globe to change our global healthcare system, and we believe that open health data is one of the keys to bridging the gap between digital citizens and governments. Please note that if your country, initiative, or product was not mentioned, that is in no way meant to diminish the impact of those efforts. Feel free to share, discuss, and contribute to the list of ongoing efforts and initiatives in one of our global communities or on openhealthdata.org.
The fifth presentation in the series Political Ideologies. It is suitable for History and International Relations from Year 9 to university level. It covers: Marx, Das Kapital, the Communist Manifesto, dialectical materialism, socialism, forms of Marxism, classical Marxism, the utopians, Hegel, mode of production, Hegel's thesis, the Hegelian dialectic, Marx's theory of history, stages of Marxism, communism, the classless society, class conflict, exploitation, capitalism, the proletariat, the proletarian revolution, orthodox communism, Marxism, Leninism, Stalinism, reification, and the Frankfurt School.
This presentation gives a brief overview on achievements and challenges of the Data Web and describes different aspects of using the Semantic Data Wiki OntoWiki for Linked Data management.
The document discusses metadata and semantic web technologies. It provides an example of using RDFa to embed metadata in a web page about a book. It also shows how schema.org, microformats, and microdata can be used to add structured metadata. Finally, it discusses linked data and how semantic web technologies allow sharing and linking data on the web.
This document discusses trends driving the emergence of NoSQL databases and provides an overview of NoSQL. The key trends include: (1) rapidly increasing data set sizes, (2) greater connectivity of data, and (3) more semi-structured and decentralized content. These trends have challenged the performance and architecture of traditional relational databases. NoSQL databases emerged in response and come in four main categories: key-value stores, BigTable clones, document databases, and graph databases. Each has a different data model suited to different types of use cases. Looking ahead, the best approach may be "polyglot persistence" using both SQL and NoSQL solutions.
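Two of the four NoSQL categories named above — key-value stores and document databases — can be sketched with plain Python dicts. This is an illustrative toy, assuming nothing about any real product; production systems add persistence, indexing, sharding, and replication.

```python
# Key-value store: opaque values addressed by key only; no queries over values.
kv_store = {}
kv_store["session:42"] = b"serialized-blob"

# Document store: schema-free nested records, queryable by field.
documents = [
    {"_id": 1, "type": "book", "title": "Linked Data", "tags": ["rdf", "web"]},
    {"_id": 2, "type": "book", "title": "Graph Databases"},  # no tags field: fine
]

def find(collection, **criteria):
    """Return documents whose fields match all of the given criteria."""
    return [d for d in collection
            if all(d.get(k) == v for k, v in criteria.items())]

print(find(documents, type="book", title="Graph Databases"))
```

The contrast with a relational table is visible in the second document: a missing `tags` field is simply absent rather than NULL in a fixed schema, which is what makes these models suit the semi-structured content the trends describe.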
This document discusses using semantic wikis to reduce the steep learning curve in developing semantic web applications. It presents a semantic wiki called Towards Social Webtops that allows for easy publishing, smart data propagation, fast prototyping in the browser, and lightweight concept modeling. The semantic wiki is demonstrated at http://tw.rpi.edu/wiki and includes applications like an RPI map and events calendar, a wine wiki, group information management, and an ontology repository. It addresses challenges in data organization, sharing, personalization, privacy, and provenance through features like RDF modeling, relational modeling, rules, semantic templates and forms, annotation extensions, and remote querying of multiple wikis.
Enabling cross-wikis integration by extending the SIOC ontology - Fabrizio Orlandi
This document discusses enabling cross-wiki integration by extending the SIOC (Semantically-Interlinked Online Communities) ontology. It presents an approach to represent wiki structures and social interactions in a unified way using SIOC. An exporter was developed to translate MediaWiki pages into SIOC data following Linked Data principles. Querying this integrated data across wikis and other social platforms was demonstrated. Further work is needed to develop exporters for other wiki platforms and improve modeling of wiki page content and versioning systems.
1) Ontologies play a key role in semantic digital libraries by supporting bibliographic descriptions, extensible resource structures, and community-aware features.
2) Semantic digital libraries integrate information from various metadata sources and provide interoperability between systems using semantics.
3) Key ontologies for digital libraries include bibliographic ontologies, structure description ontologies, and community-aware ontologies that model folksonomies and social semantic collaborative filtering.
SemSearch09 workshop at WWW2009, April 21st, 2009 - http://km.aifb.uni-karlsruhe.de/ws/semsearch09/ - Paper available at: http://km.aifb.uni-karlsruhe.de/ws/semsearch09/semse2009_25.pdf
The document discusses combining Web 2.0 and Semantic Web approaches in a reviewing site called Revyu. It argues that Web 2.0 data is difficult to reuse and interlink due to siloed formats, while the Semantic Web lacks easy interfaces for non-experts. Revyu allows regular users to create reviews through simple forms, while publishing and integrating the data as linked RDF for other applications to reuse at scale.
The document describes DBpedia, a project that extracts structured data from Wikipedia and makes it available on the Web. DBpedia has extracted over 2.6 million entities from Wikipedia and defined web-dereferenceable identifiers for each. As DBpedia covers many domains, other data sources on the Web have begun linking to DBpedia resources, making DBpedia a central hub. This has resulted in a Web of over 4.7 billion interlinked pieces of data across various domains.
The document discusses the Semantic Web, which aims to extend the current web by giving information well-defined meaning so that computers and people can better cooperate. It was proposed by Tim Berners-Lee as a way to make data on the web more machine-readable. Key components that enable the Semantic Web include RDF, OWL, SPARQL, and linked data. RDF in particular allows structured descriptions of resources through subject-predicate-object triples that can be connected to form graphs. This allows semantic content to be included in web pages and facilitates searching and sharing of information across the web.
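The subject-predicate-object model and the way SPARQL queries it can be sketched concretely. The triples below are illustrative (URIs are shortened with a fake `ex:` prefix; real RDF uses full IRIs and typed literals), and the `match` function plays the role of a single SPARQL triple pattern, with `None` standing in for a variable.

```python
# Subject-predicate-object triples forming a small graph.
triples = {
    ("ex:TimBL", "ex:proposed", "ex:SemanticWeb"),
    ("ex:SemanticWeb", "ex:uses", "ex:RDF"),
    ("ex:SemanticWeb", "ex:uses", "ex:OWL"),
    ("ex:SemanticWeb", "ex:uses", "ex:SPARQL"),
}

def match(s=None, p=None, o=None):
    """Triple-pattern matching: None acts as a wildcard, like a SPARQL variable."""
    return {t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)}

# "Which technologies does the Semantic Web use?"
# (roughly: SELECT ?o WHERE { ex:SemanticWeb ex:uses ?o })
print(sorted(o for _, _, o in match(s="ex:SemanticWeb", p="ex:uses")))
```

Because triples from different sources share the same shape, two such sets can be merged with a plain set union — which is the mechanism behind the "connected to form graphs" claim in the summary.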
RDF Linked Data - Automatic Exchange of BIM Containers - Safe Software
This presentation tells the story of a Dutch utility company and its FME solutions for the automatic exchange of data containers holding RDF Linked Data, BIM models, and documents.
The presentation focuses on the non-traditional representation of RDF Linked Data and how it integrates with FME through SPARQL, Apache Jena, and a few custom-built transformers in FME.
This FME solution also uses my Excel switch-based method of directing the data flow (my presentation during the FME World Fair).
Graph databases are well suited for complex, interconnected data. Neo4j is a graph database that represents data as nodes connected by relationships. It allows for complex queries and traversals of graph structures. Unlike relational databases, graph databases can directly model real world networks and relationships without needing to flatten the data.
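The "nodes connected by relationships" model can be sketched with an adjacency list and a typed traversal. This is a toy under stated assumptions — the node names and relationship types are invented, and Neo4j itself would express the same query declaratively in Cypher rather than with hand-written loops.

```python
from collections import defaultdict

graph = defaultdict(list)   # node -> [(relationship_type, neighbour)]

def relate(a, rel, b):
    graph[a].append((rel, b))

relate("Alice", "KNOWS", "Bob")
relate("Bob", "KNOWS", "Carol")
relate("Bob", "WORKS_AT", "Acme")

def traverse(start, rel, depth):
    """Follow one relationship type outward to a given depth -
    a friends-of-friends query without flattening into join tables."""
    found, frontier = set(), {start}
    for _ in range(depth):
        frontier = {b for node in frontier
                    for r, b in graph[node] if r == rel}
        found |= frontier
    return found

print(traverse("Alice", "KNOWS", 2))  # Alice's KNOWS-network to depth 2
```

In a relational database the same question needs a self-join per hop; here each hop is a direct pointer-chase over the stored relationships, which is the advantage the summary is pointing at.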
There has been plenty of hype around the Semantic Web, but will we ever see the vision of intelligent agents working on our behalf? This talk introduces the concepts of the Semantic Web as envisioned by Tim Berners-Lee over 10 years ago and compares that vision to where we have come since then. It includes a discussion of implementations such as XML, RDF, OWL (Web Ontology Language), and SPARQL. After reviewing the design principles and enabling technologies, I plan to show how these techniques can be implemented in WebGUI.
Applying and Extending Semantic Wikis for Semantic Web Courses - Alicia Buske
This document describes using semantic wikis for distance learning courses on the Semantic Web. It discusses using Semantic MediaWiki, along with extensions like Semantic Forms and Semantic Friendly Forms, to build ontologies and populate them with data. Students in Master's and Bachelor's courses at Open University develop Semantic Web systems on these semantic wikis as assignments. The document also introduces a new "OWL Wiki Forms" extension for improved ontology editing capabilities.
These are the slides I used in a talk about a Semantic Web use case. Not everyone knows what exactly the Semantic Web is about, so I created a set of slides explaining it in a simple and accurate way. The use-case slides have been removed from this publicly available version. Animated version here: goo.gl/qKoF6k. Contact me for sources!
The document discusses using linked data in Semantic MediaWiki to visualize biological data from multiple sources. It summarizes creating dynamic wiki pages for tens of thousands of gene and disease instances by importing RDF triples and using templates to display the data visually through various charting libraries. Examples of instance pages with interactive charts are provided from the joint Vulcan-Allen Institute Neurowiki project mapping genetic data.
Usage of Linked Data: Introduction and Application Scenarios - EUCLID project
This presentation introduces the main principles of Linked Data, the underlying technologies and background standards. It provides basic knowledge for how data can be published over the Web, how it can be queried, and what are the possible use cases and benefits. As an example, we use the development of a music portal (based on the MusicBrainz dataset), which facilitates access to a wide range of information and multimedia resources relating to music.
Producing, publishing and consuming linked data - CSHALS 2013 - François Belleau
This document discusses lessons learned from the Bio2RDF project for producing, publishing, and consuming linked data. It outlines three key lessons: 1) How to efficiently produce RDF using existing ETL tools like Talend to transform data formats into RDF triples; 2) How to publish linked data by designing URI patterns, offering SPARQL endpoints and associated tools, and registering data in public registries; 3) How to consume SPARQL endpoints by building semantic mashups using workflows to integrate data from multiple endpoints and then querying the mashup to answer questions.
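Lesson (1) — producing RDF by transforming an existing data format with a designed URI pattern — can be sketched as a tiny ETL step. This is an illustration only: the URI scheme, column names, and predicates below are invented for the example, and Bio2RDF does this at scale with tools like Talend rather than hand-rolled scripts.

```python
import csv
import io

# Stand-in for a tabular life-science source file.
source = io.StringIO("id,name,organism\nP12345,Kinase A,human\n")

def row_to_triples(row, base="http://example.org/protein:"):
    """Mint a subject URI from a namespace plus local id, then emit triples."""
    subject = base + row["id"]          # the "URI pattern" design decision
    return [
        (subject, "rdfs:label", row["name"]),
        (subject, "ex:organism", row["organism"]),
    ]

triples = [t for row in csv.DictReader(source) for t in row_to_triples(row)]
for s, p, o in triples:
    print(f'<{s}> {p} "{o}" .')        # N-Triples-like serialization
```

The point of the minted URI is lesson (2): once every record has a stable, dereferenceable identifier, it can be loaded behind a SPARQL endpoint and joined against other endpoints in the mashup step of lesson (3).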
This document summarizes lessons learned from developing semantic wikis. It discusses how semantic wikis differ from traditional wikis by embedding structured metadata and propagating that metadata via semantic queries. It then outlines key features for different user groups, including improved data generation and propagation tools for end users, and light-weight data modeling and fast prototyping for developers. Remaining issues are also discussed, such as managing public and personal data, improving scalability, and data portability and protection across multiple wikis.
Using Semantic Wiki as a Semantic Web Workbench - Jie Bao
Semantic wiki allows for fast prototyping and lightweight concept modeling for application developers. It provides an application workbench built on a wiki interface, RDF triplestore, and extensions. For end users, it offers easy publishing of structured data and smart data propagation between applications and the wiki. Example applications built on the semantic wiki platform include a semantic map, blog, calendar, and literature repository.
Similar to Towards an Interlinked Semantic Wiki Farm
Semwebbers, LODers, what PubSubHubbub can do for you (SemTech) - Alexandre Passant
This document discusses how PubSubHubbub (PuSH) can enable real-time notifications of changes on the semantic web and linked data through the use of SPARQL queries. It presents sparqlPuSH, which allows clients to register SPARQL queries and receive notifications of updates through PuSH. SMOB is also introduced as a distributed microblogging platform that uses PuSH to broadcast SPARQL update queries. Twarql is described as a system that extracts entities from Twitter using SPARQL and PuSH.
Dr. Alexandre Passant gave a lightning talk at SemTech2011 in San Francisco on June 6th 2011 about Seevl, a system he created that uses semantic web technologies to provide unique ways of exploring music and connections between artists. Seevl interlinks music data using RDF, ontologies like FOAF and SKOS, stores the data in a graph database, and generates recommendations and search results through SPARQL queries. The system aims to help users find new artists, understand relationships between bands, and explore their cultural and musical universe by going beyond just sounds.
The document discusses the social web and semantic web. It provides examples of the growth of social media like 700 billion minutes spent on Facebook per month and 90 million tweets per day. It also discusses technologies related to the semantic web like RDF, SPARQL, ontologies and vocabularies like FOAF and SIOC. It proposes a vision of a social semantic web that combines semantic web technologies and online communities to link people and data on the web.
The document describes dbrec, a music recommendation system that uses DBpedia. It computes semantic distance over linked data to provide recommendations. It has an ontology, dataset of over 1 million triples, and user interface. An evaluation was conducted against Last.fm, finding dbrec recommendations were unknown 62% of the time but received average ratings of 3.37. Lessons included issues with DBpedia data quality and scaling SPARQL queries against large datasets.
Semwebbers, LODers: What PubSubHubbub can do for you Alexandre Passant
This document discusses PubSubHubbub (PuSH) and how it can be used to provide notifications of changes to structured data defined by SPARQL queries. It introduces sparqlPuSH, which combines SPARQL, SPARQL Update and PuSH to enable proactive notifications when relevant data changes in an RDF store. Queries can be registered to define which changes to track. When data is updated, sparqlPuSH identifies relevant changes and broadcasts notifications using PuSH. This allows applications to receive real-time updates without polling.
The document discusses the potential for a "Social Semantic Web" where Semantic Web technologies could support online communities generated by social networks, and those communities could also contribute semantic data by connecting information. It notes that while the Social Web and Semantic Web have seen success individually, fully realizing their integration as a Social Semantic Web has yet to be achieved. Challenges include getting users to adopt shared ontologies and developing systems that are ubiquitous, real-time, and proactive.
The document introduces SMOB, a framework for semantic microblogging. SMOB addresses issues with current microblogging systems by using a distributed architecture, common representation formats through an ontology stack, meaningful tags through integration with Linked Open Data, and customized information delivery through sharing spaces. It reuses popular models like FOAF and SIOC and models posts in RDFa. SMOB also describes ongoing work on sparqlPuSH for real-time synchronization and Twarql for LOD interlinking from Twitter data.
A semantic framework for modelling quotes in email conversationsAlexandre Passant
The document proposes a semantic framework for modeling quotes in email conversations. It involves extending the SIOC (Semantically-Interlinked Online Communities) ontology to capture quoting patterns observed in analyzing mailing list archives. Specifically, a new "quotes" ontology is defined to represent quoted text using the "quotes:has_block" property, allowing quoted portions of messages to be explicitly linked back to the original text. This enhanced model allows quotes extracted from email threads to be represented and queried in RDF.
Hey! Ho! Let’s go! Explanatory music recommendations with dbrecAlexandre Passant
The document describes a demonstration of dbrec, a self-explanatory music recommendation system that uses Linked Data and the Linked Data Semantic Distance (LDSD) to provide recommendations for over 39,000 musical artists and bands from DBpedia. It computes recommendations by calculating the LDSD between artists modeled in an LDSD ontology and displays results through a SPARQL query-based user interface using RDFa.
sparqlPuSH: Proactive notification of data updates in RDF stores using PubSub...Alexandre Passant
Presentation @ SFSW2010 (ESWC2010 Workshop). Paper available at semanticscripting.org/SFSW2010/papers/sfsw2010_submission_6.pdf + video at http://apassant.net/blog/2010/04/18/sparql-pubsubhubbub-sparqlpush#comments
Using Ontologies to Strengthen Folksonomies and Enrich Information Retrieval ...Alexandre Passant
The document discusses using ontologies to strengthen folksonomies and improve information retrieval from blogs. It proposes linking tags that users assign to blog posts to concepts in a domain ontology. This reduces issues with tag ambiguity and variation. The system then uses the semantic connections between tags and ontology concepts to suggest related tags and posts to users based on their searches. Querying the enriched data in this way allows for more precise searching and discovery of relevant information in blogs.
This document provides an overview of a lecture on social media and the social web. It discusses concepts like blogs, wikis, media sharing, online social networking, microblogging and related topics. Examples of popular social media platforms are given for each concept, like Wikipedia for wikis, Flickr and YouTube for media sharing, Facebook and Twitter for social networking. The document also provides guidance on how to use some of these platforms.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
GraphRAG for Life Science to increase LLM accuracyTomaz Bratanic
GraphRAG for life science domain, where you retriever information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?Speck&Tech
ABSTRACT: A prima vista, un mattoncino Lego e la backdoor XZ potrebbero avere in comune il fatto di essere entrambi blocchi di costruzione, o dipendenze di progetti creativi e software. La realtà è che un mattoncino Lego e il caso della backdoor XZ hanno molto di più di tutto ciò in comune.
Partecipate alla presentazione per immergervi in una storia di interoperabilità, standard e formati aperti, per poi discutere del ruolo importante che i contributori hanno in una comunità open source sostenibile.
BIO: Sostenitrice del software libero e dei formati standard e aperti. È stata un membro attivo dei progetti Fedora e openSUSE e ha co-fondato l'Associazione LibreItalia dove è stata coinvolta in diversi eventi, migrazioni e formazione relativi a LibreOffice. In precedenza ha lavorato a migrazioni e corsi di formazione su LibreOffice per diverse amministrazioni pubbliche e privati. Da gennaio 2020 lavora in SUSE come Software Release Engineer per Uyuni e SUSE Manager e quando non segue la sua passione per i computer e per Geeko coltiva la sua curiosità per l'astronomia (da cui deriva il suo nickname deneb_alpha).
Infrastructure Challenges in Scaling RAG with Custom AI modelsZilliz
Building Retrieval-Augmented Generation (RAG) systems with open-source and custom AI models is a complex task. This talk explores the challenges in productionizing RAG systems, including retrieval performance, response synthesis, and evaluation. We’ll discuss how to leverage open-source models like text embeddings, language models, and custom fine-tuned models to enhance RAG performance. Additionally, we’ll cover how BentoML can help orchestrate and scale these AI components efficiently, ensuring seamless deployment and management of RAG systems in the cloud.
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Full-RAG: A modern architecture for hyper-personalizationZilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications.He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...
Towards an Interlinked Semantic Wiki Farm
1. Towards an Interlinked
Semantic Wiki Farm
SemWiki2008 - ESWC
June 2, 2008
Alexandre Passant1,2, Philippe Laublet1
1 LaLIC, Université Paris-Sorbonne
2 EDF, Recherche & Développement
2. Social software at EDF R&D
• Electricité de France R&D
• The largest electricity company in France
• More than 2000 researchers in its R&D department
• Lots of different people: chemists, trading experts, computer
scientists ...
• Informal and closed-world communities
• “Knowledge = Power”, due to its cultural history
• Enterprise 2.0 and EDF R&D
• “Enterprise 2.0 is the use of emergent social software platforms
within companies, or between companies and their partners or
customers”, Andrew McAfee, May 2006
• Find ways to let people more easily exchange information and build
knowledge collaboratively
• Introducing new tools and principles
• Blogs, RSS feeds, tagging, wikis
• Top-down vs bottom-up approach
3. Using Wikis at EDF R&D
• Users adopted wikis for their particular needs, as people do
on the Web
• Internal project management
• Corporate information
• Knowledge bases on scientific topics
• ...
• Usage statistics
• More than 1000 registered users for the whole platform
• About 50 wikis, 2800 pages, 60 active users
• Different wikis as some people want “their” wiki
• Closed, read-only and open wikis, depending on the communities
• 10% ratio between consumers and producers (blogs included)
• Many people were not used to those Web 2.0 principles
• Both technically and regarding the cultural changes it implies
regarding knowledge management
4. Technical limits of wikis for knowledge management
• Wiki search engines can only answer plain-text queries
• Natural Language Processing algorithms must be applied to extract
information from current wiki systems
• Cannot answer questions about the content of wiki pages
• “Is EDF located in France ?”
• “List all companies known in that wiki”
• “Who’s working on tidal energies ?”
• Wikis manage documents, not machine-readable
representations of real-world objects
• Documents and hyperlinks instead of resources, relationships and
properties
• A gap between these two ways of modeling knowledge
• The Semantic Web bridges this gap and semantic wikis provide
ways to achieve this
6. Existing semantic wikis
• Using wikis to model data using Semantic Web principles
• Ontology population and instances evolution
• Adding RDF into wiki pages
• SemperWiki
• Extending wiki syntax to define annotations
• Semantic MediaWiki
• Assisting users with user-friendly interfaces
• IkeWiki, OntoWiki
• Using Semantic Web principles to enrich the usage of wikis
• Organizing tags to enhance information retrieval
• SweetWiki
• Powerful (querying, reasoning, enhanced navigation ...), but
raise usability issues in a corporate context
• URIs, namespaces, triples...
• People need something that works without additional effort
7. UfoWiki: Unifying Forms and Ontologies in a Wiki
• A semantic wiki-farm server
• Goals
• Provide a user-friendly interface to let users create ontology
instances and manage them in the wiki way
• Hidden semantics for end-users, using forms
• Use SIOC and MOAT to model wiki pages meta-data, so that it can
be integrated with other internal SIOC data
• A corporate SIOC-o-sphere
• Connect the meta-data layer to the data (i.e. content) layer
• Who wrote facts about EDF ?
• Reuse RDF data available on the Web
• Geonames.org, DBpedia ...
• Interlink data from various wiki instances
• While some wikis are private, their data is valuable
• Re-use semantic annotations to provide value added interfaces
• Macros, semantic search-engine
8. A form-based user interface
• Using forms to maintain ontology instances
• Let end-users focus on the content rather than on the modeling
• Avoid semantic heterogeneity
• Wiki administrators define form-based page templates
• Based on existing Drupal modules
• Flexinode (Drupal4), rewriting to CCK
• Each page corresponds to an ontology class
• Organization page => foaf:Organization
• Each field corresponds to a property or relationship
• Some complex fields can be used to define internal instances
• Some fields can be used to define MOAT tags
• Ease the process of linking tagged content to related instances
• E.g. acronym, nickname
• SPARQL autocompletion based on expected class type
• Closing the open-world assumption, inference might come later
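The form-to-ontology mapping described above can be sketched as follows. This is a minimal illustration of the idea, not the actual Drupal/Flexinode implementation: the template structure, field names, and `form_to_triples` helper are all hypothetical.

```python
# Hypothetical form template mapped to an ontology class, in the spirit
# of the UfoWiki approach: the page type maps to a class, each form
# field maps to a property.
ORGANIZATION_TEMPLATE = {
    "class": "foaf:Organization",
    "fields": {
        "name": "foaf:name",              # plain text field
        "homepage": "foaf:homepage",      # URL field
        "based_near": "foaf:based_near",  # field with SPARQL autocompletion
    },
}

def form_to_triples(template, subject_uri, values):
    """Translate submitted form values into RDF triples."""
    triples = [(subject_uri, "rdf:type", template["class"])]
    for field, prop in template["fields"].items():
        if field in values:
            triples.append((subject_uri, prop, values[field]))
    return triples

triples = form_to_triples(
    ORGANIZATION_TEMPLATE,
    "athena:EDF",
    {"name": "EDF", "based_near": "http://sws.geonames.org/3017382/"},
)
```

Because users only fill in form fields, they never see URIs or triples, yet every page submission produces well-typed instances with no semantic heterogeneity.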
9. Using wiki to manage ontology instances
• Each created page yields one main instance, plus related /
internal ones
[Screenshot: internal macro, field with auto-completion, internal instance]
10. Linking data and meta-data
• embedsKnowledge: linking a sioc:Item to a graph containing
the triples that were created from this sioc:Item
• Using SIOC Types module for meta-data modeling
• sioct:WikiArticle rdfs:subClassOf sioc:Item
[Diagram: the meta-data RDF file describes wiki page A (rdf:type sioct:WikiArticle, dc:title, sioc:has_creator <http://athena/alex>) and links it through embedsKnowledge to the data RDF file, which holds the domain triples, e.g. athena:EDF geonames:locatedIn <http://sws.geonames.org/3017382>]
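The embedsKnowledge pattern on this slide can be sketched in a few lines: the page's meta-data triples point to a named graph holding the domain triples created from the page. URIs and the ufo: prefix below are illustrative, not the exact ones used by UfoWiki.

```python
# Sketch of the embedsKnowledge link between the meta-data layer
# (the wiki page as a sioct:WikiArticle) and the data layer (the
# named graph of domain triples created from that page).
page = "http://example.org/wiki/pageA"
data_graph = "http://example.org/graphs/pageA"

metadata_triples = [
    (page, "rdf:type", "sioct:WikiArticle"),
    (page, "dc:title", "EDF"),
    (page, "sioc:has_creator", "http://athena/alex"),
    (page, "ufo:embedsKnowledge", data_graph),
]

domain_triples = {
    data_graph: [
        ("athena:EDF", "geonames:locatedIn",
         "http://sws.geonames.org/3017382/"),
    ],
}

# Following ufo:embedsKnowledge leads from the page to its facts,
# which answers questions like "who wrote facts about EDF?".
link = next(o for (s, p, o) in metadata_triples
            if p == "ufo:embedsKnowledge")
facts = domain_triples[link]
```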
11. Architecture of a single wiki
[Diagram: at the document layer (wiki level), users edit wiki pages A and B, connected by HTML hyperlinks. Each edit produces both RDF meta-data about the page and an RDF description of the objects embedded in it, with semantic links expressing relationships between objects; everything is stored in an RDF store. At the Semantic Web layer, meta-data uses ontologies such as SIOC and DC, while data modeling uses SKOS and domain ontologies.]
12. Interlinking data from various wikis
• All wikis share a common knowledge base
• URI identification across wikis
• Merge statements about URIs but keep source using named graphs
[Diagram: wiki page A (sioct:WikiArticle, stored by wiki A) embedsKnowledge a graph stating athena:EDF geonames:locatedIn <http://sws.geonames.org/3017382>; wiki page B (sioct:WikiArticle, stored by wiki B) embedsKnowledge a graph stating athena:EDF athena:produces athena:NuclearEnergy. The shared RDF backend merges both graphs: all statements about athena:EDF become available together, while named graphs keep track of each statement's source.]
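The merge-with-provenance idea above can be sketched with named graphs modeled as a simple mapping; graph names and triples are illustrative, and a real deployment would use a triple store with SPARQL GRAPH support rather than Python dicts.

```python
# Each wiki stores its triples in its own named graph; the shared
# backend can merge all statements about a URI while keeping track
# of which wiki each statement came from.
named_graphs = {
    "wikiA:graph/pageA": [
        ("athena:EDF", "geonames:locatedIn",
         "http://sws.geonames.org/3017382/"),
    ],
    "wikiB:graph/pageB": [
        ("athena:EDF", "athena:produces", "athena:NuclearEnergy"),
    ],
}

def statements_about(subject):
    """Merge statements about a URI across graphs, keeping provenance."""
    return [
        (graph, s, p, o)
        for graph, triples in named_graphs.items()
        for (s, p, o) in triples
        if s == subject
    ]

quads = statements_about("athena:EDF")
```

Because each quad carries its source graph, a query can still restrict results to public wikis even though the statements are merged.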
13. Using produced RDF data
• RDF data is exported to the triple-store when a page is created
• Immediately re-usable and up-to-date data
• Inline macros
• Defined by wiki administrators, using PHP and SPARQL
• User-friendly syntax to let end-users embed it in wiki pages
• Eg: [onto|members], [onto|type,foaf:Person]
• Can be used to run complex queries about data from another wiki
• Eg: All activities of a company and related organizations
• Direct RDF querying
• Advanced users - User-friendly SPARQL interface planned
• Queries regarding data, meta-data or both
• Semantic search
• From keyword to concept
• Integration with other SIOC data
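Macro expansion of the `[onto|...]` syntax above might look like the following sketch. The actual UfoWiki macros are defined by wiki administrators in PHP and SPARQL, so the regex, dispatch logic, and generated query here are illustrative assumptions.

```python
import re

# Expand inline macros like [onto|type,foaf:Person] into SPARQL queries
# that can then be run against the wiki farm's RDF store.
MACRO_RE = re.compile(r"\[onto\|([^\]]+)\]")

def expand_macro(text):
    def to_sparql(match):
        args = match.group(1).split(",")
        if args[0] == "type" and len(args) == 2:
            # Hypothetical expansion: list all instances of a class.
            return "SELECT ?instance WHERE { ?instance rdf:type %s }" % args[1]
        return match.group(0)  # leave unrecognized macros untouched

    return MACRO_RE.sub(to_sparql, text)
```

The point of the design is that end users only write the short bracketed syntax, while the administrator-defined expansion hides the SPARQL entirely.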
14. Macro results
• SPARQL query results in wiki pages
• Subject of the query is the currently browsed instance
• Similar to SemanticMediaWiki inline macros
• Semantic back-links
15. SPARQL-ing RDF data from the wiki
• Combining meta-data and content levels:
• All pages from wiki #6 that provide information about EDF and
that have at least 2 replies
SELECT ?page ?title
WHERE {
  GRAPH ?data { :EDF ?predicate ?object } .
  ?page :embedsKnowledge ?data ;
        rdf:type sioct:WikiArticle ;
        dc:title ?title ;
        sioc:has_container <http://example.org/wiki/6> ;
        sioc:num_replies ?replies .
  <http://example.org/wiki/6> a sioct:Wiki .
  FILTER (?replies > 1)
}
16. Reusing RDF data available on the Web
• The Linking Open Data initiative
• Lots of RDF data available from reference data-sets
• GeoNames, DBpedia, riese …
• Using the same ontologies in a corporate environment
• Low-cost integration
• No need to align vocabularies and define mapping between them
• GeoNames wrapper
• “city, (state), country” fields mapped to geonames.org web-service
• Retrieve the location URI and its related RDF file (with coordinates)
• Simple way to create geolocation services and enhance navigation
• Provide interlinked RDF data
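The "city, (state), country" lookup described above can be sketched as a request to the GeoNames search web service. Only the request construction is shown; the exact parameters used by the original wrapper, and the `demo` username, are assumptions.

```python
from urllib.parse import urlencode

# Build a GeoNames search request for a "city, country" form field.
# Asking for RDF output lets the wrapper retrieve the location URI
# and its related RDF description (with coordinates) directly.
def geonames_search_url(city, country, username="demo"):
    params = {
        "name": city,
        "country": country,
        "maxRows": 1,       # keep only the best match
        "type": "rdf",      # request RDF instead of XML/JSON
        "username": username,
    }
    return "http://api.geonames.org/search?" + urlencode(params)

url = geonames_search_url("Paris", "FR")
```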
17. Easy-geolocation with our GeoNames wrapper
• Combining geolocation and macro-queries
• Location of any member of the currently browsed organization
• [onto|mapmembers] => SPARQL + rendering
18. Querying the internal SIOC-o-sphere
• Combining SIOC-based information from various data sources
• Find relevant resource from given keyword with MOAT
• Retrieves main / related wiki pages from different wikis
• Retrieve “tagged” blog posts
19. Conclusion
• Overview of our approach
• User-friendly interface with forms mapped to ontologies
• SIOC-based meta-data to ease integration with existing SIOC data
• Combining meta-data and data (content) layer
• Interlinking data from various wiki instances
• Using existing RDF data and vocabularies for value-added service
• What’s next ?
• Validate forms using the underlying ontologies
• User-interface to define macros
• Linking / reusing more LOD data
• Use SIOC in other wikis as a meta-data model
20. Thank you !
Any questions ?
slides @ http://apassant.net