This document discusses how semantic web technologies can make scientific information systems more social. It provides examples of how schema.org defines structured data for annotating web pages with information like movies, reviews, and social relationships between people. It also briefly mentions Facebook's Open Graph protocol. The key points are that semantic web annotations allow machines to understand web data in order to assist users, initiatives like schema.org are making these annotations mainstream, and structured semantic data enables social features for information sharing and collaboration.
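The schema.org annotations the summary mentions are typically embedded as JSON-LD. A minimal sketch of what such a movie-with-review annotation could look like (the title, reviewer, and rating here are invented for illustration):

```python
import json

# A hypothetical schema.org annotation for a movie page; the property
# names (@type, review, reviewRating) follow the schema.org vocabulary,
# but the movie, reviewer, and rating values are made up.
movie = {
    "@context": "https://schema.org",
    "@type": "Movie",
    "name": "Example Movie",
    "review": {
        "@type": "Review",
        "author": {"@type": "Person", "name": "Jane Doe"},
        "reviewRating": {"@type": "Rating", "ratingValue": "4"},
    },
}

# Embedding this JSON in a <script type="application/ld+json"> tag lets
# crawlers read the structured data alongside the human-readable HTML.
print(json.dumps(movie, indent=2))
```

This is the "machines can understand web data" point in miniature: the same page carries one representation for people and one for software.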
This document discusses authentication options for integrating SmartCards with SharePoint. It provides background on the presenters and an overview of security concerns and benefits of SmartCards. Options for SmartCard authentication architectures are presented, including leveraging Active Directory, custom membership providers, and third-party products. Considerations for implementation, such as certificate revocation checking and linking user accounts, are also covered.
ISWC 2016 Tutorial: Semantic Web of Things, M3 framework & FIESTA-IoT EU project (FIESTA-IoT)
Amelie Gyrard presents a tutorial on SWOT - the Semantic Web of Things.
For further information about this work, please visit:
http://semantic-web-of-things.appspot.com
The document describes a virtual keyboard, which projects a full-sized keyboard onto any flat surface using infrared and laser technology. This allows mobile device users to type normally without small, cramped keyboards. The virtual keyboard is contained in a small device the size of a fountain pen that tracks finger movements to type. It can project the keyboard wirelessly using Bluetooth or optically detect typing on any surface. This provides benefits over physical keyboards like portability, lack of need for a flat surface, and reduced risk of repetitive strain injuries.
This presentation explains how software developers need to manage metadata and data dictionaries to make software integration faster and more cost-effective. It is a general overview of the concepts around data semantics for college-level students, originally created for a seminar at Carleton College.
The document provides an introduction to the Semantic Web by discussing its key concepts and architecture. It explains that the Semantic Web aims to make web data easier for machines to understand by giving information well-defined meanings. This allows computers and humans to better cooperate by enabling more advanced search, mashups and applications. The Semantic Web is presented as an extension of the current web that builds on existing standards and technologies.
Deep learning networks can be successfully applied to big data for knowledge discovery, knowledge application, and knowledge-based prediction. In other words, deep learning can be a powerful engine for producing actionable results.
This is a general presentation appropriate for anyone who is just learning the concepts of semantic integration. It covers some of the background concepts underlying semantics (Ogden's Semantic Triangle), lexical and conceptual mapping, metadata registries, metadata discovery, and semantic thinking. Excellent for an introductory class in business semantics.
This document discusses the history and concepts of tagging and folksonomies. It describes how tagging originated in the late 1980s as a way to annotate and organize digital objects. Folksonomies involve socially generated tags to make personal connections to meaning, taking a bottom-up approach. The document also discusses early video analysis tools from the late 1980s and early 1990s that utilized tagging to group similar video clips. A key open question is whether tagging is best done from a top-down or bottom-up approach.
Chatbots and Natural Language Generation: A Bird's-Eye View (Mark Cieliebak)
Chatbots, conversational user interfaces, dialogue systems, question-answering - the names differ, but the fundamental idea is the same: smart computer systems which can "talk" to humans in a natural way. Chatbots and their derivatives are designed to understand human language, interpret its content, and reply accordingly. This long-standing vision from artificial intelligence has gained enormous momentum since 2015.
But what is possible, and where are the boundaries? Do chatbots really "understand" the meaning of text? And how can they be employed beneficially in real-world applications?
In this talk, we will give an overview of state-of-the-art technologies and applications for dialogue systems in research and industry.
Machine Support for Interacting with Scientific Publications Improving Inform... (Christoph Lange)
1) The document discusses using semantic web and linked data technologies to help assess the quality of scientific output by answering questions about workshops, conferences, publications, and data.
2) It proposes connecting bibliographic metadata, citations, full text, social networks and research data using initiatives like schema.org to provide machine support for quality assessment.
3) The goal is to provide complementary metrics to human peer review and impact factors by enabling multidimensional, context-sensitive analysis of trends, topics, citations and more.
Presented at NDC 2014 in Oslo (4th June 2014)
Video available on Vimeo: https://vimeo.com/97344527
Apparently, everyone knows about patterns. Except for the ones that don't. Which is basically all the people who've never come across patterns... plus most of the people who have.
Singleton is often treated as a must-know pattern. Patterns are sometimes considered to be the basis of blueprint-driven architecture. Patterns are also seen as something you don't need to know any more because you've got frameworks, libraries and middleware by the download. Or that patterns are something you don't need to know because you're building on UML, legacy code or emergent design. There are all these misconceptions about patterns... and more.
In this talk, let's take an alternative tour of patterns, one that is based on improving the habitability of code, communication, exploration, empiricism, reasoning, incremental development, sharing design and bridging rather than barricading different levels of expertise.
This document discusses using semantic web technologies for social network analysis. It describes how social network data can be represented using ontologies like FOAF, SIOC, and the RELATIONSHIP vocabulary to form semantic social graphs. Classic social network analysis metrics like in-degree and betweenness centrality can then be computed on these graphs using SPARQL queries extended with social network analysis functions. Global semantic graphs can also be constructed by linking different domain ontologies and user data from social networks and bookmarks. Some bridges already exist between the semantic web and mobile/enterprise applications.
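The metric computation the summary describes can be sketched without SPARQL. The toy graph below is a hypothetical set of (subject, predicate, object) triples using `foaf:knows`; the in-degree of a node is just the count of triples pointing at it, which is what an SNA-extended SPARQL aggregate would compute:

```python
from collections import Counter

KNOWS = "foaf:knows"

# Hypothetical semantic social graph as RDF-style triples.
triples = [
    ("ex:alice", KNOWS, "ex:bob"),
    ("ex:carol", KNOWS, "ex:bob"),
    ("ex:bob",   KNOWS, "ex:alice"),
]

def in_degree(triples, predicate=KNOWS):
    """Count incoming `predicate` edges per node."""
    return Counter(o for _, p, o in triples if p == predicate)

print(in_degree(triples))  # ex:bob is known by two people
```

In a real system the triples would come from a triple store and the aggregation would run inside the query engine; the sketch only illustrates what the metric measures.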
Multidimensional Patterns of Disturbance in Digital Social Networks (Dimitar Denev)
RWTH Aachen University researchers developed PALADIN, a Pattern Language for Analyzing Disturbances in digital social Networks. PALADIN uses a graph-based model and pattern language to automatically analyze social networks for recurring disturbance patterns. It represents actors, media, artifacts and dependencies in a social network. PALADIN was tested on 10 disturbance patterns over 119 social network instances with over 17,000 individuals. The results showed PALADIN could detect different disturbance patterns and provide insights to network administrators. Future work will focus on interoperability, visualization of multidimensional disturbances, and integrating social network simulation.
SBQS 2013 Keynote: Cooperative Testing and Analysis (Tao Xie)
SBQS 2013 Keynote: Cooperative Testing and Analysis: Human-Tool, Tool-Tool, and Human-Human Cooperations to Get Work Done http://sbqs.dcc.ufba.br/view/palestrantes.php
These are notes from the Make It So presentation Chris Noessel and I have given at SXSW as well as a few other venues. Because the presentation itself isn't in a format that is easily savable, these notes are a better way to share the content.
This document discusses collective knowledge systems and how the semantic web can help augment user-generated data. It provides an example of a collective knowledge system called RealTravel, which allows users to share travel stories, photos and experiences. The semantic web can help RealTravel by adding structure to user contributions, enabling data sharing and computation across applications. This includes distinguishing locations, exposing data in structured ways, and integrating tagging data from different sources.
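One concrete piece of the "integrating tagging data from different sources" idea can be sketched as a merge of per-item tag sets. The source names and tags below are invented; the point is that once contributions are structured, combining them across applications is a simple union:

```python
from collections import defaultdict

# Hypothetical tagging data from two different applications,
# keyed by item, as a stand-in for structured user contributions.
sources = {
    "flickr":    {"photo1": ["paris", "eiffel"]},
    "delicious": {"photo1": ["paris", "travel"]},
}

def merge_tags(sources):
    """Union the tags each source assigns to each item."""
    merged = defaultdict(set)
    for tags_by_item in sources.values():
        for item, tags in tags_by_item.items():
            merged[item].update(tags)
    return {item: sorted(tags) for item, tags in merged.items()}

print(merge_tags(sources))
```

Without shared structure (agreed item identifiers and tag formats), this cross-application computation is exactly what becomes hard, which is the summary's argument for semantic web techniques.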
Metadata in a Crowd: Shared Knowledge Production (Kevin Rundblad)
The document discusses human computation and how crowdsourcing can be used to generate metadata. It describes different models of human computation, including socially motivated tasks like tagging photos on Flickr, economically motivated tasks on Amazon Mechanical Turk, and tacit tasks like reCAPTCHAs. The document also discusses how human computation draws on human abilities at visual and language tasks to solve problems in parallel in a way similar to bittorrent networks. It argues that successful systems motivate participation through incentives, games, or the ability to contribute to a collective knowledge base.
The document describes TweeTopic, an algorithm for detecting topics in short social media posts like tweets. It works by treating tweets like search queries, querying a search engine with noun phrases from tweets, and mining the results to extract topic keywords. TweeTopic aims to overcome the difficulty of topic detection from very short texts by leveraging information retrieval techniques normally used on longer documents.
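The TweeTopic pipeline the summary describes (noun phrases as queries, result snippets mined for keywords) can be roughed out as follows. This is not the paper's implementation: the noun-phrase extractor is a crude capitalised-run heuristic and the search engine is mocked in-process with canned snippets, so only the overall shape of the algorithm is real:

```python
import re
from collections import Counter

def noun_phrases(tweet):
    # Crude stand-in for noun-phrase extraction:
    # runs of capitalised words.
    return re.findall(r"(?:[A-Z][a-z]+ ?)+", tweet)

def mock_search(query):
    # Stand-in for a real search engine; returns canned snippets.
    snippets = {
        "Mars Rover": ["NASA space exploration robot",
                       "space mission to Mars and beyond"],
    }
    return snippets.get(query.strip(), [])

def topic_keywords(tweet, top_n=3):
    # Query the "engine" with each phrase and count snippet words;
    # the most frequent words serve as topic keywords.
    counts = Counter()
    for phrase in noun_phrases(tweet):
        for snippet in mock_search(phrase):
            counts.update(snippet.lower().split())
    return [word for word, _ in counts.most_common(top_n)]

print(topic_keywords("Mars Rover landed today!"))
```

The enrichment step is the key trick: the tweet itself is too short for topic modelling, but the retrieved snippets supply enough redundant vocabulary to count.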
From file-based production to real-time co-production (Maarten Verwaest)
The document discusses how semantic technologies can enable the transition from file-based media production to real-time co-production. It describes some of the issues with current non-integrated production processes and how semantic modeling, linked data, and system integration using semantic technologies can help solve problems around re-use, retrieval and scalability. Examples of applications for semantic technologies in media production workflows from pre-production to archiving are also provided.
Synergy of Human and Artificial Intelligence in Software Engineering (Tao Xie)
Keynote talk by Tao Xie at the NSF-sponsored International Workshop on Realizing Artificial Intelligence Synergies in Software Engineering (RAISE 2013): http://promisedata.org/raise/2013/
The document discusses the evolution of the internet and web technologies. It describes early technologies like Vannevar Bush's memex and hypertext, the development of the World Wide Web through HTTP and HTML. It outlines the rise of user-generated content through blogs, photos, video and social sharing sites. It also discusses the potential for machines to understand semantic meaning through standards like XML, RDF and ontologies.
Develop your career in the field of software development. If you want to learn programming and develop your own applications, this presentation helps you understand the technology and the training methodologies required.
This document outlines the focus and structure of a course on literacy and inquiry. The course aims to teach students to conduct qualitative research through a DIY media project. Students will work in groups to (1) produce a digital media artifact and collect data on the process, and (2) write a research report analyzing the process through relevant literacy theory. Typical media projects include blogs, wikis, podcasts, and video editing. Students must document their project through field notes, photos/videos, and verbatim recordings to collect spoken, written and observed data for analysis.
Semantic Web & Information Brokering: Opportunities, Commercialization and Ch... (Amit Sheth)
Amit Sheth, "Semantic Web & Info. Brokering Opportunities, Commercialization and Challenges," Keynote talk at the workshop on Semantic Web: Models, Architecture and Management, September 21, 2000, Lisbon, Portugal.
This was the keynote given at probably the first international event with "Semantic Web" in its title (and before the well-known SciAm article). As in TBL's use of "Semantic Web" in his 1999 book, (semantic) metadata plays a central role. The use of a Worldmodel/Ontology is consistent with our use of ontology for (Web) information integration in a 1994 CIKM paper. A summary of the talk by the event organizers and other details are at: http://knoesis.org/library/resource.php?id=735
Prof. Sheth started a Semantic Web company, Taalee, Inc., in 1999 (the product was called MediaAnywhere, an A/V search engine, discussed in this paper in the context of its use by a customer, Redband Broadcasting). The product included Semantic Web / populated-ontology-based semantic (faceted) search, semantic browsing, semantic personalization, semantic targeting (advertisement), etc., as described in U.S. Patent #6311194, 30 Oct. 2001 (filed 2000). MediaAnywhere had about 25 ontologies in News/Business, Sports, Entertainment, etc.
Taalee merged to become Voquette in 2001 (product was called SCORE), Semagix in 2004 (product was called Semagix Freedom), and then Fortent in 2006 (products included Know Your Customers).
The document introduces the Semantic Web and its goals of making web content machine-readable through the use of ontologies and semantic annotations. It describes the evolution of the web from human-readable documents and links to machine-processable data through technologies like XML, RDF, and OWL. It outlines current work by the W3C to develop standards and an active working group to develop the Semantic Web.
Faire Datenökonomie für Wirtschaft, Wissenschaft und Gesellschaft: Was brauch... (Christoph Lange)
Infrastructures for data exchange are increasingly emerging in business and science. For business, trust among partners matters, as does sovereignty over what others do with one's data; science emphasises free accessibility and reusability. FAIR Data Spaces combine both on the basis of shared principles.
What must be done so that data exchange no longer means sending e-mail attachments or entrusting secrets to the central platforms of hostile powers? Business, science and public administration are increasingly looking for solutions to make data exchange secure and efficient, and thereby unlock new innovation potential. What already exists, what is planned, and how can existing initiatives grow together to share data beyond the boundaries of these worlds?
Business initiatives such as Gaia-X and International Data Spaces prioritise building trust among business partners without paper contracts, as well as sovereignty over what others do with one's valuable data. In science, for example in the German National Research Data Infrastructure (NFDI), the focus is on free accessibility and reusability in line with ethical principles. For the public sector, beyond free access, for example to open-data portals, digital public services matter. The great challenges of our time require data exchange not only within these worlds but across their boundaries:
for example between research institutes and small technology companies that cannot collect all data themselves,
or between large companies with rich data assets and commercial interests on the one hand and use of those data for the common good on the other.
The FAIR Data Spaces project creates building blocks for cross-domain data spaces as nuclei of a fair data economy based on shared principles. We want to discuss how far the FAIR data principles, which originate from research data management and hold that data should be findable, accessible, interoperable and reusable, can carry. The project plans to merge existing initiatives organisationally, legally, technically and practically into a shared community, and it thrives on broad participation. Join this community with the Fraunhofer ICT Group, and stay innovative and critical!
Interlinking Data and Knowledge in Enterprises, Research and Society with Lin... (Christoph Lange)
The Linked Data paradigm has emerged as a powerful enabler for data and knowledge interlinking and exchange using standardised Web technologies.
In this article, we discuss our vision of how the Linked Data paradigm can be employed to evolve the intranets of large organisations – be they enterprises, research organisations, or governmental and public administrations – into networks of internal data and knowledge.
For large enterprises in particular, data integration is still a key challenge. The Linked Data paradigm is a promising approach to integrating enterprise data. Like the Web of Data, which now complements the original document-centred Web, data intranets may help to enhance, and make more flexible, the intranets and service-oriented architectures that exist in large organisations. Furthermore, using Linked Data gives enterprises access to more than 50 billion facts from the growing Linked Open Data (LOD) cloud. As a result, a data intranet can help to bridge the gap between structured data management (in ERP, CRM or SCM systems) and semi-structured or unstructured information in documents, wikis or web portals, and make all of these sources searchable in a coherent way.
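The coherent-search idea in the paragraph above can be sketched as a toy triple store in plain Python (all identifiers and records below are invented for illustration): records from two different systems are flattened into subject–predicate–object triples and served by one pattern-matching query function.

```python
# Toy "data intranet": heterogeneous records flattened into
# subject-predicate-object triples, queried with a single function.
# All identifiers and data are hypothetical.

def to_triples(subject, record):
    """Flatten a flat dict (e.g. a CRM row or a wiki page) into triples."""
    return [(subject, pred, obj) for pred, obj in record.items()]

def query(triples, s=None, p=None, o=None):
    """Return all triples matching the pattern; None acts as a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Data from two different systems, merged into one graph:
graph = (to_triples("crm:customer/42", {"name": "ACME", "industry": "Retail"})
         + to_triples("wiki:ACME", {"sameAs": "crm:customer/42",
                                    "note": "Key account since 2010"}))

# One query mechanism over both sources:
print(query(graph, p="industry"))  # the triples stating an industry
```

Real deployments would of course use RDF and SPARQL rather than Python tuples, but the uniform-access principle is the same.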
Keynote at Baltic DB&IS 2014, 9 June 2014, Tallinn, Estonia
Linked Open (Geo)Data and the Distributed Ontology Language – a perfect match (Christoph Lange)
The Distributed Ontology Language is a meta-language for integrating
ontologies written in different languages. Our notion of “distributed”
comprises logical heterogeneity within ontologies, modularity and reuse,
and links across ontologies in different places of the Web. Not only
can ontologies be distributed across the Web, but DOL's supply of
supported ontology languages can also be extended in a decentralised way.
For this functionality, DOL builds on the Linked Open Data (LOD)
principles. But DOL also contributes to LOD use cases. Many current
LOD applications are limited by the weak expressivity of the RDF and
RDFS languages commonly used to express data and vocabularies.
Completely switching to a more expressive language would impair
scalability to big datasets. DOL addresses the scalability and
expressivity requirements by allowing each aspect of a dataset
to be represented in the most suitable language while keeping
these different representations connected. This is particularly
useful in geographic
information systems, where big datasets (e.g. Linked Geo Data, the LOD
version of OpenStreetMap) need to be integrated with formalisations of
complex spatial notions (e.g. in the first-order language Common Logic).
Linking Big Data to Rich Process Descriptions (Christoph Lange)
Linked (Open) Data is one key to coping with Big Data: it enables decentralised, collaborative management of big datasets, low-overhead information retrieval, and scalable reasoning. Big Data are created or consumed by technical processes or business processes. Their formal description, e.g. for software verification or compliance checking, requires logics whose complexity far exceeds that of the data. Restricting LOD to the RDF logic does not allow for integrating rich process descriptions with the data that these processes create, and therefore does not enable knowledge management, information retrieval and reasoning to take full advantage of rich background knowledge. In this talk I demonstrate different frontiers at which I have worked towards achieving an integration of process descriptions and data.
The Distributed Ontology Language (DOL): Use Cases, Syntax, and Extensibility (Christoph Lange)
The document discusses the Distributed Ontology Language (DOL) which aims to support semantic integration and interoperability across heterogeneous ontologies. DOL allows for logically heterogeneous ontologies, modular ontologies, and formal and informal links between ontologies. It has a formal semantics and can be serialized in XML, RDF, and text. Examples of applications that could benefit from DOL include an ontology repository engine and a multilingual map user interface driven by aligned ontologies.
Bringing Mathematics To the Web of Data: the Case of the Mathematics Subject ... (Christoph Lange)
This document discusses redesigning the Mathematics Subject Classification (MSC) scheme as a linked dataset using SKOS. Key points include: representing the MSC hierarchy using SKOS concepts and properties; adding multilingual labels and mathematical markup; linking related concepts within and across schemes; and deploying the dataset on the web with a SPARQL endpoint for access. The redesign aims to facilitate maintenance and reuse while preserving all existing MSC information.
Making Heterogeneous Ontologies Interoperable Through Standardisation (Christoph Lange)
The document discusses making heterogeneous ontologies interoperable through standardization, presenting a scenario of an assisted living environment where different devices like a wheelchair and freezer need to communicate but use different ontologies. It argues for developing a standardized meta ontology language to facilitate integration and interoperability between these diverse ontologies used by different devices with varying knowledge needs.
Previewing OWL Changes and Refactorings Using a Flexible XML Database (Christoph Lange)
The document discusses using a flexible XML database called TNTBase to preview changes and refactorings to ontologies. TNTBase allows editing ontologies through "virtual documents" that define editable XML views of ontology content. This enables refactoring ontologies by previewing the effects of changes like extracting subclasses into a new module before making the changes live. The document provides examples of refactoring an ontology in this way and describes the underlying library functions that power the refactoring previews.
The document proposes an architecture called JOBAD that allows mathematical documents to interactively access web services. JOBAD uses JavaScript to integrate definition lookup, unit conversion, and other services directly into OMDoc-based documents. This allows readers to interactively adapt document appearance and access remote explanations and computations without leaving the document interface. Future plans include more interactive customization and linking documents to external search and information resources.
The document describes a project to publish mathematics lecture notes as linked data. Key points:
1) Lecture notes containing 2,000 slides and 1,000 homework problems were semantically annotated and converted to RDF to create structured data.
2) The RDF is stored in a triplestore and can be queried with an OMDoc-aware SPARQL endpoint or full-text search.
3) Annotations in the human-readable XHTML documents link to services for interactivity. The goal is to scale this to 300,000 annotated publications and link to external datasets.
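For step 2, the kind of request such a SPARQL endpoint answers can be sketched as follows; the prefix URI and property names below are hypothetical placeholders, not the project's actual OMDoc vocabulary.

```python
# Sketch of a SPARQL query an OMDoc-aware endpoint could answer.
# The prefix URI and the property names are hypothetical placeholders.
def slides_illustrating(concept_uri):
    """Build a SPARQL query for all slides that illustrate a given concept."""
    return (
        "PREFIX omdoc: <http://example.org/omdoc#>\n"
        "SELECT ?slide WHERE {\n"
        "  ?slide a omdoc:Slide ;\n"
        "         omdoc:illustrates <" + concept_uri + "> .\n"
        "}"
    )

query = slides_illustrating("http://example.org/notes/quicksort")
print(query)
```

A client would POST this query string to the endpoint and receive matching slide URIs as SPARQL results.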
sTeX+ – a System for Flexible Formalization of Linked Data (Christoph Lange)
The document describes sTeX+, an extension of sTeX that allows formalizing and annotating technical documents with semantic metadata. sTeX+ enables defining ad hoc vocabularies to describe project-specific concepts and annotate documents accordingly. It produces output in PDF, OMDoc+RDFa, and XHTML+MathML+RDFa to enable interactive services. sTeX+ aims to balance formalization with flexibility for existing authoring practices.
Krextor – An Extensible Framework for Contributing Content Math to the Web of... (Christoph Lange)
Krextor is an extensible framework for contributing mathematical content from OpenMath CDs to the Web of Data. It converts OpenMath CDs, which are document-oriented, to RDF, which follows the graph-based data model used by the Web of Data. As an example, it can link a mathematical property in an OpenMath CD to its identifier by grouping the property and giving it an ID, without modifying the original CD. This allows bootstrapping mathematics onto the Web of Data.
The document discusses the mathematical semantics of statistical data. It presents examples of derived statistical values for populations and unemployment rates for two locations. It raises questions about how to validate derived values and compute them for new data points. It proposes representing mathematical expressions as ordered n-ary trees in RDF to integrate math into the semantic web of data.
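The proposal to represent expressions as ordered n-ary trees can be sketched in plain Python; the encoding below (numbered `arg` predicates to preserve operand order) is a toy rendering of the idea, not the exact RDF vocabulary of the talk.

```python
import itertools

# Flatten an ordered n-ary expression tree into triples, keeping the
# argument order via numbered 'arg' predicates. Toy encoding for
# illustration only; node IDs and predicate names are invented.
_ids = itertools.count(1)

def expr_to_triples(expr):
    """expr is either a literal or a tuple (operator, operand, ...)."""
    if not isinstance(expr, tuple):
        return expr, []                      # literals need no node of their own
    node = "node{}".format(next(_ids))
    triples = [(node, "operator", expr[0])]
    for i, operand in enumerate(expr[1:], start=1):
        child, sub = expr_to_triples(operand)
        triples.append((node, "arg{}".format(i), child))  # arg1, arg2, ... keep order
        triples.extend(sub)
    return node, triples

# unemployment rate = unemployed / labour_force
root, triples = expr_to_triples(("divide", "unemployed", "labour_force"))
print(triples)
```

Because `arg1` and `arg2` are distinct predicates, the non-commutative division survives the round-trip into the unordered RDF graph model.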
Enabling Collaboration on Semiformal Mathematical Knowledge by Semantic Web I... (Christoph Lange)
The document discusses enabling collaboration on semiformal mathematical knowledge through semantic web integration. It outlines the current state of collaboration in mathematics through blogs, wikis and projects. The author proposes an integrated view of the collaboration workflow between authors, readers and reviewers to formalize, validate, present and review semiformal mathematical knowledge.
Ontology Integration and Interoperability (OntoIOp) – Part 1: The Distributed... (Christoph Lange)
The document introduces the Distributed Ontology Language (DOL), which is part of the Ontology Integration and Interoperability (OntoIOP) standard. DOL aims to enable logical and modular heterogeneity across ontologies to improve semantic integration and interoperability. It will serve as a logic-agnostic meta-language for structuring ontologies, ontology modules, and formal and informal links between ontologies. DOL is intended to have well-defined semantics and serializations to XML, RDF, and text to facilitate reuse of existing ontologies and reasoning over heterogeneous ontological representations.
Ontology Integration and Interoperability (OntoIOp) – Part 1: The Distributed... (Christoph Lange)
The document discusses the Distributed Ontology Language (DOL), a proposed standard being developed by ISO for expressing heterogeneous ontologies and links between ontologies. DOL aims to achieve semantic integration and interoperability across knowledge representations. It will have a formal semantics and support multiple serialization formats. The standard is being developed to facilitate communication and reduce complexity for applications involving multiple ontologies.
Semantic Web Technology: The Key to Making Scientific Information Systems Social
1. Outline: Introduction · Social Semantic Web · Collaborative Problem Solving · Self-explaining and Adaptive Systems
Semantic Web Technology: The Key to Making
Scientific Information Systems Social
Presentation at Heinrich Heine University Düsseldorf
Christoph Lange
University of Bremen, Germany
2012-02-29
Christoph Lange Semantic Web Technology: The Key to Making Scientific Information Systems Social 2012-02-29 1
2. ‘Hello, World!’
2011: Ph.D. from Jacobs University Bremen
(with Michael Kohlhase)
Enabling Collaboration
on Semiformal Mathematical Knowledge
by Semantic Web Integration
from 2011: Postdoctoral researcher at the
University of Bremen
(with John Bateman, Till Mossakowski)
Ontology Integration and Interoperability
(OntoIOp) – Distributed Ontology Language
(DOL) ↝ ISO 17347
3. The Semantic Web Vision
vision of Berners-Lee, Hendler and
Lassila 2001:
Machines understand the data on
the Web
. . . to assist users with
knowledge-related tasks.
low-profile artificial intelligence:
Don’t try to automatically
understand documents, . . .
. . . but enable authors and
applications to publish
structured data.
now going mainstream
(Berners-Lee, Hendler and Lassila 2001)
4. The schema.org Search Vocabulary
initiative of search engine operators (Bing, Google, etc.)
annotation vocabulary for structuring web pages; covers . . .
creative works organizations places
events persons products
Example (Movie description)
Avatar
Director: James Cameron (born August 16, 1954)
Science fiction
Trailer
5. The schema.org Search Vocabulary
initiative of search engine operators (Bing, Google, etc.)
annotation vocabulary for structuring web pages; covers . . .
creative works organizations places
events persons products
Example (Movie description)
<div class="movie">
<h1>Avatar</h1>
<div class="director">
Director: James Cameron
(born August 16, 1954)
</div>
<span class="genre">Science fiction</span>
<a href="../movies/avatar-theatrical-trailer.html">
Trailer</a>
</div>
6. The schema.org Search Vocabulary
initiative of search engine operators (Bing, Google, etc.)
annotation vocabulary for structuring web pages; covers . . .
creative works organizations places
events persons products
Example (Movie description)
<div itemscope itemtype="http://schema.org/Movie">
<h1 itemprop="name">Avatar</h1>
<div itemprop="director" itemscope itemtype="http://schema.org/Person">
Director: <span itemprop="name">James Cameron</span>
(born <span itemprop="birthDate">August 16, 1954</span>)
</div>
<span itemprop="genre">Science fiction</span>
<a href="../movies/avatar-theatrical-trailer.html"
itemprop="trailer">Trailer</a>
</div>
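To see what a machine extracts from such annotations, here is a deliberately simplified microdata reader built on Python's standard `html.parser`: it collects (itemprop, text) pairs in document order, ignoring the nesting of the Movie and Person items that a real microdata parser would preserve.

```python
from html.parser import HTMLParser

# Simplified microdata reader: records the text content of every element
# carrying an itemprop attribute. Item nesting (Movie vs. Person) is
# deliberately ignored to keep the sketch short.
class ItemPropCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.props = []        # (itemprop, text) pairs, in document order
        self._current = None   # itemprop awaiting its text content

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if "itemprop" in attrs:
            self._current = attrs["itemprop"]

    def handle_data(self, data):
        if self._current and data.strip():
            self.props.append((self._current, data.strip()))
            self._current = None

html_snippet = """
<div itemscope itemtype="http://schema.org/Movie">
  <h1 itemprop="name">Avatar</h1>
  <div itemprop="director" itemscope itemtype="http://schema.org/Person">
    Director: <span itemprop="name">James Cameron</span>
    (born <span itemprop="birthDate">August 16, 1954</span>)
  </div>
  <span itemprop="genre">Science fiction</span>
</div>
"""

collector = ItemPropCollector()
collector.feed(html_snippet)
print(collector.props)
```

Even this crude extraction already recovers the name, birth date and genre that a search engine needs for a rich result.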
7. The schema.org Search Vocabulary
initiative of search engine operators (Bing, Google, etc.)
annotation vocabulary for structuring web pages; covers . . .
creative works organizations places
events persons products
Example (Movie description)
[Figure: the same movie description as an RDF graph. A node of type Movie with name ‘Avatar’ has genre ‘Science fiction’ and a trailer link, and its director property points to a node of type Person with name ‘James Cameron’ and birthDate ‘August 16, 1954’.]
8. Social Data with schema.org
review or rating of a creative work, organization or product
(written by a person)
social network of a person:
person follows/knows person
person works for person, is colleague of person
person has parents/siblings/spouse/children/other relatives
Example (Reviews of a movie)
[Figure: an RDF graph of reviews. The Movie named ‘Avatar’ has two reviews: one authored by the Person ‘Pünktchen’ with ratingValue 6, another authored by the Person ‘Anton’ with ratingValue 8.5; Pünktchen knows Anton.]
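This review graph can also be written down as plain triples and queried, e.g. to aggregate ratings. A sketch follows; the slide's graph routes ratings through review/rating nodes, simplified here so that each review carries its ratingValue directly.

```python
# The review graph as plain triples (property names follow schema.org,
# the data is the slide's toy example; the encoding is simplified).
triples = [
    ("avatar", "type", "Movie"),
    ("avatar", "name", "Avatar"),
    ("avatar", "review", "rev1"),
    ("avatar", "review", "rev2"),
    ("rev1", "author", "puenktchen"),
    ("rev1", "ratingValue", 6.0),
    ("rev2", "author", "anton"),
    ("rev2", "ratingValue", 8.5),
    ("puenktchen", "knows", "anton"),
]

def average_rating(movie, triples):
    """Average the ratingValue of all reviews attached to a movie."""
    reviews = [o for s, p, o in triples if s == movie and p == "review"]
    ratings = [o for s, p, o in triples if s in reviews and p == "ratingValue"]
    return sum(ratings) / len(ratings)

print(average_rating("avatar", triples))  # (6.0 + 8.5) / 2 = 7.25
```

This aggregate is exactly the kind of value search engines surface as star ratings in result snippets.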
9. What Search Engines make out of schema.org
10. Facebook’s Open Graph Protocol
Let people ‘like’ your website without maintaining a Facebook page
1 Annotate it with Open Graph metadata
2 Integrate your site into Facebook’s ‘social graph’
<html xmlns:og="http://ogp.me/ns#"
xmlns:fb="http://www.facebook.com/2008/fbml">
<meta property="og:image"
content="http://www.malcolmcoles.co.uk/blog/..."/>
<meta property="og:site_name"
content="Malcolm Coles's blog"/>
<meta property="fb:admins" content="522100824"/>
<meta property="og:title"
content="Malcolm Coles: SEO, Twitter and ..."/>
<meta property="og:type" content="blog"/>
<meta property="og:url"
content="http://www.malcolmcoles.co.uk/blog/"/>
<meta property="og:description"
content="The blog of Malcolm Coles. ..."/>
...
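Tags like these are usually generated from page metadata rather than written by hand; a minimal helper might look like this (the values below are placeholders, not a real site):

```python
from html import escape

# Render Open Graph <meta> tags from a dict of og:* properties.
# Values are escaped so quotes in titles cannot break the attributes.
def og_meta_tags(properties):
    return "\n".join(
        '<meta property="og:{}" content="{}"/>'.format(escape(k), escape(v))
        for k, v in properties.items())

tags = og_meta_tags({
    "title": "Example blog",
    "type": "blog",
    "url": "http://example.org/blog/",
})
print(tags)
```

A template engine would typically emit these tags into the page `<head>` from the same data that drives the visible content.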
11. Science is Social – Collaboration in Mathematics
History of collaboration
in the small: Hardy/Littlewood
in the large: hundreds of
mathematicians classifying the finite
simple groups
‘industrialization’ of research
Utilizing the Social Web
research blogs: Baez, Gowers, Tao
Polymath: collaborative proofs
Collaboration = creation, formalization, organization, understanding, reuse, application
[Figure: Polymath wiki/blog: P ≠ NP proof]
12. Discourse in Mathematics
Proofs and Refutations (Lakatos 1976):
1 initial theorem, initial proof sketch
2 problem in the proof identified (only covers a specific case);
counter-example
3 rework proof, or even restate theorem
Peer review (not just in mathematics)
1 read paper
‘What does this mean?’
(missing background knowledge, unfamiliar notation)
‘How does this work?’
‘What is this good for?’
look up background information in cited publications
2 verify claims
3 point out problems with the paper and its formal concepts
13. Introduction Social Semantic Web Collaborative Problem Solving Self-explaining and Adaptive Systems
An Integrated Representation of Mathematical Knowledge and Discourse

[Diagram: the SIOC argumentation module (partly shown) combined with domain-specific argumentation classes and the OMDoc ontology. A Position agrees_with or disagrees_with an Issue or Idea; an Issue about a mathematical knowledge item (e.g. a Theorem or an Example) resolves_into an Idea, which proposes_solution_for the Issue; a Decision decides both. Issue subclasses: Wrong, Inappropriate for Domain, Incomprehensible; Idea subclasses: Provide Example, Keep as Bad Example, Delete.]

(Lange, Hastrup and Corlosquet 2008)
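The class and property names in the diagram can be written down as plain subject–predicate–object triples. The Python sketch below is illustrative (the `TRIPLES` list and the `subclasses_of` helper are stand-ins, not the actual ontology serialization):

```python
# RDF-style (subject, predicate, object) triples mirroring the
# argumentation-module diagram; names follow the diagram, the
# triple list itself is an illustrative stand-in.
TRIPLES = [
    ("Wrong",            "subClassOf",            "Issue"),
    ("Incomprehensible", "subClassOf",            "Issue"),
    ("ProvideExample",   "subClassOf",            "Idea"),
    ("Idea",             "proposes_solution_for", "Issue"),
    ("Decision",         "decides",               "Issue"),
    ("Decision",         "decides",               "Idea"),
    ("Issue",            "resolves_into",         "Idea"),
]

def subclasses_of(cls):
    """All direct subclasses of a class, queried from the triple list."""
    return sorted(s for s, p, o in TRIPLES
                  if p == "subClassOf" and o == cls)

print(subclasses_of("Issue"))  # ['Incomprehensible', 'Wrong']
```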
Problem Solving in a Semantic Wiki
1 Is there an unresolved issue that is considered legitimate?
2 If solutions have been suggested, let the highest-ranked suggestion win.
(Lange 2011)
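The two-step resolution rule above can be sketched in a few lines of Python; the `Issue`/`Idea` classes and their vote counts below are hypothetical stand-ins for the wiki's actual argumentation data:

```python
from dataclasses import dataclass, field

@dataclass
class Idea:
    label: str
    support: int = 0          # posts supporting this suggested solution

@dataclass
class Issue:
    description: str
    agrees: int = 0           # posts agreeing the issue is legitimate
    disagrees: int = 0
    ideas: list = field(default_factory=list)

    def legitimate(self):
        """Step 1: an issue counts as legitimate if agreement outweighs
        disagreement."""
        return self.agrees > self.disagrees

    def winning_idea(self):
        """Step 2: among suggested solutions, the highest-ranked wins."""
        if self.legitimate() and self.ideas:
            return max(self.ideas, key=lambda i: i.support)
        return None

issue = Issue("Example is incomprehensible", agrees=3, disagrees=1,
              ideas=[Idea("provide simpler example", support=2),
                     Idea("delete example", support=1)])
print(issue.winning_idea().label)  # provide simpler example
```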
Usability Evaluation of the Wiki Prototype
Is the system usable?
learnable?
effective?
useful?
satisfying to use?
Can we effectively support maintenance workflows?
Quick local fixing of minor errors (in text, formalization, or presentation)
Peer review and discussion of problems
Feedback Statements from Test Users

[Chart: counts of coded feedback statements, among them successful action (93), understood concept (95), not understood concept (18), unexpected bug (18); further categories include positive statement, negative statement, dissatisfaction, confusion/uncertainty, expectation not met, and not understood what to do (counts between 36 and 61).]

Understanding only seems marginal, but had a high impact on successfully accomplishing tasks! (Lange 2011)
Results, Interpretation, and Consequences
Particular results about argumentation model and user interface:
generally successful:
1 associate a new discussion post with the knowledge item in question
2 model covers the most commonly used argumentation primitives
3 user interface informs about available primitives
4 and effectively supports the user in choosing the right one
users missed a ‘question’ post type
users requested better documentation of the available argumentation primitives and how to use them
Conclusion
Make the user interface semantically transparent (for learnability)
Self-explaining Publications
and Assistive Services
(David, Kohlhase, Lange, Rabe, Zhiltsov and Zholudev 2010)
Planetary: e-Math on the Web 3.0
Planetary: math-enabled social semantic web information portal
(http://trac.mathweb.org/planetary/; Kohlhase, Corneli, David,
Ginev, Jucovschi, Kohlhase, Lange, Matican, Mirea and
Zholudev 2011, Elsevier Executable Paper Challenge finalist)
based on the Drupal 7 Content Management System
contributing legacy mathematical knowledge collections to the Web of Data:
PlanetMath encyclopedia
arXiv.org pre-prints
Self-explaining UIs with System Ontologies
System ontologies in Planetary: structural ontologies, workflow
ontologies, argumentation ontology
Customizable in the environment (= mathematical documents)
‘The ontology is the API’
(needs rich ontology language, e.g. DOL)
Self-explaining user interface via ontology documentation
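The idea of a self-explaining user interface driven by ontology documentation can be sketched as a lookup from UI element classes to their documentation strings; `ONTOLOGY_DOC`, `tooltip`, and the texts below are illustrative, not Planetary's actual vocabulary:

```python
# Sketch: a self-explaining UI pulls its help texts from the ontology's
# documentation instead of hard-coding them, so documenting the ontology
# documents the interface. The entries below are made-up examples.
ONTOLOGY_DOC = {
    "Issue":    "A reported problem with a knowledge item.",
    "Idea":     "A suggested solution for an open issue.",
    "Decision": "Resolution of an issue into an accepted idea.",
}

def tooltip(ui_element_class):
    """Return the ontology documentation string for a UI element."""
    return ONTOLOGY_DOC.get(ui_element_class, "No documentation available.")

print(tooltip("Idea"))  # A suggested solution for an open issue.
```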
[Diagram: a concrete discussion thread mapped onto the ontologies. The physical structure (SIOC Core: has_container, has_reply) links posts 1–7 on a discussion page; the argumentative structure (SIOC Argumentation module) types them: post1 is an Issue (UnclearWh.Useful) on an example, post2 an Elaboration, post3 a Position agreeing with it, post4 an Idea (ProvideExample) proposing a solution for the issue, post5 an Evaluation, post6 a supporting Position, and post7 a Decision. The knowledge items on the wiki pages are typed by the OMDoc ontology, as in the earlier argumentation-module diagram.]
References I
Berners-Lee, Tim, James Hendler and Ora Lassila (2001). ‘The
Semantic Web. A new form of Web content that is meaningful to
computers will unleash a revolution of new possibilities’. In:
Scientific American 284.
David, Catalin, Michael Kohlhase, Christoph Lange, Florian Rabe,
Nikita Zhiltsov and Vyacheslav Zholudev (2010). ‘Publishing Math
Lecture Notes as Linked Data’. In: The Semantic Web: Research and
Applications (Part II). 7th Extended Semantic Web Conference (ESWC)
(Hersonissos, Crete, Greece, 30th May–3rd June 2010). Ed. by
Lora Aroyo, Grigoris Antoniou, Eero Hyvönen, Annette ten Teije,
Heiner Stuckenschmidt, Liliana Cabral and Tania Tudorache. Lecture
Notes in Computer Science 6089. Springer Verlag, pp. 370–375.
arXiv:1004.3390v1 [cs.DL].
References II
Kohlhase, Michael, Joe Corneli, Catalin David, Deyan Ginev,
Constantin Jucovschi, Andrea Kohlhase, Christoph Lange,
Bogdan Matican, Stefan Mirea and Vyacheslav Zholudev (2011). ‘The
Planetary System: Web 3.0 & Active Documents for STEM’. In:
Procedia Computer Science 4: Special issue: Proceedings of the
International Conference on Computational Science (ICCS). Ed. by
Mitsuhisa Sato, Satoshi Matsuoka, Peter M. Sloot, G. Dick van Albada
and Jack Dongarra. Finalist at the Executable Papers Challenge,
pp. 598–607. doi: 10.1016/j.procs.2011.04.063. url:
https://svn.mathweb.org/repos/planetary/doc/epc11/paper.pdf.
Lakatos, Imre (1976). Proofs and Refutations. The Logic of
Mathematical Discovery. Cambridge University Press.
References III
Lange, Christoph (2011). Enabling Collaboration on Semiformal
Mathematical Knowledge by Semantic Web Integration. Studies on
the Semantic Web 11. Heidelberg and Amsterdam: AKA Verlag and
IOS Press. isbn: 978-1-60750-840-3.
Lange, Christoph, Tuukka Hastrup and Stéphane Corlosquet (Oct.
2008). ‘Arguing on Issues with Mathematical Knowledge Items in a
Semantic Wiki’. In: Wissens- und Erfahrungsmanagement LWA
(Lernen, Wissensentdeckung und Adaptivität) Conference Proceedings.
Ed. by Joachim Baumeister and Martin Atzmüller. Vol. 448.