Thomas Kurz gives a presentation on semantic media management using Apache Marmotta. He plans to create a new Marmotta module that supports storing images, annotating image fragments, and retrieving images and fragments based on annotations. This builds on the Linked Data Platform, Media Fragment URIs, the Open Annotation model, and SPARQL-MM. The goal is a Marmotta module and webapp that extends LDP to image fragments and provides a UI for image annotation and retrieval.
Linked Media Management with Apache Marmotta by Thomas Kurz
The integration of multimedia assets on the web with structured (linked) data promises further opportunities for digital marketplaces regarding findability and recommendations. The new W3C standards for Media Annotation, Media Fragment URIs and the Linked Data Platform build a stable base for this purpose. Thomas Kurz shows how to use the Linked Data Platform Apache Marmotta as a backend for the storage and retrieval of Linked Media. In his talk he shows extensions for a seamless integration of media streaming for Non-RDF resources and spatio-regional media fragment retrieval with SPARQL.
Enabling access to Linked Media with SPARQL-MM by Thomas Kurz
The amount of audio, video and image data on the web is growing immensely, which leads to data management problems caused by the opaque character of multimedia. Therefore the interlinking of semantic concepts and media data, with the aim of bridging the gap between the document web and the Web of Data, has become common practice and is known as Linked Media. However, the value of connecting media to its semantic metadata is limited by a lack of access methods specialized for media assets and fragments, as well as by the variety of description models in use. With SPARQL-MM we extend SPARQL, the standard query language for the Semantic Web, with media-specific concepts and functions to unify access to Linked Media. In this paper we describe the motivation for SPARQL-MM, present the state of the art in Linked Media description formats and multimedia query languages, and outline the specification and implementation of the SPARQL-MM function set.
Hierarchical Cluster Engine (HCE) project
The main idea of this new project is to implement a solution that can be used to: construct a custom network mesh or distributed network cluster structure with several relation types between nodes; formalize the data flow processing that goes from an upper-level central source point down to the nodes and back; formalize the handling of management requests from multiple source points; natively support reducing of results from multiple nodes (aggregation, duplicate elimination, sorting and so on); internally support a powerful full-text search engine and data storage; provide transaction-less and transactional request processing; support flexible run-time changes of the cluster infrastructure; and offer many language bindings for client-side integration APIs in one product built on C++…
Linked Data Publishing with Drupal (SWIB13 workshop) by Joachim Neubert
Publishing Linked Open Data in a user-appealing way is still a challenge: Generic solutions to convert arbitrary RDF structures to HTML out-of-the-box are available, but leave users perplexed. Custom-built web applications to enrich web pages with semantic tags "under the hood" require high efforts in programming. Given this dilemma, content management systems (CMS) could be a natural enhancement point for data on the web. In the case of Drupal, one of the most popular CMS nowadays, Semantic Web enrichment is provided as part of the CMS core. In a simple declarative approach, classes and properties from arbitrary vocabularies can be added to Drupal content types and fields, and are turned into Linked Data on the web pages automagically. The embedded RDFa marked-up data can be easily extracted by other applications. This makes the pages part of the emerging Web of Data, and in the same course helps discoverability with the major search engines.
In the workshop, you will learn how to make use of the built-in Drupal 7 features to produce RDFa enriched pages. You will build new content types, add custom fields and enhance them with RDF markup from mixed vocabularies. The gory details of providing LOD-compatible "cool" URIs will not be skipped, and current limitations of RDF support in Drupal will be explained. Exposing the data in a REST-ful application programming interface or as a SPARQL endpoint are additional options provided by Drupal modules. The workshop will also introduce modules such as Web Taxonomy, which allows linking to thesauri or authority files on the web via simple JSON-based autocomplete lookup. Finally, we will touch the upcoming Drupal 8 version. (Workshop announcement)
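The RDFa-enriched pages described above look roughly like the following; the content type, schema.org terms and values here are invented for illustration, not taken from the workshop:

```html
<!-- Illustrative RDFa of the kind Drupal 7 emits for a mapped content type;
     the vocabulary mapping and values are hypothetical examples -->
<div about="/node/1" typeof="schema:Book">
  <h2 property="schema:name">An Example Title</h2>
  by <span property="schema:author">Jane Doe</span>
</div>
```

A crawler or RDFa distiller can extract the schema:name and schema:author triples directly from such markup, without any separate data endpoint.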
Leaving Blackboxes Behind: benefits and challenges of running in-house developed e-resource management and discovery systems. Slides of the presentation at ELAG 2016 in Copenhagen by Evelyn Weiser and Annika Domin, Leipzig University Library, Germany
Linking Media and Data using Apache Marmotta (LIME workshop keynote) by LinkedTV
Sebastian Schaffert is CTO and co-founder of RedLink GmbH. He is also currently working as head of the "Knowledge and Media Technologies" department at Salzburg Research and occasionally as a lecturer at the University of Applied Sciences (FH) Salzburg. He received his diploma in Computer Science in 2001 and his PhD in 2004, both at the University of Munich, Germany. His current research focus is Semantic Web technologies, especially Linked Data, Semantic Search, Information Extraction, and Multimedia Information Systems.
Keynote at LIME workshop at ESWC 2014.
The W3C Linked Data Platform (LDP) specification describes a set of best practices and a simple approach for a read-write Linked Data architecture, based on HTTP access to web resources that describe their state using the RDF data model. This presentation provides a set of simple examples that illustrate how an LDP client can interact with an LDP server in the context of a read-write Linked Data application, i.e. how to use the LDP protocol for retrieving, updating, creating and deleting Linked Data resources.
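As a sketch of such an interaction (the host, container path and Turtle body below are invented for illustration; the header usage follows the LDP specification):

```http
# Create an RDF resource in an LDP container
POST /container/ HTTP/1.1
Host: example.org
Slug: resource1
Content-Type: text/turtle

<> a <http://example.org/ns#Thing> .

# The server answers with the URI of the new resource
HTTP/1.1 201 Created
Location: http://example.org/container/resource1

# Retrieve it again as Turtle
GET /container/resource1 HTTP/1.1
Host: example.org
Accept: text/turtle
```

Updating (PUT) and deleting (DELETE) follow the same pattern of plain HTTP verbs against the resource URI.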
Why do they call it Linked Data when they want to say...? by Oscar Corcho
The four Linked Data publishing principles established in 2006 seem quite clear and well understood by people inside and outside the core Linked Data and Semantic Web community. However, both when discussing the merits of Linked Data with outsiders and when reviewing papers for the COLD workshop series, I often find myself going back to the principles to check whether some approach to Web data publication and consumption is actually Linked Data or not. In this talk we will review some of the current approaches that we have for publishing data on the Web, and we will reflect on why it is sometimes so difficult to reach an agreement on what we understand by Linked Data. Furthermore, we will take the opportunity to describe yet another approach that we have been working on recently at the Center for Open Middleware, a joint technology center between Banco Santander and Universidad Politécnica de Madrid, to facilitate Linked Data consumption.
A set of slides that provides a high-level overview of the W3C Linked Data Platform specification presented at the 4th Linked Data in Architecture and Construction Workshop.
For a more detailed and technical version of the presentation, please refer to:
http://www.slideshare.net/nandana/learning-w3c-linked-data-platform-with-examples
LDAC 2016 programme
http://smartcity.linkeddata.es/LDAC2016/#programme
Technologie Proche: Imagining the Archival Systems of Tomorrow With the Tools... by Artefactual Systems - AtoM
These slides accompanied a June 4th, 2016 presentation made by Dan Gillean of Artefactual Systems at the Association of Canadian Archivists' 2016 Conference in Montreal, QC, Canada.
This presentation aims to examine several existing or emerging computing paradigms, with specific examples, to imagine how they might inform next-generation archival systems to support digital preservation, description, and access. Topics covered include:
- Distributed Version Control and git
- P2P architectures and the BitTorrent protocol
- Linked Open Data and RDF
- Blockchain technology
The session is part of an attempt by the ACA to create interactive "working sessions" at its conferences. Accompanying notes can be found at: http://bit.ly/tech-Proche
Participants were also asked to use the Twitter hashtag of #techProche for online interaction during the session.
Knowledge Graph Conference 2021
Semantic MediaWiki (SMW), which was introduced as early as in 2006, has since gone on to establish a vital community and is currently one of the few semantic wiki solutions still in existence. SMW is an extension of MediaWiki, the software used for Wikipedia and many other projects, resulting in a largely sustainable codebase and ecosystem. There are many reasons why SMW should not be overlooked by the knowledge graph community:
SMW is capable of directly connecting to several triple stores (Blazegraph, Virtuoso, Jena), which is why it can be considered an interface for entering data into knowledge graphs.
SMW can use its internal relational database (or ElasticSearch), enabling users to build simple knowledge graphs without in-depth knowledge about triple stores.
SMW has the built-in capability of exporting to RDF including building complete RDF data dumps that can be imported into existing knowledge graphs.
SMW has the capability to reuse existing ontologies by importing vocabularies and providing unique identifiers.
The explicit semantic content of Semantic MediaWiki is formally interpreted in the OWL DL ontology language and is made available in XML/RDF format.
A simple internal query language is available to query the internal knowledge graph from within SMW, without the requirement of having a SPARQL endpoint. However, extensions for implementing SPARQL in SMW are available as well.
SMW has the built-in capability to enable data curation for experienced users responsible for the ontology, as well as simple form-based input for regular users, who can easily populate the KG with data.
There are several approaches to visualizing data in SMW, thus making the knowledge graph visible and interactive.
Implementing custom ontologies in SMW is quite easy; everything is built in wiki pages (e.g. definitions of properties and datatypes, forms and templates).
SMW has low barriers to implementation as it is a clean extension to MediaWiki, which is PHP software running on regular web hosts.
In the talk, I will give an overview of the mentioned aspects and highlight some main differences to Wikibase – which is an alternative approach for managing structured data in MediaWiki – as well as the current limitations of SMW.
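To illustrate the internal query language mentioned in the list above, an inline #ask query might look like this (the category and property names are invented examples, not taken from the talk):

```wikitext
{{#ask: [[Category:City]] [[Located in::Germany]]
 |?Population
 |sort=Population
 |order=descending
}}
```

Such a query runs against SMW's internal store and renders its result table directly in the wiki page, with no SPARQL endpoint required.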
Open source project failure often stems from not setting clear objectives or having a shared vision from the start. That said, there are many success stories, including two well-known statistical examples: Demetra and the Eurostat SDMX tools (SDMX-RI). However, in all these examples there was at first a founding organisation/entity that created the right environment for a successful path into a new paradigm. In the context of my presentation, this is the Statistical Information System Collaboration Community (SIS-CC, http://siscc.oecd.org).
Presented at the International Marketing and Output DataBase Conference, Gozd Martuljek, September 18 - 22, 2016.
Digital Manuscripts Toolkit, using IIIF and JavaScript, by Monica Messaggi Kaya (Future Insights)
FOWA London 2015
Monica is part of the DMT project at the Bodleian Libraries (University of Oxford), which aims to create a toolkit using the IIIF standard (http://iiif.io) for images, a server solution (to store images of manuscripts and metadata), and a client solution in JavaScript: an authoring tool for editing the manuscript manifest and its metadata. She works specifically on the authoring tool and on the challenges that different types of manifests present for the developer. You will get a glimpse of the whole picture before she taps into the libraries used, choices made, collaboration experiences and lessons learned so far.
Presentation by Stefanie Gehrke at the workshop "TEI and Neighbouring Standards" at the DiXiT Convention Week 2015 (Huygens ING, The Hague, 15 September 2015).
A presentation by Gordon Dunsire.
Delivered at the Cataloguing and Indexing Group Scotland (CIGS) Linked Open Data (LOD) Conference which took place Fri 21 September 2012 at the Edinburgh Centre for Carbon Innovation.
Nelson Piedra, Janneth Chicaiza and Jorge López, Universidad Técnica Particular de Loja; Edmundo Tovar, Universidad Politécnica de Madrid; and Oscar Martínez, Universitas Miguel Hernández
Explore the advantages of using linked data with OERs.
2. Thomas Kurz
@tkurz
http://de.linkedin.com/pub/thomas-kurz/74/357/b52/
http://www.salzburgresearch.at/person/kurz-thomas/
Who Am I
Committer at the Apache Software Foundation
also…
Lecturer at Fachhochschule Salzburg
Senior Researcher at Salzburg Research
Partner and Solution Architect at Redlink GmbH
3. Linked Media
is a "Web scale layer of structured, interlinked media annotations (...) inspired by the Linked Data movement for making structured, interlinked descriptions of resources better available online."
Lyndon J. B. Nixon. The importance of linked media to the future web: LIME 2013 keynote talk - a proposal for the linked media research agenda. WWW Companion Volume, pages 455-456. International World Wide Web Conferences Steering Committee / ACM, 2013.
4. Plan for the next hour
Create a new module for Apache Marmotta which supports a simple Semantic Media Management workflow, including:
● Storing images
● Annotating image fragments
● Retrieving images and image fragments based on annotations
5. Technologies
● Linked Data Platform
● Media Fragment URIs
● Open Annotation Model
● SPARQL-MM
6. Media Fragment URIs
"... a media-format independent, standard means of addressing media fragments on the Web using Uniform Resource Identifiers."
[W3C Recommendation: Media Fragments URI 1.0 (basic)]
http://test.org/video.mpg?t=10,20&xywh=10,20,30,40
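A query-style fragment like the one above can be taken apart with a few lines of standard-library Python. This is a minimal sketch that assumes both endpoints of the t interval are given (the spec also allows open-ended forms such as t=10,):

```python
from urllib.parse import urlsplit, parse_qs

def parse_media_fragment(uri):
    """Extract the temporal (t) and spatial (xywh) parts of a
    query-style Media Fragment URI, as used in the slide above."""
    params = parse_qs(urlsplit(uri).query)
    fragment = {}
    if "t" in params:
        # e.g. "10,20" -> start and end in seconds
        start, end = params["t"][0].split(",")
        fragment["t"] = (float(start), float(end))
    if "xywh" in params:
        # e.g. "10,20,30,40" -> x, y, width, height in pixels
        fragment["xywh"] = tuple(int(v) for v in params["xywh"][0].split(","))
    return fragment

print(parse_media_fragment("http://test.org/video.mpg?t=10,20&xywh=10,20,30,40"))
# → {'t': (10.0, 20.0), 'xywh': (10, 20, 30, 40)}
```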
7. Open Annotation Model
The Open Annotation Core Data Model specifies an interoperable framework for creating associations between related resources (...).
http://www.openannotation.org/spec/core/
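Combining OA with a media fragment URI, an annotation targeting an image region can be sketched in Turtle like this (the resource URIs and the body are invented for illustration; oa:hasBody and oa:hasTarget are the model's core properties):

```turtle
@prefix oa: <http://www.w3.org/ns/oa#> .

# Hypothetical annotation linking a concept to an image region
<http://example.org/anno1> a oa:Annotation ;
    oa:hasBody <http://example.org/concepts/cat> ;
    oa:hasTarget <http://example.org/image.jpg#xywh=10,20,30,40> .
```

The target is simply the image URI plus a media fragment, which is what makes region-level annotation and retrieval possible.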
8. SPARQL-MM
Multimedia Extension for SPARQL
https://github.com/tkurz/sparql-mm
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX mm: <http://linkedmultimedia.org/sparql-mm/functions#>
SELECT ?f1 ?f2 (mm:boundingBox(?f1,?f2) AS ?box) WHERE {
?f1 rdfs:label "a".
?f2 rdfs:label "b".
FILTER mm:rightBeside(?f1,?f2)
}
9. Eat your own dog food
● Post images to LDP
● Create annotations for Image Fragments with LDP and OA
● Retrieve and display (aggregated) Image Fragments with SPARQL-MM
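Sketched as HTTP against a Marmotta LDP endpoint, the first two steps above might look like this (the host, paths, Slug value and annotation body are hypothetical):

```http
# Step 1: store the image itself as an LDP Non-RDF Source
POST /ldp/images/ HTTP/1.1
Host: example.org
Slug: demo
Content-Type: image/jpeg

<binary image data>

# Step 2: annotate a region of it, combining OA with a media fragment URI
POST /ldp/annotations/ HTTP/1.1
Host: example.org
Content-Type: text/turtle

@prefix oa: <http://www.w3.org/ns/oa#> .
<> a oa:Annotation ;
   oa:hasBody <http://example.org/concepts/cat> ;
   oa:hasTarget <http://example.org/ldp/images/demo#xywh=10,20,30,40> .
```

Step 3 is then a SPARQL-MM query of the kind shown on slide 8, run against the triple store behind the LDP container.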
10. What we're gonna do
● Create a Marmotta Module which extends LDP to Image Fragments
● Create a Marmotta Webapp which provides a nice UI for Image Annotation and Retrieval