The document introduces Apache Marmotta, an open source linked data platform. It provides a linked data server, SPARQL endpoint, and libraries for building linked data applications. Marmotta allows users to easily publish and query RDF data on the web. It also includes features for multimedia management such as semantic annotation of media and extensions for querying over media fragments.
Enabling access to Linked Media with SPARQL-MM (Thomas Kurz)
The amount of audio, video and image data on the web is growing immensely, which leads to data management problems rooted in the opaque character of multimedia. The interlinking of semantic concepts and media data, with the aim of bridging the gap between the document web and the Web of Data, has therefore become common practice and is known as Linked Media. However, the value of connecting media to its semantic metadata is limited by the lack of access methods specialized for media assets and fragments, as well as by the variety of description models in use. With SPARQL-MM we extend SPARQL, the standard query language for the Semantic Web, with media-specific concepts and functions to unify access to Linked Media. In this paper we describe the motivation for SPARQL-MM, present the state of the art in Linked Media description formats and multimedia query languages, and outline the specification and implementation of the SPARQL-MM function set.
Drupal and Apache Stanbol. What if you could reliably do autotagging? (Gabriel Dragomir)
My presentation on Drupal and Apache Stanbol integration at DrupalCamp Arad 2012 - Romania. Want to talk about this? Find me at http://webikon.com, Twitter: @gabidrg.
Adventures in Linked Data Land (presentation by Richard Light) (jottevanger)
"Adventures in Linked Data Land: bringing RDF to the Wordsworth Trust" is a paper given by Richard Light (http://uk.linkedin.com/pub/richard-light/a/221/ba5) to a Linked Data meeting run by the Collections Trust in February 2010. He runs through the basics of LD, how it relates to cultural heritage, and some of his experiments with it, specifically with the data of the Wordsworth Trust, finally listing a series of challenges that face museums in trying to get on board the Linked Data bus.
Slides from my workshop at Open Repositories 2016 about DSpace's Linked Data support. The slides include a short introduction into the Semantic Web and Linked Data, the main ideas behind the Linked Data support of DSpace, information on how to configure this feature and some examples about how to query DSpace installations for Linked Data.
Hierarchical Cluster Engine (HCE) project
The main idea of this new project is to implement a solution that can be used to: construct a custom network mesh or distributed cluster structure with several relation types between nodes; formalize data flow processing from an upper-level central source node down to leaf nodes and back; formalize the handling of management requests from multiple source points; natively support reduction of results from multiple nodes (aggregation, duplicate elimination, sorting, and so on); internally support a powerful full-text search engine and data storage; provide both transactionless and transactional request processing; support flexible run-time changes of the cluster infrastructure; and offer many language bindings for client-side integration APIs in one product built in C++.
Repositories are systems to safely store and publish digital objects and their descriptive metadata. Repositories mainly serve their data through web interfaces primarily oriented towards human consumption. They either hide their data behind non-generic interfaces or do not publish them at all in a way a computer can process easily. At the same time, the data stored in repositories are particularly suited for use in the Semantic Web, as metadata are already available. They do not have to be generated or entered manually for publication as Linked Data. In my talk I will present a concept of how metadata and digital objects stored in repositories can be woven into the Linked (Open) Data Cloud, and which characteristics of repositories have to be considered while doing so. One problem it targets is the use of existing metadata to present Linked Data. The concept can be applied to almost every repository software. At the end of my talk I will present an implementation for DSpace, one of the most widely used repository software solutions. With this implementation, every institution using DSpace becomes able to export their repository content as Linked Data.
LDP4j: A framework for the development of interoperable read-write Linked Da... (Nandana Mihindukulasooriya)
This presentation introduces LDP4j, an open source Java-based framework for the development of read-write Linked Data applications based on the W3C Linked Data Platform 1.0 (LDP) specification and available under the Apache 2.0 license. This was presented at the ISWC 2014 Developer Workshop.
http://www.ldp4j.org/
Introduction to Apache Any23. Any23 is a library, a Web Service and a Command Line Tool written in Java, that extracts structured RDF data from a variety of Web documents and markup formats.
Any23 is an Apache Software Foundation top level project.
My presentation on RDFauthor at EKAW2010, Lisbon. For more information on RDFauthor visit http://aksw.org/Projects/RDFauthor; for the code visit http://code.google.com/p/rdfauthor/.
Stream your Operational Data with Apache Spark & Kafka into Hadoop using Couc... (Data Con LA)
Abstract:-
Tracking user events as they happen can challenge anyone providing real-time user interaction. It can demand both huge scale and a lot of processing to support dynamic adjustment of targeted products and services. As an operational data store, Couchbase data services are capable of processing tens of millions of updates a day. By streaming through systems such as Apache Spark and Kafka into Hadoop, information about these key events can be turned into deeper knowledge. We will review Lambda architectures deployed at sites like PayPal, LivePerson and LinkedIn that leverage a Couchbase data pipeline.
Bio:-
Justin Michaels. With over 20 years of experience deploying mission-critical systems, Justin's background covers capacity planning, architecture and industry verticals. He brings his passion for architecting, implementing and improving Couchbase to the community as a Solution Architect. His expertise spans both conventional application platforms and distributed data management systems. He regularly engages with existing and new Couchbase customers in performance reviews, architecture planning and best-practice guidance.
Usage of Linked Data: Introduction and Application Scenarios (EUCLID project)
This presentation introduces the main principles of Linked Data, the underlying technologies and background standards. It provides basic knowledge for how data can be published over the Web, how it can be queried, and what are the possible use cases and benefits. As an example, we use the development of a music portal (based on the MusicBrainz dataset), which facilitates access to a wide range of information and multimedia resources relating to music.
http://bit.ly/1BTaXZP – As organizations look for even faster ways to derive value from big data, they are turning to Apache Spark, an in-memory processing framework that offers lightning-fast big data analytics, providing speed, developer productivity, and real-time processing advantages. The Spark software stack includes a core data-processing engine, an interface for interactive querying, Spark Streaming for streaming data analysis, and growing libraries for machine learning and graph analysis. Spark is quickly establishing itself as a leading environment for fast, iterative in-memory and streaming analysis. This talk will give an introduction to the Spark stack, explain how Spark delivers lightning-fast results, and show how it complements Apache Hadoop. By the end of the session, you'll come away with a deeper understanding of how you can unlock deeper insights from your data, faster, with Spark.
Apache Kafka - Scalable Message-Processing and more! (Guido Schmutz)
Independent of the source of data, the integration of event streams into an enterprise architecture is becoming more and more important in the world of sensors, social media streams and the Internet of Things. Events have to be accepted quickly and reliably, and they have to be distributed and analysed, often with many consumers or systems interested in all or part of the events. How can we make sure that all these events are accepted and forwarded in an efficient and reliable way? This is where Apache Kafka comes into play: a distributed, highly scalable messaging broker built for exchanging huge amounts of messages between a source and a target.
This session will start with an introduction to Apache Kafka and present the role of Apache Kafka in a modern data / information architecture and the advantages it brings to the table. Additionally, the Kafka ecosystem will be covered, as well as the integration of Kafka into the Oracle stack, with products such as GoldenGate, Service Bus and Oracle Stream Analytics all being able to act as a Kafka consumer or producer.
Strata NYC 2015 - Supercharging R with Apache Spark (Databricks)
R is the favorite language of many data scientists. In addition to a language and runtime, R is a rich ecosystem of libraries for a wide range of use cases, from statistical inference to data visualization. However, handling large or distributed data with R is challenging, so most data scientists use R alongside other frameworks and languages. In this mode most of the friction is at the interface of R and the other systems. For example, when data is sampled by a big data platform, results need to be transferred to and imported into R as native data structures. In this talk we show an alternative, and complementary, approach to SparkR for integrating Spark and R.
Since SparkR was released in version 1.4 of Apache Spark, distributed data remains inside the JVM instead of in individual R processes running on workers. This approach is more convenient when dealing with external data sources such as Cassandra, Hive, and Spark's own distributed DataFrames. We show two specific techniques to remove the data-transfer friction between R and the JVM: collecting Spark DataFrames as R data frames, and user-space filesystems. We think this model complements and improves the day-to-day workload of many data scientists who use R. Spark's interactive query processing, especially with in-memory datasets, closely matches the R interactive session model. When integrated, Spark and R can provide state-of-the-art tools for the entire end-to-end data science pipeline. We will show how such a pipeline works in real-world use cases in a live demo at the end of the talk.
- A brief introduction to Spark Core
- Introduction to Spark Streaming
- A Demo of Streaming by evaluating the top hashtags being used
- Introduction to Spark MLlib
- A Demo of MLlib by building a simple movie recommendation engine
Overview of Apache Flink: The 4G of Big Data Analytics Frameworks (Slim Baltagi)
Slides of my talk at the Hadoop Summit Europe in Dublin, Ireland on April 13th, 2016. The talk introduces Apache Flink as both a multi-purpose Big Data analytics framework and real-world streaming analytics framework. It is focusing on Flink's key differentiators and suitability for streaming analytics use cases. It also shows how Flink enables novel use cases such as distributed CEP (Complex Event Processing) and querying the state by behaving like a key value data store.
This is part 2 of the ISWC 2009 tutorial on the GoodRelations ontology and RDFa for e-commerce on the Web of Linked Data.
See also
http://www.ebusiness-unibw.org/wiki/Web_of_Data_for_E-Commerce_Tutorial_ISWC2009
Linked Media Management with Apache Marmotta
1. Apache Marmotta
for
Multimedia Management
Jakob Frank, Thomas Kurz
http://marmotta.apache.org/
2. Who are we?
Jakob Frank
• Researcher at Salzburg Research
• Solution Architect at Redlink GmbH
• ASF Committer of Marmotta
Thomas Kurz
• Researcher at Salzburg Research
• Solution Architect at Redlink GmbH
• ASF Committer of Marmotta
2014-11-19 Apache Marmotta
6. Outline
Part I : Apache Marmotta
● Basics of the Semantic Web
● Apache Marmotta Linked Data Server
● Marmotta Modules and Libraries
Part II: Linked Media
● What is Linked Media
● Semantic Media Annotation
● Extending backend to Media Storage and Retrieval
7. Why do we need a Semantic Web?
Slide from James Hendler (Univ. Maryland)
8. What is RDF?
• RDF = Resource Description Framework
• formal language for describing web resources and
relationships in between
• based on a directed labeled graph, represented as
triple model (Subject - Predicate - Object)
• several syntaxes (RDF/XML, Turtle, ...)
• RDFS = RDF Schema
• formal language for describing possible instances of
the graph (classes and predicates)
• both specified by W3C (~ 1998)
9. What is SPARQL?
• SPARQL is an RDF Query Language specified by W3C
(v1.1, March 2013)
• SQL-like syntax
PREFIX uni: <http://example.org/uni/>
SELECT ?name
FROM <http://example.org/personal>
WHERE {
?s uni:name ?name.
?s a uni:Lecturer.
}
10. What is Linked Data?
1. Use URIs to denote things.
2. Use HTTP URIs so that these things can be referred to
and looked up ("dereferenced") by people and user
agents.
3. Provide useful information about the thing when its URI
is dereferenced, leveraging standards such as RDF,
SPARQL.
4. Include links to other related things (using their URIs)
when publishing data on the Web.
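These four principles can be illustrated with a small Turtle snippet (the resource names below are invented for illustration, not taken from the slides):

```turtle
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

# principles 1+2: an HTTP URI denotes the thing (a person, not a document)
<http://example.org/people/alice>
    a foaf:Person ;                                # principle 3: useful RDF when dereferenced
    foaf:name "Alice" ;
    foaf:knows <http://example.org/people/bob> .   # principle 4: link to a related thing
```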
11. What is Apache Marmotta?
● Linked Data Server
full Linked Data stack
incl. Content Negotiation and LDP
● SPARQL Server
SPARQL 1.1 query, update, protocol
● Linked Data Development Environment
collection of modules and libraries for building Linked Data applications
● Community of
Open Source Linked Data Developers
12. Linked Data Server
• easily set up to provide your (RDF-)data as Linked Data
on the Web
• human- and machine-readable read-write data access
based on HTTP content negotiation
• Query and interlink your data using SPARQL and
LDPath
• reference implementation of the Linked Data Platform
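Content negotiation is what lets the same resource URI serve both humans and machines. As a rough sketch (the format table and fallback below are assumptions for illustration, not Marmotta's actual registry), a server inspects the Accept header and picks an RDF serialization or the HTML view:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch of HTTP content negotiation for a Linked Data server:
// map the Accept header to an RDF serialization, falling back to HTML.
public class ConNeg {
    static final Map<String, String> FORMATS = new LinkedHashMap<>();
    static {
        FORMATS.put("text/turtle", "Turtle");
        FORMATS.put("application/rdf+xml", "RDF/XML");
        FORMATS.put("application/ld+json", "JSON-LD");
        FORMATS.put("text/html", "HTML");
    }

    static String negotiate(String accept) {
        for (String type : accept.split(",")) {
            String mime = type.split(";")[0].trim();   // ignore q-values for simplicity
            if (FORMATS.containsKey(mime)) return FORMATS.get(mime);
        }
        return "HTML";   // human-readable fallback
    }

    public static void main(String[] args) {
        System.out.println(negotiate("text/turtle;q=0.9, text/html"));
        // prints: Turtle
    }
}
```

A real implementation would also honor q-value preferences and wildcards; the point is only that one URI yields different representations for different clients.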
13. SPARQL Server
● full support of SPARQL 1.1 through HTTP web
services
● SPARQL 1.1 query and update endpoints
● implements the SPARQL 1.1 protocol
(supports any standard SPARQL client)
● fast native translation of SPARQL to SQL in the KiWi
triple store
● lightweight Squebi SPARQL explorer UI
14. Linked Data Development
● modular server architecture
○ combine exactly those features you need
● collection of independent libraries for common Linked
Data problems:
○ access Linked Data resources (and even some that are not Linked
Data): LDClient
○ simple and intuitive query language for Linked Data: LDPath
○ Sesame Triplestore based on a SQL database: KiWi Triplestore
optional: Versioning & Reasoning
15. Marmotta Community
● discuss with people interested in getting-things-done in
the Linked Data world
● build applications that are useful without re-implementing
the whole stack
● thorough software engineering process under the roof
of the Apache Software Foundation
● Join us!
○ users@marmotta.apache.org
○ dev@marmotta.apache.org
16. Apache Marmotta Platform
• JavaEE web application
• service oriented architecture using CDI
(J2EE)
• REST web services using JAX-RS
(RestEasy)
• CDI services found in the classpath are
automatically loaded
18. Marmotta Core
• Core functionalities:
• Linked Data access
• RDF import/export
• Admin/Configuration UI
• Platform glue code
• Service and Dependency injection
• Triple Store
• System configuration
• Logging
19. Marmotta Backends
Choose the one that fits your needs best:
• KiWi (Marmotta)
• based on relational database (PostgreSQL, MySQL, H2)
• highly scalable
• Sesame Native
• BigData
• based on BigData clustered triple store
• Titan Graph DB
• backed by HBase, Cassandra or BerkeleyDB
20. Marmotta SPARQL
• SPARQL 1.1 HTTP endpoint
• SPARQL 1.1 protocol
• endpoints for query & update
• SPARQL explorer UI (Squebi)
with KiWi Triplestore:
• Translation of most SPARQL constructs into
native SQL for improved performance
21. Marmotta LDPath
• Query language designed for the Linked Data Cloud
• path based navigation starting at a resource, following
links across the Cloud
• limited expressivity (cf. SPARQL) but full Linked Data
support
@prefix foaf: <http://xmlns.com/foaf/0.1/>;
@prefix gn: <http://www.geonames.org/ontology#>;
name = foaf:firstName :: xsd:string;
friends = foaf:knows / fn:concat(foaf:firstName, " ", foaf:surname) :: xsd:string;
country = foaf:based_near / gn:parentCountry / gn:name :: xsd:string;
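To make the idea concrete, here is a toy evaluator for the path part of such expressions (in-memory graph, prefixed names kept as plain strings; an illustration of path traversal only, not the Marmotta LDPath engine):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// LDPath-style navigation: start at a resource, follow predicate links.
public class PathNavigation {

    /** Follow a chain of predicates from a start node through the graph. */
    static List<String> path(Map<String, Map<String, List<String>>> graph,
                             String start, String... predicates) {
        List<String> current = List.of(start);
        for (String p : predicates) {
            List<String> next = new ArrayList<>();
            for (String node : current) {
                next.addAll(graph.getOrDefault(node, Map.of())
                                 .getOrDefault(p, List.of()));
            }
            current = next;
        }
        return current;
    }

    /** Tiny example graph: subject -> predicate -> objects. */
    static Map<String, Map<String, List<String>>> sampleGraph() {
        return Map.of(
            "ex:alice", Map.of("foaf:based_near", List.of("ex:salzburg")),
            "ex:salzburg", Map.of("gn:parentCountry", List.of("ex:austria")),
            "ex:austria", Map.of("gn:name", List.of("Austria")));
    }

    public static void main(String[] args) {
        // corresponds to: country = foaf:based_near / gn:parentCountry / gn:name
        System.out.println(path(sampleGraph(), "ex:alice",
                "foaf:based_near", "gn:parentCountry", "gn:name"));
        // prints: [Austria]
    }
}
```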
22. Marmotta LDCache
• transparently access Linked Data resources from other
servers as if they were local
• support for wrapping some legacy data sources (e.g.
Facebook Graph)
• local triple cache, honors HTTP expiry and cache
headers
SPARQL does not work well with LDCache, use LDPath instead!
24. Marmotta Versioning
with KiWi Triplestore:
• transaction-based versioning of all changes to the triple
store
• implementation of Memento protocol for exploring
changes over time
• snapshot/wayback functionality (i.e. possibility to query
the state of the triple store at a given time in history)
25. Linked Media
is a “Web scale layer of structured,
interlinked media annotations (...) inspired
by the Linked Data movement for making
structured, interlinked descriptions of
resources better available online.”
Lyndon J. B. Nixon. The importance of linked media to the future web: lime 2013 keynote talk - a proposal for
the linked media research agenda. WWW Companion Volume, page 455-456. International World Wide Web
Conferences Steering Committee / ACM, (2013).
26. Linked Media - An example
“Give me the spatio-temporal snippet that shows Lewis
Jones right beside Connor Macfarlane”
27. Media Fragment URIs
"... a media-format independent, standard
means of addressing media fragments on the
Web using Uniform Resource Identifiers."
[W3C Recommendation: Media Fragments URI 1.0 (basic)]
http://test.org/video.mpg?t=10,20&xywh=10,20,30,40
i.e. the spatial region of size 30×40 pixels with its upper-left corner at (10px, 20px), taken from video.mpg on domain test.org between seconds 10 and 20.
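A sketch of how such a fragment identifier can be decomposed (class and field names are invented; a real implementation should follow the full W3C grammar, which also allows open-ended ranges and time units):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative parser for the basic t= and xywh= Media Fragment dimensions.
public class MediaFragment {
    final double start, end;   // temporal part: t=start,end (seconds)
    final int x, y, w, h;      // spatial part: xywh=x,y,w,h (pixels)

    MediaFragment(double start, double end, int x, int y, int w, int h) {
        this.start = start; this.end = end;
        this.x = x; this.y = y; this.w = w; this.h = h;
    }

    /** Parse a fragment query like "t=10,20&xywh=10,20,30,40". */
    static MediaFragment parse(String query) {
        Map<String, String> params = new HashMap<>();
        for (String pair : query.split("&")) {
            String[] kv = pair.split("=", 2);
            params.put(kv[0], kv[1]);
        }
        String[] t = params.getOrDefault("t", "0,0").split(",");
        String[] r = params.getOrDefault("xywh", "0,0,0,0").split(",");
        return new MediaFragment(
                Double.parseDouble(t[0]), Double.parseDouble(t[1]),
                Integer.parseInt(r[0]), Integer.parseInt(r[1]),
                Integer.parseInt(r[2]), Integer.parseInt(r[3]));
    }

    public static void main(String[] args) {
        MediaFragment f = MediaFragment.parse("t=10,20&xywh=10,20,30,40");
        System.out.println("seconds " + f.start + "-" + f.end
                + ", region " + f.w + "x" + f.h + " at (" + f.x + "," + f.y + ")");
        // prints: seconds 10.0-20.0, region 30x40 at (10,20)
    }
}
```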
28. Open Annotation Model
“The Open Annotation Core Data Model
specifies an interoperable framework for
creating associations between related
resources … .“
http://www.openannotation.org/spec/core/
29. SPARQL-MM
“... is a multimedia-extension for SPARQL 1.1
implemented for Sesame. By now it supports
relation and aggregation functions for Media
Fragments URI 1.0 … .”
https://github.com/tkurz/sparql-mm
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX mm: <http://linkedmultimedia.org/sparql-mm/ns/1.0.0/function#>
SELECT ?f1 ?f2 (mm:boundingBox(?f1,?f2) AS ?box) WHERE {
?f1 rdfs:label "a".
?f2 rdfs:label "b".
FILTER mm:rightBeside(?f1,?f2)
}
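The aggregation function used above, mm:boundingBox, conceptually returns the smallest spatial fragment covering both inputs. A minimal sketch of that computation on x,y,w,h rectangles (mirroring the idea, not the SPARQL-MM source):

```java
// Smallest rectangle covering two spatial fragments given as {x, y, w, h}.
public class BoundingBox {
    static int[] boundingBox(int[] a, int[] b) {
        int x1 = Math.min(a[0], b[0]);
        int y1 = Math.min(a[1], b[1]);
        int x2 = Math.max(a[0] + a[2], b[0] + b[2]);   // right edges
        int y2 = Math.max(a[1] + a[3], b[1] + b[3]);   // bottom edges
        return new int[]{x1, y1, x2 - x1, y2 - y1};
    }

    public static void main(String[] args) {
        // fragment "a" at (10,20) sized 30x40, fragment "b" at (50,10) sized 20x20
        int[] box = boundingBox(new int[]{10, 20, 30, 40}, new int[]{50, 10, 20, 20});
        System.out.println(box[0] + "," + box[1] + "," + box[2] + "," + box[3]);
        // prints: 10,10,60,50
    }
}
```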
31. Extending LDP to Media Fragments
public class ImageWebservice extends LdpWebService {
    @GET @Produces("image/*")
    public Response GET(..., @QueryParam("xywh") Rectangle rectangle) {
        if (rectangle != null) {
            // get mimetype for uri with LdpService
            // get binary data with LdpBinaryStoreService
            // read and crop image with ImageIO
            return Response.ok().header("Content-Type", mimetype).entity(image).build();
        }
        return super.GET(uriInfo, type, preferHeader);
    }
}
32. You can find and download the demo example code at https://github.com/wikier/apache-marmotta-tutorial-iswc2014.
33. thanks!
available under
Creative Commons Attribution 4.0
International License
http://marmotta.apache.org/
acknowledgments to:
MICO FP7 project (grant no. 610480)
Fusepool P3 project (grant no. 609696)